desh2608 committed
Commit 06747da • 1 parent: 217ffb9

update with averaged model

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. README.md +8 -29
  2. exp/cpu_jit.pt +1 -1
  3. exp/pretrained.pt +1 -1
  4. log/fast_beam_search/{cers-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → cers-dev_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  5. log/fast_beam_search/{cers-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → cers-dev_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  6. log/fast_beam_search/{cers-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → cers-dev_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  7. log/fast_beam_search/{cers-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → cers-test_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  8. log/fast_beam_search/{cers-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → cers-test_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  9. log/fast_beam_search/{cers-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → cers-test_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  10. log/fast_beam_search/log-decode-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8-2022-11-21-07-23-44 +366 -0
  11. log/fast_beam_search/log-decode-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8-2022-11-21-09-37-32 +376 -0
  12. log/fast_beam_search/log-decode-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8-2022-11-19-11-56-30 +0 -381
  13. log/fast_beam_search/{recogs-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → recogs-dev_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  14. log/fast_beam_search/{recogs-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → recogs-dev_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  15. log/fast_beam_search/{recogs-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → recogs-dev_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  16. log/fast_beam_search/{recogs-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → recogs-test_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  17. log/fast_beam_search/{recogs-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → recogs-test_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  18. log/fast_beam_search/{recogs-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → recogs-test_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  19. log/fast_beam_search/wer-summary-dev_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt +2 -0
  20. log/fast_beam_search/wer-summary-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -2
  21. log/fast_beam_search/wer-summary-dev_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt +2 -0
  22. log/fast_beam_search/wer-summary-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -2
  23. log/fast_beam_search/wer-summary-dev_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt +2 -0
  24. log/fast_beam_search/wer-summary-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -2
  25. log/fast_beam_search/wer-summary-test_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt +2 -0
  26. log/fast_beam_search/wer-summary-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -2
  27. log/fast_beam_search/wer-summary-test_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt +2 -0
  28. log/fast_beam_search/wer-summary-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -2
  29. log/fast_beam_search/wer-summary-test_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt +2 -0
  30. log/fast_beam_search/wer-summary-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -2
  31. log/fast_beam_search/{wers-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → wers-dev_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  32. log/fast_beam_search/{wers-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → wers-dev_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  33. log/fast_beam_search/{wers-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → wers-dev_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  34. log/fast_beam_search/{wers-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → wers-test_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  35. log/fast_beam_search/{wers-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → wers-test_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  36. log/fast_beam_search/{wers-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → wers-test_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} +0 -0
  37. log/greedy_search/cers-dev_gss-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt +0 -0
  38. log/greedy_search/cers-dev_ihm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt +0 -0
  39. log/greedy_search/cers-dev_sdm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt +0 -0
  40. log/greedy_search/cers-test_gss-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt +0 -0
  41. log/greedy_search/cers-test_ihm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt +0 -0
  42. log/greedy_search/cers-test_sdm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt +0 -0
  43. log/greedy_search/log-decode-epoch-14-avg-8-context-2-max-sym-per-frame-1-2022-11-21-08-54-32 +109 -0
  44. log/greedy_search/recogs-dev_gss-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt +0 -0
  45. log/greedy_search/recogs-dev_ihm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt +0 -0
  46. log/greedy_search/recogs-dev_sdm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt +0 -0
  47. log/greedy_search/recogs-test_gss-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt +0 -0
  48. log/greedy_search/recogs-test_ihm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt +0 -0
  49. log/greedy_search/recogs-test_sdm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt +0 -0
  50. log/greedy_search/wer-summary-dev_gss-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt +2 -0
README.md CHANGED
@@ -8,15 +8,15 @@ metrics:
 -
   name: "IHM test WER"
   type: wer
-  value: 18.06
+  value: 17.40
 -
   name: "SDM test WER"
   type: wer
-  value: 32.61
+  value: 32.21
 -
   name: "GSS test WER"
   type: wer
-  value: 23.03
+  value: 22.43
 tags:
 - k2
 - icefall
@@ -34,28 +34,7 @@ We pool data in the following 4 ways and train a single model on the pooled data
 (iv) GSS-enhanced array microphones
 
 Speed perturbation and MUSAN noise augmentation are additionally performed on the pooled
-data. Here are the statistics of the combined training data:
-
-```python
->>> cuts_train.describe()
-Cuts count: 1222053
-Total duration (hh:mm:ss): 905:00:28
-Speech duration (hh:mm:ss): 905:00:28 (99.9%)
-Duration statistics (seconds):
-mean 2.7
-std 2.8
-min 0.0
-25% 0.6
-50% 1.6
-75% 3.8
-99% 12.3
-99.5% 13.9
-99.9% 18.4
-max 36.8
-```
-
-**Note:** This recipe additionally uses [GSS](https://github.com/desh2608/gss) for enhancement
-of far-field array microphones, but this is optional (see `prepare.sh` for details).
+data.
 
 ## Performance Record
 
@@ -65,8 +44,8 @@ The following are decoded using `modified_beam_search`:
 
 | Evaluation set | dev WER | test WER |
 |--------------------------|------------|---------|
-| IHM | 19.23 | 18.06 |
-| SDM | 31.16 | 32.61 |
-| MDM (GSS-enhanced) | 22.08 | 23.03 |
+| IHM | 18.92 | 17.40 |
+| SDM | 31.25 | 32.21 |
+| MDM (GSS-enhanced) | 21.67 | 22.43 |
 
-See [RESULTS](/egs/ami/ASR/RESULTS.md) for details.
+See the [recipe](https://github.com/k2-fsa/icefall/tree/master/egs) for details.
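The commit message and the decode logs below refer to an averaged model ("Calculating the averaged model over epoch range from 6 (excluded) to 14", i.e. `--epoch 14 --avg 8`, eight checkpoints). As a rough illustrative sketch of what checkpoint averaging computes — plain Python dicts stand in for `torch` state-dict tensors, and `average_checkpoints` is a hypothetical helper, not icefall's actual API:

```python
# Illustrative sketch of epoch-range checkpoint averaging.
# Real recipes average model.state_dict() tensors element-wise;
# plain floats stand in here so the example is self-contained.

def average_checkpoints(state_dicts):
    """Return the element-wise mean of a list of parameter dicts."""
    n = len(state_dicts)
    return {k: sum(sd[k] for sd in state_dicts) / n for k in state_dicts[0]}

# Epochs 7..14 inclusive -> 8 checkpoints, matching --epoch 14 --avg 8.
checkpoints = [{"weight": float(epoch)} for epoch in range(7, 15)]
averaged = average_checkpoints(checkpoints)
print(averaged)  # {'weight': 10.5}
```

The averaged parameters are then saved as a single model, which is why only `exp/pretrained.pt` and `exp/cpu_jit.pt` change below.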
exp/cpu_jit.pt CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:838da7cac3c8a849c7f7e94fa7b334a4195fb96d075406939ced1cc50bfaffe1
+oid sha256:a6f9b562a46f21bb5c67b3d045d84a90019ad89b60d9276e796b62ef63a3b5a1
 size 281740990
exp/pretrained.pt CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3017e7a9b374c004224288e50ea5456e0c5481dd7a442d88c6f358c3fa8e5de3
+oid sha256:9c21981d08607c2c85aa7b567cf36bfe07ae2aa23f11a2c446cfa46ec6858419
 size 281766253
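The `.pt` diffs above touch only Git LFS pointer files: the repository stores a small text stub (version, `oid`, `size`) while the actual weights live in LFS storage, so swapping in the averaged model changes just the `oid` hash. A small sketch of reading such a pointer, using the `cpu_jit.pt` pointer contents shown above (the `parse_lfs_pointer` helper is illustrative, not part of git-lfs):

```python
# Parse a Git LFS pointer file into its key/value fields.
# Each line of the pointer has the form "<key> <value>".

def parse_lfs_pointer(text):
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:a6f9b562a46f21bb5c67b3d045d84a90019ad89b60d9276e796b62ef63a3b5a1
size 281740990
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 281740990
```

Because the `size` field is identical in both revisions, the old and new checkpoints happen to serialize to the same byte count; only the content hash differs.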
log/fast_beam_search/{cers-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → cers-dev_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff

log/fast_beam_search/{cers-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → cers-dev_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff

log/fast_beam_search/{cers-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → cers-dev_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff

log/fast_beam_search/{cers-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → cers-test_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff

log/fast_beam_search/{cers-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → cers-test_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff

log/fast_beam_search/{cers-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt → cers-test_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/log-decode-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8-2022-11-21-07-23-44 ADDED
@@ -0,0 +1,366 @@
+ 2022-11-21 07:23:44,328 INFO [decode.py:574] Decoding started
+ 2022-11-21 07:23:44,329 INFO [decode.py:580] Device: cuda:0
+ 2022-11-21 07:23:44,335 INFO [decode.py:590] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 100, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.21', 'k2-build-type': 'Debug', 'k2-with-cuda': True, 'k2-git-sha1': 'f271e82ef30f75fecbae44b163e1244e53def116', 'k2-git-date': 'Fri Oct 28 05:02:16 2022', 'lhotse-version': '1.9.0.dev+git.97bf4b0.dirty', 'torch-version': '1.10.0+cu111', 'torch-cuda-available': True, 'torch-cuda-version': '11.1', 'python-version': '3.8', 'icefall-git-branch': 'ami_recipe', 'icefall-git-sha1': 'd1b5a16-clean', 'icefall-git-date': 'Sun Nov 20 22:32:57 2022', 'icefall-path': '/exp/draj/mini_scale_2022/icefall', 'k2-path': '/exp/draj/mini_scale_2022/k2/k2/python/k2/__init__.py', 'lhotse-path': '/exp/draj/mini_scale_2022/lhotse/lhotse/__init__.py', 'hostname': 'r3n07', 'IP address': '10.1.3.7'}, 'epoch': 14, 'iter': 0, 'avg': 8, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7/exp/v2'), 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 4, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'manifest_dir': PosixPath('data/manifests'), 'enable_musan': True, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'max_duration': 500, 'max_cuts': None, 'num_buckets': 50, 'on_the_fly_feats': False, 'shuffle': True, 'num_workers': 8, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'ihm_only': False, 'res_dir': PosixPath('pruned_transducer_stateless7/exp/v2/fast_beam_search'), 'suffix': 'epoch-14-avg-8-beam-4-max-contexts-4-max-states-8', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
+ 2022-11-21 07:23:44,335 INFO [decode.py:592] About to create model
+ 2022-11-21 07:23:44,944 INFO [zipformer.py:179] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
+ 2022-11-21 07:23:44,954 INFO [decode.py:659] Calculating the averaged model over epoch range from 6 (excluded) to 14
+ 2022-11-21 07:24:19,994 INFO [decode.py:694] Number of model parameters: 70369391
+ 2022-11-21 07:24:19,995 INFO [asr_datamodule.py:392] About to get AMI IHM dev cuts
+ 2022-11-21 07:24:20,001 INFO [asr_datamodule.py:413] About to get AMI IHM test cuts
+ 2022-11-21 07:24:20,003 INFO [asr_datamodule.py:398] About to get AMI SDM dev cuts
+ 2022-11-21 07:24:20,005 INFO [asr_datamodule.py:419] About to get AMI SDM test cuts
+ 2022-11-21 07:24:20,006 INFO [asr_datamodule.py:407] About to get AMI GSS-enhanced dev cuts
+ 2022-11-21 07:24:20,008 INFO [asr_datamodule.py:428] About to get AMI GSS-enhanced test cuts
+ 2022-11-21 07:24:22,027 INFO [decode.py:726] Decoding dev_ihm
+ 2022-11-21 07:24:25,205 INFO [decode.py:469] batch 0/?, cuts processed until now is 72
+ 2022-11-21 07:24:27,591 INFO [decode.py:469] batch 2/?, cuts processed until now is 537
+ 2022-11-21 07:24:30,197 INFO [decode.py:469] batch 4/?, cuts processed until now is 689
+ 2022-11-21 07:24:32,818 INFO [decode.py:469] batch 6/?, cuts processed until now is 823
+ 2022-11-21 07:24:35,376 INFO [decode.py:469] batch 8/?, cuts processed until now is 985
+ 2022-11-21 07:24:39,599 INFO [decode.py:469] batch 10/?, cuts processed until now is 1088
+ 2022-11-21 07:24:42,143 INFO [decode.py:469] batch 12/?, cuts processed until now is 1263
+ 2022-11-21 07:24:44,281 INFO [decode.py:469] batch 14/?, cuts processed until now is 1521
+ 2022-11-21 07:24:46,328 INFO [decode.py:469] batch 16/?, cuts processed until now is 1903
+ 2022-11-21 07:24:49,625 INFO [decode.py:469] batch 18/?, cuts processed until now is 2032
+ 2022-11-21 07:24:52,952 INFO [decode.py:469] batch 20/?, cuts processed until now is 2117
+ 2022-11-21 07:24:55,096 INFO [decode.py:469] batch 22/?, cuts processed until now is 2375
+ 2022-11-21 07:24:57,164 INFO [decode.py:469] batch 24/?, cuts processed until now is 2824
+ 2022-11-21 07:24:59,771 INFO [decode.py:469] batch 26/?, cuts processed until now is 2969
+ 2022-11-21 07:25:02,174 INFO [decode.py:469] batch 28/?, cuts processed until now is 3245
+ 2022-11-21 07:25:04,637 INFO [decode.py:469] batch 30/?, cuts processed until now is 3401
+ 2022-11-21 07:25:07,830 INFO [decode.py:469] batch 32/?, cuts processed until now is 3519
+ 2022-11-21 07:25:10,391 INFO [decode.py:469] batch 34/?, cuts processed until now is 3694
+ 2022-11-21 07:25:13,058 INFO [decode.py:469] batch 36/?, cuts processed until now is 3818
+ 2022-11-21 07:25:15,685 INFO [decode.py:469] batch 38/?, cuts processed until now is 3970
+ 2022-11-21 07:25:17,743 INFO [decode.py:469] batch 40/?, cuts processed until now is 4750
+ 2022-11-21 07:25:20,098 INFO [decode.py:469] batch 42/?, cuts processed until now is 5038
+ 2022-11-21 07:25:23,210 INFO [decode.py:469] batch 44/?, cuts processed until now is 5144
+ 2022-11-21 07:25:26,314 INFO [decode.py:469] batch 46/?, cuts processed until now is 5253
+ 2022-11-21 07:25:29,565 INFO [decode.py:469] batch 48/?, cuts processed until now is 5672
+ 2022-11-21 07:25:29,826 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.8204, 2.7644, 2.5354, 2.9453, 2.4156, 2.6087, 2.6265, 3.2272],
+ device='cuda:0'), covar=tensor([0.1307, 0.2458, 0.3112, 0.2146, 0.2234, 0.1893, 0.2297, 0.2367],
+ device='cuda:0'), in_proj_covar=tensor([0.0085, 0.0085, 0.0091, 0.0079, 0.0076, 0.0081, 0.0082, 0.0061],
+ device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002],
+ device='cuda:0')
+ 2022-11-21 07:25:31,873 INFO [decode.py:469] batch 50/?, cuts processed until now is 5878
+ 2022-11-21 07:25:33,840 INFO [decode.py:469] batch 52/?, cuts processed until now is 6260
+ 2022-11-21 07:25:35,858 INFO [decode.py:469] batch 54/?, cuts processed until now is 6808
+ 2022-11-21 07:25:38,267 INFO [decode.py:469] batch 56/?, cuts processed until now is 7117
+ 2022-11-21 07:25:40,248 INFO [decode.py:469] batch 58/?, cuts processed until now is 7565
+ 2022-11-21 07:25:42,301 INFO [decode.py:469] batch 60/?, cuts processed until now is 8078
+ 2022-11-21 07:25:44,330 INFO [decode.py:469] batch 62/?, cuts processed until now is 8626
+ 2022-11-21 07:25:46,604 INFO [decode.py:469] batch 64/?, cuts processed until now is 9174
+ 2022-11-21 07:25:48,737 INFO [decode.py:469] batch 66/?, cuts processed until now is 9455
+ 2022-11-21 07:25:50,787 INFO [decode.py:469] batch 68/?, cuts processed until now is 9968
+ 2022-11-21 07:25:52,859 INFO [decode.py:469] batch 70/?, cuts processed until now is 10481
+ 2022-11-21 07:25:54,965 INFO [decode.py:469] batch 72/?, cuts processed until now is 11264
+ 2022-11-21 07:25:56,861 INFO [decode.py:469] batch 74/?, cuts processed until now is 11669
+ 2022-11-21 07:25:58,451 INFO [decode.py:469] batch 76/?, cuts processed until now is 11761
+ 2022-11-21 07:26:00,024 INFO [decode.py:469] batch 78/?, cuts processed until now is 11843
+ 2022-11-21 07:26:01,765 INFO [decode.py:469] batch 80/?, cuts processed until now is 11956
+ 2022-11-21 07:26:03,121 INFO [decode.py:469] batch 82/?, cuts processed until now is 12467
+ 2022-11-21 07:26:07,061 INFO [decode.py:469] batch 84/?, cuts processed until now is 12586
+ 2022-11-21 07:26:08,969 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-dev_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 07:26:09,125 INFO [utils.py:530] [dev_ihm-beam_4_max_contexts_4_max_states_8] %WER 19.44% [18459 / 94940, 2783 ins, 3992 del, 11684 sub ]
+ 2022-11-21 07:26:09,814 INFO [utils.py:530] [dev_ihm-beam_4_max_contexts_4_max_states_8] %WER 12.30% [45497 / 369873, 10772 ins, 17562 del, 17163 sub ]
+ 2022-11-21 07:26:10,877 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-dev_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 07:26:10,885 INFO [decode.py:531]
+ For dev_ihm, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 19.44 12.3 best for dev_ihm
+
+ 2022-11-21 07:26:10,897 INFO [decode.py:726] Decoding test_ihm
+ 2022-11-21 07:26:14,017 INFO [decode.py:469] batch 0/?, cuts processed until now is 69
+ 2022-11-21 07:26:16,531 INFO [decode.py:469] batch 2/?, cuts processed until now is 555
+ 2022-11-21 07:26:19,207 INFO [decode.py:469] batch 4/?, cuts processed until now is 703
+ 2022-11-21 07:26:21,907 INFO [decode.py:469] batch 6/?, cuts processed until now is 830
+ 2022-11-21 07:26:24,538 INFO [decode.py:469] batch 8/?, cuts processed until now is 987
+ 2022-11-21 07:26:28,097 INFO [decode.py:469] batch 10/?, cuts processed until now is 1095
+ 2022-11-21 07:26:30,701 INFO [decode.py:469] batch 12/?, cuts processed until now is 1267
+ 2022-11-21 07:26:32,859 INFO [decode.py:469] batch 14/?, cuts processed until now is 1532
+ 2022-11-21 07:26:34,925 INFO [decode.py:469] batch 16/?, cuts processed until now is 1931
+ 2022-11-21 07:26:38,720 INFO [decode.py:469] batch 18/?, cuts processed until now is 2055
+ 2022-11-21 07:26:42,970 INFO [decode.py:469] batch 20/?, cuts processed until now is 2124
+ 2022-11-21 07:26:45,358 INFO [decode.py:469] batch 22/?, cuts processed until now is 2388
+ 2022-11-21 07:26:47,447 INFO [decode.py:469] batch 24/?, cuts processed until now is 2856
+ 2022-11-21 07:26:50,554 INFO [decode.py:469] batch 26/?, cuts processed until now is 2996
+ 2022-11-21 07:26:53,323 INFO [decode.py:469] batch 28/?, cuts processed until now is 3278
+ 2022-11-21 07:26:55,819 INFO [decode.py:469] batch 30/?, cuts processed until now is 3430
+ 2022-11-21 07:26:59,582 INFO [decode.py:469] batch 32/?, cuts processed until now is 3535
+ 2022-11-21 07:27:02,557 INFO [decode.py:469] batch 34/?, cuts processed until now is 3706
+ 2022-11-21 07:27:05,346 INFO [decode.py:469] batch 36/?, cuts processed until now is 3822
+ 2022-11-21 07:27:08,027 INFO [decode.py:469] batch 38/?, cuts processed until now is 3969
+ 2022-11-21 07:27:11,470 INFO [decode.py:469] batch 40/?, cuts processed until now is 4411
+ 2022-11-21 07:27:11,769 INFO [zipformer.py:1414] attn_weights_entropy = tensor([1.9139, 2.2858, 1.8880, 1.5129, 1.7399, 2.4293, 2.5477, 2.5261],
+ device='cuda:0'), covar=tensor([0.0995, 0.0696, 0.1996, 0.1805, 0.0938, 0.0902, 0.0388, 0.0921],
+ device='cuda:0'), in_proj_covar=tensor([0.0155, 0.0166, 0.0141, 0.0170, 0.0150, 0.0167, 0.0133, 0.0161],
+ device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003],
+ device='cuda:0')
+ 2022-11-21 07:27:13,771 INFO [decode.py:469] batch 42/?, cuts processed until now is 5058
+ 2022-11-21 07:27:15,998 INFO [decode.py:469] batch 44/?, cuts processed until now is 5544
+ 2022-11-21 07:27:18,607 INFO [decode.py:469] batch 46/?, cuts processed until now is 5685
+ 2022-11-21 07:27:20,960 INFO [decode.py:469] batch 48/?, cuts processed until now is 5890
+ 2022-11-21 07:27:23,447 INFO [decode.py:469] batch 50/?, cuts processed until now is 6372
+ 2022-11-21 07:27:25,569 INFO [decode.py:469] batch 52/?, cuts processed until now is 6706
+ 2022-11-21 07:27:27,547 INFO [decode.py:469] batch 54/?, cuts processed until now is 7105
+ 2022-11-21 07:27:31,263 INFO [decode.py:469] batch 56/?, cuts processed until now is 7290
+ 2022-11-21 07:27:33,635 INFO [decode.py:469] batch 58/?, cuts processed until now is 8116
+ 2022-11-21 07:27:37,667 INFO [decode.py:469] batch 60/?, cuts processed until now is 8258
+ 2022-11-21 07:27:39,712 INFO [decode.py:469] batch 62/?, cuts processed until now is 8794
+ 2022-11-21 07:27:41,794 INFO [decode.py:469] batch 64/?, cuts processed until now is 9330
+ 2022-11-21 07:27:45,450 INFO [decode.py:469] batch 66/?, cuts processed until now is 9476
+ 2022-11-21 07:27:48,517 INFO [decode.py:469] batch 68/?, cuts processed until now is 9921
+ 2022-11-21 07:27:50,540 INFO [decode.py:469] batch 70/?, cuts processed until now is 10251
+ 2022-11-21 07:27:53,295 INFO [decode.py:469] batch 72/?, cuts processed until now is 10679
+ 2022-11-21 07:27:55,643 INFO [decode.py:469] batch 74/?, cuts processed until now is 10794
+ 2022-11-21 07:27:57,283 INFO [decode.py:469] batch 76/?, cuts processed until now is 11039
+ 2022-11-21 07:27:58,317 INFO [decode.py:469] batch 78/?, cuts processed until now is 11155
+ 2022-11-21 07:27:59,870 INFO [decode.py:469] batch 80/?, cuts processed until now is 11600
+ 2022-11-21 07:28:02,120 INFO [decode.py:469] batch 82/?, cuts processed until now is 12041
+ 2022-11-21 07:28:03,513 INFO [decode.py:469] batch 84/?, cuts processed until now is 12110
+ 2022-11-21 07:28:03,713 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-test_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 07:28:03,850 INFO [utils.py:530] [test_ihm-beam_4_max_contexts_4_max_states_8] %WER 18.04% [16174 / 89659, 1994 ins, 4043 del, 10137 sub ]
+ 2022-11-21 07:28:04,562 INFO [utils.py:530] [test_ihm-beam_4_max_contexts_4_max_states_8] %WER 11.30% [40040 / 354205, 8698 ins, 16856 del, 14486 sub ]
+ 2022-11-21 07:28:05,419 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-test_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 07:28:05,420 INFO [decode.py:531]
+ For test_ihm, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 18.04 11.3 best for test_ihm
+
+ 2022-11-21 07:28:05,423 INFO [decode.py:726] Decoding dev_sdm
129
+ 2022-11-21 07:28:08,299 INFO [decode.py:469] batch 0/?, cuts processed until now is 71
130
+ 2022-11-21 07:28:10,682 INFO [decode.py:469] batch 2/?, cuts processed until now is 535
131
+ 2022-11-21 07:28:13,287 INFO [decode.py:469] batch 4/?, cuts processed until now is 686
132
+ 2022-11-21 07:28:15,897 INFO [decode.py:469] batch 6/?, cuts processed until now is 819
133
+ 2022-11-21 07:28:18,416 INFO [decode.py:469] batch 8/?, cuts processed until now is 980
134
+ 2022-11-21 07:28:22,683 INFO [decode.py:469] batch 10/?, cuts processed until now is 1083
135
+ 2022-11-21 07:28:25,237 INFO [decode.py:469] batch 12/?, cuts processed until now is 1257
136
+ 2022-11-21 07:28:27,455 INFO [decode.py:469] batch 14/?, cuts processed until now is 1513
137
+ 2022-11-21 07:28:29,522 INFO [decode.py:469] batch 16/?, cuts processed until now is 1892
138
+ 2022-11-21 07:28:29,759 INFO [zipformer.py:1414] attn_weights_entropy = tensor([3.2296, 5.3755, 3.8462, 5.1250, 4.2310, 3.9708, 3.6648, 4.8498],
139
+ device='cuda:0'), covar=tensor([0.1051, 0.0208, 0.0721, 0.0170, 0.0474, 0.0684, 0.1304, 0.0173],
140
+ device='cuda:0'), in_proj_covar=tensor([0.0148, 0.0117, 0.0145, 0.0122, 0.0157, 0.0156, 0.0153, 0.0133],
141
+ device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003],
142
+ device='cuda:0')
143
+ 2022-11-21 07:28:32,783 INFO [decode.py:469] batch 18/?, cuts processed until now is 2020
144
+ 2022-11-21 07:28:36,128 INFO [decode.py:469] batch 20/?, cuts processed until now is 2106
145
+ 2022-11-21 07:28:38,309 INFO [decode.py:469] batch 22/?, cuts processed until now is 2362
146
+ 2022-11-21 07:28:40,406 INFO [decode.py:469] batch 24/?, cuts processed until now is 2807
147
+ 2022-11-21 07:28:43,055 INFO [decode.py:469] batch 26/?, cuts processed until now is 2952
148
+ 2022-11-21 07:28:45,585 INFO [decode.py:469] batch 28/?, cuts processed until now is 3226
149
+ 2022-11-21 07:28:48,094 INFO [decode.py:469] batch 30/?, cuts processed until now is 3381
150
+ 2022-11-21 07:28:51,280 INFO [decode.py:469] batch 32/?, cuts processed until now is 3499
151
+ 2022-11-21 07:28:53,904 INFO [decode.py:469] batch 34/?, cuts processed until now is 3673
152
+ 2022-11-21 07:28:56,660 INFO [decode.py:469] batch 36/?, cuts processed until now is 3797
153
+ 2022-11-21 07:28:59,290 INFO [decode.py:469] batch 38/?, cuts processed until now is 3948
154
+ 2022-11-21 07:29:01,372 INFO [decode.py:469] batch 40/?, cuts processed until now is 4722
155
+ 2022-11-21 07:29:03,719 INFO [decode.py:469] batch 42/?, cuts processed until now is 5007
156
+ 2022-11-21 07:29:05,197 INFO [zipformer.py:1414] attn_weights_entropy = tensor([3.0721, 5.2833, 3.7184, 4.9679, 4.1797, 3.8883, 3.4355, 4.6723],
157
+ device='cuda:0'), covar=tensor([0.1258, 0.0157, 0.0812, 0.0174, 0.0420, 0.0680, 0.1516, 0.0198],
158
+ device='cuda:0'), in_proj_covar=tensor([0.0148, 0.0117, 0.0145, 0.0122, 0.0157, 0.0156, 0.0153, 0.0133],
159
+ device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003],
160
+ device='cuda:0')
161
+ 2022-11-21 07:29:06,849 INFO [decode.py:469] batch 44/?, cuts processed until now is 5112
162
+ 2022-11-21 07:29:10,007 INFO [decode.py:469] batch 46/?, cuts processed until now is 5219
+ 2022-11-21 07:29:13,149 INFO [decode.py:469] batch 48/?, cuts processed until now is 5636
+ 2022-11-21 07:29:15,458 INFO [decode.py:469] batch 50/?, cuts processed until now is 5842
+ 2022-11-21 07:29:17,633 INFO [decode.py:469] batch 52/?, cuts processed until now is 6222
+ 2022-11-21 07:29:19,688 INFO [decode.py:469] batch 54/?, cuts processed until now is 6766
+ 2022-11-21 07:29:22,232 INFO [decode.py:469] batch 56/?, cuts processed until now is 7072
+ 2022-11-21 07:29:24,213 INFO [decode.py:469] batch 58/?, cuts processed until now is 7518
+ 2022-11-21 07:29:26,268 INFO [decode.py:469] batch 60/?, cuts processed until now is 8027
+ 2022-11-21 07:29:28,302 INFO [decode.py:469] batch 62/?, cuts processed until now is 8571
+ 2022-11-21 07:29:30,453 INFO [decode.py:469] batch 64/?, cuts processed until now is 9115
+ 2022-11-21 07:29:32,592 INFO [decode.py:469] batch 66/?, cuts processed until now is 9395
+ 2022-11-21 07:29:34,615 INFO [decode.py:469] batch 68/?, cuts processed until now is 9904
+ 2022-11-21 07:29:36,669 INFO [decode.py:469] batch 70/?, cuts processed until now is 10413
+ 2022-11-21 07:29:38,819 INFO [decode.py:469] batch 72/?, cuts processed until now is 11190
+ 2022-11-21 07:29:40,798 INFO [decode.py:469] batch 74/?, cuts processed until now is 11589
+ 2022-11-21 07:29:42,439 INFO [decode.py:469] batch 76/?, cuts processed until now is 11699
+ 2022-11-21 07:29:44,155 INFO [decode.py:469] batch 78/?, cuts processed until now is 11799
+ 2022-11-21 07:29:45,602 INFO [decode.py:469] batch 80/?, cuts processed until now is 11889
+ 2022-11-21 07:29:47,113 INFO [decode.py:469] batch 82/?, cuts processed until now is 12461
+ 2022-11-21 07:29:48,833 INFO [decode.py:469] batch 84/?, cuts processed until now is 12568
+ 2022-11-21 07:29:53,234 INFO [decode.py:469] batch 86/?, cuts processed until now is 12601
+ 2022-11-21 07:29:53,462 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-dev_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 07:29:53,616 INFO [utils.py:530] [dev_sdm-beam_4_max_contexts_4_max_states_8] %WER 31.11% [29537 / 94940, 4266 ins, 7752 del, 17519 sub ]
+ 2022-11-21 07:29:54,425 INFO [utils.py:530] [dev_sdm-beam_4_max_contexts_4_max_states_8] %WER 22.60% [83608 / 369873, 18843 ins, 33372 del, 31393 sub ]
+ 2022-11-21 07:29:55,361 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-dev_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 07:29:55,362 INFO [decode.py:531]
+ For dev_sdm, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 31.11 22.6 best for dev_sdm
+
+ 2022-11-21 07:29:55,365 INFO [decode.py:726] Decoding test_sdm
+ 2022-11-21 07:29:58,300 INFO [decode.py:469] batch 0/?, cuts processed until now is 69
+ 2022-11-21 07:30:00,729 INFO [decode.py:469] batch 2/?, cuts processed until now is 555
+ 2022-11-21 07:30:03,378 INFO [decode.py:469] batch 4/?, cuts processed until now is 703
+ 2022-11-21 07:30:06,055 INFO [decode.py:469] batch 6/?, cuts processed until now is 831
+ 2022-11-21 07:30:08,668 INFO [decode.py:469] batch 8/?, cuts processed until now is 988
+ 2022-11-21 07:30:12,581 INFO [decode.py:469] batch 10/?, cuts processed until now is 1096
+ 2022-11-21 07:30:15,300 INFO [decode.py:469] batch 12/?, cuts processed until now is 1268
+ 2022-11-21 07:30:17,517 INFO [decode.py:469] batch 14/?, cuts processed until now is 1533
+ 2022-11-21 07:30:19,563 INFO [decode.py:469] batch 16/?, cuts processed until now is 1932
+ 2022-11-21 07:30:23,125 INFO [decode.py:469] batch 18/?, cuts processed until now is 2057
+ 2022-11-21 07:30:27,346 INFO [decode.py:469] batch 20/?, cuts processed until now is 2126
+ 2022-11-21 07:30:29,550 INFO [decode.py:469] batch 22/?, cuts processed until now is 2390
+ 2022-11-21 07:30:31,498 INFO [decode.py:469] batch 24/?, cuts processed until now is 2858
+ 2022-11-21 07:30:34,241 INFO [decode.py:469] batch 26/?, cuts processed until now is 2998
+ 2022-11-21 07:30:36,726 INFO [decode.py:469] batch 28/?, cuts processed until now is 3280
+ 2022-11-21 07:30:39,207 INFO [decode.py:469] batch 30/?, cuts processed until now is 3432
+ 2022-11-21 07:30:42,945 INFO [decode.py:469] batch 32/?, cuts processed until now is 3537
+ 2022-11-21 07:30:45,682 INFO [decode.py:469] batch 34/?, cuts processed until now is 3709
+ 2022-11-21 07:30:48,451 INFO [decode.py:469] batch 36/?, cuts processed until now is 3825
+ 2022-11-21 07:30:50,144 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.6736, 4.3904, 3.2157, 2.0058, 3.9507, 1.8350, 3.8480, 2.5184],
+ device='cuda:0'), covar=tensor([0.1310, 0.0095, 0.0785, 0.2064, 0.0167, 0.1716, 0.0194, 0.1407],
+ device='cuda:0'), in_proj_covar=tensor([0.0112, 0.0093, 0.0102, 0.0107, 0.0090, 0.0114, 0.0086, 0.0106],
+ device='cuda:0'), out_proj_covar=tensor([0.0005, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004],
+ device='cuda:0')
+ 2022-11-21 07:30:51,093 INFO [decode.py:469] batch 38/?, cuts processed until now is 3972
+ 2022-11-21 07:30:53,910 INFO [decode.py:469] batch 40/?, cuts processed until now is 4410
+ 2022-11-21 07:30:56,064 INFO [decode.py:469] batch 42/?, cuts processed until now is 5060
+ 2022-11-21 07:30:58,290 INFO [decode.py:469] batch 44/?, cuts processed until now is 5546
+ 2022-11-21 07:31:00,886 INFO [decode.py:469] batch 46/?, cuts processed until now is 5687
+ 2022-11-21 07:31:03,150 INFO [decode.py:469] batch 48/?, cuts processed until now is 5893
+ 2022-11-21 07:31:05,780 INFO [decode.py:469] batch 50/?, cuts processed until now is 6379
+ 2022-11-21 07:31:07,885 INFO [decode.py:469] batch 52/?, cuts processed until now is 6713
+ 2022-11-21 07:31:09,866 INFO [decode.py:469] batch 54/?, cuts processed until now is 7112
+ 2022-11-21 07:31:13,533 INFO [decode.py:469] batch 56/?, cuts processed until now is 7298
+ 2022-11-21 07:31:15,680 INFO [decode.py:469] batch 58/?, cuts processed until now is 8130
+ 2022-11-21 07:31:19,632 INFO [decode.py:469] batch 60/?, cuts processed until now is 8273
+ 2022-11-21 07:31:21,656 INFO [decode.py:469] batch 62/?, cuts processed until now is 8813
+ 2022-11-21 07:31:23,663 INFO [decode.py:469] batch 64/?, cuts processed until now is 9353
+ 2022-11-21 07:31:27,304 INFO [decode.py:469] batch 66/?, cuts processed until now is 9500
+ 2022-11-21 07:31:30,529 INFO [decode.py:469] batch 68/?, cuts processed until now is 9944
+ 2022-11-21 07:31:32,566 INFO [decode.py:469] batch 70/?, cuts processed until now is 10274
+ 2022-11-21 07:31:35,274 INFO [decode.py:469] batch 72/?, cuts processed until now is 10711
+ 2022-11-21 07:31:37,422 INFO [decode.py:469] batch 74/?, cuts processed until now is 10820
+ 2022-11-21 07:31:38,986 INFO [decode.py:469] batch 76/?, cuts processed until now is 11076
+ 2022-11-21 07:31:40,050 INFO [decode.py:469] batch 78/?, cuts processed until now is 11209
+ 2022-11-21 07:31:41,518 INFO [decode.py:469] batch 80/?, cuts processed until now is 11651
+ 2022-11-21 07:31:43,710 INFO [decode.py:469] batch 82/?, cuts processed until now is 12070
+ 2022-11-21 07:31:45,094 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-test_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 07:31:45,239 INFO [utils.py:530] [test_sdm-beam_4_max_contexts_4_max_states_8] %WER 32.10% [28784 / 89659, 3596 ins, 8598 del, 16590 sub ]
+ 2022-11-21 07:31:46,024 INFO [utils.py:530] [test_sdm-beam_4_max_contexts_4_max_states_8] %WER 23.50% [83238 / 354205, 17319 ins, 35917 del, 30002 sub ]
+ 2022-11-21 07:31:46,923 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-test_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 07:31:46,924 INFO [decode.py:531]
+ For test_sdm, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 32.1 23.5 best for test_sdm
+
+ 2022-11-21 07:31:46,927 INFO [decode.py:726] Decoding dev_gss
+ 2022-11-21 07:31:49,806 INFO [decode.py:469] batch 0/?, cuts processed until now is 71
+ 2022-11-21 07:31:52,154 INFO [decode.py:469] batch 2/?, cuts processed until now is 535
+ 2022-11-21 07:31:54,746 INFO [decode.py:469] batch 4/?, cuts processed until now is 686
+ 2022-11-21 07:31:56,352 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.9126, 2.7834, 2.6260, 2.9843, 2.3963, 2.7063, 2.7318, 3.2210],
+ device='cuda:0'), covar=tensor([0.1129, 0.2337, 0.2865, 0.1633, 0.2293, 0.1222, 0.1967, 0.1578],
+ device='cuda:0'), in_proj_covar=tensor([0.0085, 0.0085, 0.0091, 0.0079, 0.0076, 0.0081, 0.0082, 0.0061],
+ device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002],
+ device='cuda:0')
+ 2022-11-21 07:31:57,361 INFO [decode.py:469] batch 6/?, cuts processed until now is 819
+ 2022-11-21 07:31:59,997 INFO [decode.py:469] batch 8/?, cuts processed until now is 980
+ 2022-11-21 07:32:04,080 INFO [decode.py:469] batch 10/?, cuts processed until now is 1083
+ 2022-11-21 07:32:06,614 INFO [decode.py:469] batch 12/?, cuts processed until now is 1257
+ 2022-11-21 07:32:08,800 INFO [decode.py:469] batch 14/?, cuts processed until now is 1513
+ 2022-11-21 07:32:10,784 INFO [decode.py:469] batch 16/?, cuts processed until now is 1892
+ 2022-11-21 07:32:14,062 INFO [decode.py:469] batch 18/?, cuts processed until now is 2020
+ 2022-11-21 07:32:17,418 INFO [decode.py:469] batch 20/?, cuts processed until now is 2106
+ 2022-11-21 07:32:19,570 INFO [decode.py:469] batch 22/?, cuts processed until now is 2362
+ 2022-11-21 07:32:21,526 INFO [decode.py:469] batch 24/?, cuts processed until now is 2807
+ 2022-11-21 07:32:24,206 INFO [decode.py:469] batch 26/?, cuts processed until now is 2952
+ 2022-11-21 07:32:26,668 INFO [decode.py:469] batch 28/?, cuts processed until now is 3226
+ 2022-11-21 07:32:29,184 INFO [decode.py:469] batch 30/?, cuts processed until now is 3381
+ 2022-11-21 07:32:32,300 INFO [decode.py:469] batch 32/?, cuts processed until now is 3499
+ 2022-11-21 07:32:35,016 INFO [decode.py:469] batch 34/?, cuts processed until now is 3673
+ 2022-11-21 07:32:37,774 INFO [decode.py:469] batch 36/?, cuts processed until now is 3797
+ 2022-11-21 07:32:40,404 INFO [decode.py:469] batch 38/?, cuts processed until now is 3948
+ 2022-11-21 07:32:42,407 INFO [decode.py:469] batch 40/?, cuts processed until now is 4722
+ 2022-11-21 07:32:44,821 INFO [decode.py:469] batch 42/?, cuts processed until now is 5007
+ 2022-11-21 07:32:44,898 INFO [zipformer.py:1414] attn_weights_entropy = tensor([4.6719, 4.8682, 4.8357, 4.7075, 4.3808, 4.2843, 5.2940, 4.7127],
+ device='cuda:0'), covar=tensor([0.0299, 0.0504, 0.0213, 0.1114, 0.0385, 0.0185, 0.0502, 0.0335],
+ device='cuda:0'), in_proj_covar=tensor([0.0075, 0.0097, 0.0083, 0.0108, 0.0078, 0.0067, 0.0133, 0.0090],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0003, 0.0002],
+ device='cuda:0')
+ 2022-11-21 07:32:47,969 INFO [decode.py:469] batch 44/?, cuts processed until now is 5112
+ 2022-11-21 07:32:51,111 INFO [decode.py:469] batch 46/?, cuts processed until now is 5219
+ 2022-11-21 07:32:54,156 INFO [decode.py:469] batch 48/?, cuts processed until now is 5636
+ 2022-11-21 07:32:56,833 INFO [decode.py:469] batch 50/?, cuts processed until now is 5842
+ 2022-11-21 07:32:58,983 INFO [decode.py:469] batch 52/?, cuts processed until now is 6222
+ 2022-11-21 07:33:01,171 INFO [decode.py:469] batch 54/?, cuts processed until now is 6766
+ 2022-11-21 07:33:03,409 INFO [decode.py:469] batch 56/?, cuts processed until now is 7072
+ 2022-11-21 07:33:05,639 INFO [decode.py:469] batch 58/?, cuts processed until now is 7518
+ 2022-11-21 07:33:07,714 INFO [decode.py:469] batch 60/?, cuts processed until now is 8027
+ 2022-11-21 07:33:09,719 INFO [decode.py:469] batch 62/?, cuts processed until now is 8571
+ 2022-11-21 07:33:11,794 INFO [decode.py:469] batch 64/?, cuts processed until now is 9115
+ 2022-11-21 07:33:14,178 INFO [decode.py:469] batch 66/?, cuts processed until now is 9395
+ 2022-11-21 07:33:16,211 INFO [decode.py:469] batch 68/?, cuts processed until now is 9904
+ 2022-11-21 07:33:18,244 INFO [decode.py:469] batch 70/?, cuts processed until now is 10413
+ 2022-11-21 07:33:20,203 INFO [decode.py:469] batch 72/?, cuts processed until now is 11190
+ 2022-11-21 07:33:22,234 INFO [decode.py:469] batch 74/?, cuts processed until now is 11589
+ 2022-11-21 07:33:23,888 INFO [decode.py:469] batch 76/?, cuts processed until now is 11699
+ 2022-11-21 07:33:25,565 INFO [decode.py:469] batch 78/?, cuts processed until now is 11799
+ 2022-11-21 07:33:27,007 INFO [decode.py:469] batch 80/?, cuts processed until now is 11889
+ 2022-11-21 07:33:28,516 INFO [decode.py:469] batch 82/?, cuts processed until now is 12461
+ 2022-11-21 07:33:30,315 INFO [decode.py:469] batch 84/?, cuts processed until now is 12568
+ 2022-11-21 07:33:34,767 INFO [decode.py:469] batch 86/?, cuts processed until now is 12601
+ 2022-11-21 07:33:34,997 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-dev_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 07:33:35,158 INFO [utils.py:530] [dev_gss-beam_4_max_contexts_4_max_states_8] %WER 22.21% [21087 / 94940, 2793 ins, 4898 del, 13396 sub ]
+ 2022-11-21 07:33:36,016 INFO [utils.py:530] [dev_gss-beam_4_max_contexts_4_max_states_8] %WER 14.58% [53945 / 369873, 11680 ins, 21193 del, 21072 sub ]
+ 2022-11-21 07:33:36,931 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-dev_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 07:33:36,932 INFO [decode.py:531]
+ For dev_gss, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 22.21 14.58 best for dev_gss
+
+ 2022-11-21 07:33:36,937 INFO [decode.py:726] Decoding test_gss
+ 2022-11-21 07:33:39,707 INFO [decode.py:469] batch 0/?, cuts processed until now is 69
+ 2022-11-21 07:33:42,083 INFO [decode.py:469] batch 2/?, cuts processed until now is 555
+ 2022-11-21 07:33:44,793 INFO [decode.py:469] batch 4/?, cuts processed until now is 703
+ 2022-11-21 07:33:47,525 INFO [decode.py:469] batch 6/?, cuts processed until now is 831
+ 2022-11-21 07:33:50,108 INFO [decode.py:469] batch 8/?, cuts processed until now is 988
+ 2022-11-21 07:33:53,919 INFO [decode.py:469] batch 10/?, cuts processed until now is 1096
+ 2022-11-21 07:33:56,557 INFO [decode.py:469] batch 12/?, cuts processed until now is 1268
+ 2022-11-21 07:33:58,734 INFO [decode.py:469] batch 14/?, cuts processed until now is 1533
+ 2022-11-21 07:34:00,776 INFO [decode.py:469] batch 16/?, cuts processed until now is 1932
+ 2022-11-21 07:34:04,515 INFO [decode.py:469] batch 18/?, cuts processed until now is 2057
+ 2022-11-21 07:34:09,449 INFO [decode.py:469] batch 20/?, cuts processed until now is 2126
+ 2022-11-21 07:34:11,973 INFO [decode.py:469] batch 22/?, cuts processed until now is 2390
+ 2022-11-21 07:34:14,076 INFO [decode.py:469] batch 24/?, cuts processed until now is 2858
+ 2022-11-21 07:34:16,849 INFO [decode.py:469] batch 26/?, cuts processed until now is 2998
+ 2022-11-21 07:34:19,366 INFO [decode.py:469] batch 28/?, cuts processed until now is 3280
+ 2022-11-21 07:34:21,883 INFO [decode.py:469] batch 30/?, cuts processed until now is 3432
+ 2022-11-21 07:34:26,257 INFO [decode.py:469] batch 32/?, cuts processed until now is 3537
+ 2022-11-21 07:34:29,018 INFO [decode.py:469] batch 34/?, cuts processed until now is 3709
+ 2022-11-21 07:34:31,828 INFO [decode.py:469] batch 36/?, cuts processed until now is 3825
+ 2022-11-21 07:34:33,635 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.7291, 2.7342, 2.5623, 2.6762, 2.4725, 2.3896, 2.7740, 2.9601],
+ device='cuda:0'), covar=tensor([0.1757, 0.2501, 0.3308, 0.2930, 0.2414, 0.1908, 0.2092, 0.2814],
+ device='cuda:0'), in_proj_covar=tensor([0.0085, 0.0085, 0.0091, 0.0079, 0.0076, 0.0081, 0.0082, 0.0061],
+ device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002],
+ device='cuda:0')
+ 2022-11-21 07:34:34,533 INFO [decode.py:469] batch 38/?, cuts processed until now is 3972
+ 2022-11-21 07:34:37,436 INFO [decode.py:469] batch 40/?, cuts processed until now is 4410
+ 2022-11-21 07:34:39,586 INFO [decode.py:469] batch 42/?, cuts processed until now is 5060
+ 2022-11-21 07:34:41,855 INFO [decode.py:469] batch 44/?, cuts processed until now is 5546
+ 2022-11-21 07:34:44,505 INFO [decode.py:469] batch 46/?, cuts processed until now is 5687
+ 2022-11-21 07:34:46,834 INFO [decode.py:469] batch 48/?, cuts processed until now is 5893
+ 2022-11-21 07:34:49,536 INFO [decode.py:469] batch 50/?, cuts processed until now is 6379
+ 2022-11-21 07:34:51,695 INFO [decode.py:469] batch 52/?, cuts processed until now is 6713
+ 2022-11-21 07:34:53,757 INFO [decode.py:469] batch 54/?, cuts processed until now is 7112
+ 2022-11-21 07:34:57,539 INFO [decode.py:469] batch 56/?, cuts processed until now is 7298
+ 2022-11-21 07:34:59,735 INFO [decode.py:469] batch 58/?, cuts processed until now is 8130
+ 2022-11-21 07:35:03,764 INFO [decode.py:469] batch 60/?, cuts processed until now is 8273
+ 2022-11-21 07:35:05,847 INFO [decode.py:469] batch 62/?, cuts processed until now is 8813
+ 2022-11-21 07:35:07,988 INFO [decode.py:469] batch 64/?, cuts processed until now is 9353
+ 2022-11-21 07:35:11,702 INFO [decode.py:469] batch 66/?, cuts processed until now is 9500
+ 2022-11-21 07:35:14,850 INFO [decode.py:469] batch 68/?, cuts processed until now is 9944
+ 2022-11-21 07:35:16,926 INFO [decode.py:469] batch 70/?, cuts processed until now is 10274
+ 2022-11-21 07:35:19,851 INFO [decode.py:469] batch 72/?, cuts processed until now is 10711
+ 2022-11-21 07:35:22,031 INFO [decode.py:469] batch 74/?, cuts processed until now is 10820
+ 2022-11-21 07:35:23,609 INFO [decode.py:469] batch 76/?, cuts processed until now is 11076
+ 2022-11-21 07:35:24,687 INFO [decode.py:469] batch 78/?, cuts processed until now is 11209
+ 2022-11-21 07:35:26,170 INFO [decode.py:469] batch 80/?, cuts processed until now is 11651
+ 2022-11-21 07:35:28,374 INFO [decode.py:469] batch 82/?, cuts processed until now is 12070
+ 2022-11-21 07:35:29,767 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-test_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 07:35:29,910 INFO [utils.py:530] [test_gss-beam_4_max_contexts_4_max_states_8] %WER 22.83% [20466 / 89659, 2179 ins, 5438 del, 12849 sub ]
+ 2022-11-21 07:35:30,564 INFO [utils.py:530] [test_gss-beam_4_max_contexts_4_max_states_8] %WER 15.27% [54095 / 354205, 10381 ins, 23091 del, 20623 sub ]
+ 2022-11-21 07:35:31,560 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-test_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 07:35:31,561 INFO [decode.py:531]
+ For test_gss, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 22.83 15.27 best for test_gss
+
+ 2022-11-21 07:35:31,565 INFO [decode.py:743] Done!
log/fast_beam_search/log-decode-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8-2022-11-21-09-37-32 ADDED
@@ -0,0 +1,376 @@
+ 2022-11-21 09:37:32,455 INFO [decode.py:574] Decoding started
+ 2022-11-21 09:37:32,456 INFO [decode.py:580] Device: cuda:0
+ 2022-11-21 09:37:32,460 INFO [decode.py:590] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 100, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.21', 'k2-build-type': 'Debug', 'k2-with-cuda': True, 'k2-git-sha1': 'f271e82ef30f75fecbae44b163e1244e53def116', 'k2-git-date': 'Fri Oct 28 05:02:16 2022', 'lhotse-version': '1.9.0.dev+git.97bf4b0.dirty', 'torch-version': '1.10.0+cu111', 'torch-cuda-available': True, 'torch-cuda-version': '11.1', 'python-version': '3.8', 'icefall-git-branch': 'ami_recipe', 'icefall-git-sha1': 'd1b5a16-dirty', 'icefall-git-date': 'Sun Nov 20 22:32:57 2022', 'icefall-path': '/exp/draj/mini_scale_2022/icefall', 'k2-path': '/exp/draj/mini_scale_2022/k2/k2/python/k2/__init__.py', 'lhotse-path': '/exp/draj/mini_scale_2022/lhotse/lhotse/__init__.py', 'hostname': 'r2n06', 'IP address': '10.1.2.6'}, 'epoch': 14, 'iter': 0, 'avg': 8, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7/exp/v2'), 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 4, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'manifest_dir': PosixPath('data/manifests'), 'enable_musan': True, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'max_duration': 500, 'max_cuts': None, 'num_buckets': 50, 'on_the_fly_feats': False, 'shuffle': True, 'num_workers': 8, 
'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'ihm_only': False, 'res_dir': PosixPath('pruned_transducer_stateless7/exp/v2/fast_beam_search'), 'suffix': 'epoch-14-avg-8-beam-4-max-contexts-4-max-states-8', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
+ 2022-11-21 09:37:32,460 INFO [decode.py:592] About to create model
+ 2022-11-21 09:37:32,937 INFO [zipformer.py:179] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
+ 2022-11-21 09:37:32,947 INFO [decode.py:659] Calculating the averaged model over epoch range from 6 (excluded) to 14
+ 2022-11-21 09:37:39,406 INFO [decode.py:694] Number of model parameters: 70369391
+ 2022-11-21 09:37:39,407 INFO [asr_datamodule.py:392] About to get AMI IHM dev cuts
+ 2022-11-21 09:37:39,409 INFO [asr_datamodule.py:413] About to get AMI IHM test cuts
+ 2022-11-21 09:37:39,411 INFO [asr_datamodule.py:398] About to get AMI SDM dev cuts
+ 2022-11-21 09:37:39,412 INFO [asr_datamodule.py:419] About to get AMI SDM test cuts
+ 2022-11-21 09:37:39,414 INFO [asr_datamodule.py:407] About to get AMI GSS-enhanced dev cuts
+ 2022-11-21 09:37:39,415 INFO [asr_datamodule.py:428] About to get AMI GSS-enhanced test cuts
+ 2022-11-21 09:37:41,443 INFO [decode.py:726] Decoding dev_ihm
+ 2022-11-21 09:37:44,362 INFO [decode.py:469] batch 0/?, cuts processed until now is 72
+ 2022-11-21 09:37:47,217 INFO [decode.py:469] batch 2/?, cuts processed until now is 537
+ 2022-11-21 09:37:50,037 INFO [decode.py:469] batch 4/?, cuts processed until now is 689
+ 2022-11-21 09:37:52,786 INFO [decode.py:469] batch 6/?, cuts processed until now is 823
+ 2022-11-21 09:37:55,427 INFO [decode.py:469] batch 8/?, cuts processed until now is 985
+ 2022-11-21 09:38:00,347 INFO [decode.py:469] batch 10/?, cuts processed until now is 1088
+ 2022-11-21 09:38:03,097 INFO [decode.py:469] batch 12/?, cuts processed until now is 1263
+ 2022-11-21 09:38:05,684 INFO [decode.py:469] batch 14/?, cuts processed until now is 1521
+ 2022-11-21 09:38:07,949 INFO [decode.py:469] batch 16/?, cuts processed until now is 1903
+ 2022-11-21 09:38:11,683 INFO [decode.py:469] batch 18/?, cuts processed until now is 2032
+ 2022-11-21 09:38:15,284 INFO [decode.py:469] batch 20/?, cuts processed until now is 2117
+ 2022-11-21 09:38:17,471 INFO [decode.py:469] batch 22/?, cuts processed until now is 2375
+ 2022-11-21 09:38:19,918 INFO [decode.py:469] batch 24/?, cuts processed until now is 2824
+ 2022-11-21 09:38:22,837 INFO [decode.py:469] batch 26/?, cuts processed until now is 2969
+ 2022-11-21 09:38:25,785 INFO [decode.py:469] batch 28/?, cuts processed until now is 3245
+ 2022-11-21 09:38:26,044 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.5012, 1.5040, 2.2027, 1.7505, 1.7409, 2.1940, 1.7005, 1.7461],
+ device='cuda:0'), covar=tensor([0.0017, 0.0101, 0.0055, 0.0066, 0.0099, 0.0065, 0.0039, 0.0053],
+ device='cuda:0'), in_proj_covar=tensor([0.0018, 0.0018, 0.0018, 0.0025, 0.0021, 0.0019, 0.0024, 0.0024],
+ device='cuda:0'), out_proj_covar=tensor([1.6600e-05, 1.6809e-05, 1.6082e-05, 2.4018e-05, 1.9485e-05, 1.8494e-05,
+ 2.3316e-05, 2.3270e-05], device='cuda:0')
+ 2022-11-21 09:38:28,800 INFO [decode.py:469] batch 30/?, cuts processed until now is 3401
+ 2022-11-21 09:38:32,004 INFO [decode.py:469] batch 32/?, cuts processed until now is 3519
+ 2022-11-21 09:38:34,757 INFO [decode.py:469] batch 34/?, cuts processed until now is 3694
+ 2022-11-21 09:38:37,495 INFO [decode.py:469] batch 36/?, cuts processed until now is 3818
+ 2022-11-21 09:38:40,528 INFO [decode.py:469] batch 38/?, cuts processed until now is 3970
+ 2022-11-21 09:38:42,582 INFO [decode.py:469] batch 40/?, cuts processed until now is 4750
+ 2022-11-21 09:38:45,536 INFO [decode.py:469] batch 42/?, cuts processed until now is 5038
+ 2022-11-21 09:38:49,167 INFO [decode.py:469] batch 44/?, cuts processed until now is 5144
+ 2022-11-21 09:38:52,673 INFO [decode.py:469] batch 46/?, cuts processed until now is 5253
+ 2022-11-21 09:38:55,694 INFO [decode.py:469] batch 48/?, cuts processed until now is 5672
+ 2022-11-21 09:38:58,783 INFO [decode.py:469] batch 50/?, cuts processed until now is 5878
+ 2022-11-21 09:39:01,101 INFO [decode.py:469] batch 52/?, cuts processed until now is 6260
+ 2022-11-21 09:39:03,236 INFO [decode.py:469] batch 54/?, cuts processed until now is 6808
+ 2022-11-21 09:39:05,698 INFO [decode.py:469] batch 56/?, cuts processed until now is 7117
+ 2022-11-21 09:39:08,059 INFO [decode.py:469] batch 58/?, cuts processed until now is 7565
+ 2022-11-21 09:39:10,098 INFO [decode.py:469] batch 60/?, cuts processed until now is 8078
+ 2022-11-21 09:39:12,152 INFO [decode.py:469] batch 62/?, cuts processed until now is 8626
+ 2022-11-21 09:39:14,330 INFO [decode.py:469] batch 64/?, cuts processed until now is 9174
+ 2022-11-21 09:39:16,736 INFO [decode.py:469] batch 66/?, cuts processed until now is 9455
+ 2022-11-21 09:39:18,781 INFO [decode.py:469] batch 68/?, cuts processed until now is 9968
+ 2022-11-21 09:39:20,819 INFO [decode.py:469] batch 70/?, cuts processed until now is 10481
+ 2022-11-21 09:39:22,914 INFO [decode.py:469] batch 72/?, cuts processed until now is 11264
+ 2022-11-21 09:39:25,374 INFO [decode.py:469] batch 74/?, cuts processed until now is 11669
+ 2022-11-21 09:39:27,175 INFO [decode.py:469] batch 76/?, cuts processed until now is 11761
+ 2022-11-21 09:39:28,833 INFO [decode.py:469] batch 78/?, cuts processed until now is 11843
+ 2022-11-21 09:39:30,414 INFO [decode.py:469] batch 80/?, cuts processed until now is 11956
+ 2022-11-21 09:39:31,821 INFO [decode.py:469] batch 82/?, cuts processed until now is 12467
+ 2022-11-21 09:39:35,807 INFO [decode.py:469] batch 84/?, cuts processed until now is 12586
+ 2022-11-21 09:39:37,712 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-dev_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 09:39:37,868 INFO [utils.py:530] [dev_ihm-beam_4_max_contexts_4_max_states_8] %WER 19.44% [18459 / 94940, 2783 ins, 3992 del, 11684 sub ]
+ 2022-11-21 09:39:38,597 INFO [utils.py:530] [dev_ihm-beam_4_max_contexts_4_max_states_8] %WER 12.30% [45497 / 369873, 10772 ins, 17562 del, 17163 sub ]
+ 2022-11-21 09:39:39,509 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-dev_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 09:39:39,510 INFO [decode.py:531]
+ For dev_ihm, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 19.44 12.3 best for dev_ihm
+
+ 2022-11-21 09:39:39,514 INFO [decode.py:726] Decoding test_ihm
+ 2022-11-21 09:39:42,413 INFO [decode.py:469] batch 0/?, cuts processed until now is 69
+ 2022-11-21 09:39:45,118 INFO [decode.py:469] batch 2/?, cuts processed until now is 555
+ 2022-11-21 09:39:48,016 INFO [decode.py:469] batch 4/?, cuts processed until now is 703
+ 2022-11-21 09:39:50,763 INFO [decode.py:469] batch 6/?, cuts processed until now is 830
+ 2022-11-21 09:39:53,368 INFO [decode.py:469] batch 8/?, cuts processed until now is 987
+ 2022-11-21 09:39:57,149 INFO [decode.py:469] batch 10/?, cuts processed until now is 1095
+ 2022-11-21 09:39:59,903 INFO [decode.py:469] batch 12/?, cuts processed until now is 1267
+ 2022-11-21 09:40:02,382 INFO [decode.py:469] batch 14/?, cuts processed until now is 1532
+ 2022-11-21 09:40:04,529 INFO [decode.py:469] batch 16/?, cuts processed until now is 1931
+ 2022-11-21 09:40:08,454 INFO [decode.py:469] batch 18/?, cuts processed until now is 2055
+ 2022-11-21 09:40:12,895 INFO [decode.py:469] batch 20/?, cuts processed until now is 2124
+ 2022-11-21 09:40:15,391 INFO [decode.py:469] batch 22/?, cuts processed until now is 2388
+ 2022-11-21 09:40:17,552 INFO [decode.py:469] batch 24/?, cuts processed until now is 2856
+ 2022-11-21 09:40:20,452 INFO [decode.py:469] batch 26/?, cuts processed until now is 2996
+ 2022-11-21 09:40:23,133 INFO [decode.py:469] batch 28/?, cuts processed until now is 3278
+ 2022-11-21 09:40:25,995 INFO [decode.py:469] batch 30/?, cuts processed until now is 3430
+ 2022-11-21 09:40:29,911 INFO [decode.py:469] batch 32/?, cuts processed until now is 3535
+ 2022-11-21 09:40:32,822 INFO [decode.py:469] batch 34/?, cuts processed until now is 3706
+ 2022-11-21 09:40:35,648 INFO [decode.py:469] batch 36/?, cuts processed until now is 3822
+ 2022-11-21 09:40:38,688 INFO [decode.py:469] batch 38/?, cuts processed until now is 3969
+ 2022-11-21 09:40:41,769 INFO [decode.py:469] batch 40/?, cuts processed until now is 4411
+ 2022-11-21 09:40:43,913 INFO [decode.py:469] batch 42/?, cuts processed until now is 5058
+ 2022-11-21 09:40:46,328 INFO [decode.py:469] batch 44/?, cuts processed until now is 5544
+ 2022-11-21 09:40:49,447 INFO [decode.py:469] batch 46/?, cuts processed until now is 5685
+ 2022-11-21 09:40:51,873 INFO [decode.py:469] batch 48/?, cuts processed until now is 5890
+ 2022-11-21 09:40:54,246 INFO [decode.py:469] batch 50/?, cuts processed until now is 6372
+ 2022-11-21 09:40:56,479 INFO [decode.py:469] batch 52/?, cuts processed until now is 6706
+ 2022-11-21 09:40:58,067 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.4471, 3.4262, 3.4324, 2.9745, 1.8228, 3.2833, 2.1406, 3.1996],
+ device='cuda:0'), covar=tensor([0.0454, 0.0208, 0.0175, 0.0451, 0.0695, 0.0255, 0.0610, 0.0165],
+ device='cuda:0'), in_proj_covar=tensor([0.0176, 0.0146, 0.0155, 0.0177, 0.0173, 0.0154, 0.0169, 0.0152],
+ device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002],
+ device='cuda:0')
+ 2022-11-21 09:40:58,735 INFO [decode.py:469] batch 54/?, cuts processed until now is 7105
+ 2022-11-21 09:41:02,571 INFO [decode.py:469] batch 56/?, cuts processed until now is 7290
+ 2022-11-21 09:41:04,855 INFO [decode.py:469] batch 58/?, cuts processed until now is 8116
+ 2022-11-21 09:41:08,996 INFO [decode.py:469] batch 60/?, cuts processed until now is 8258
+ 2022-11-21 09:41:11,225 INFO [decode.py:469] batch 62/?, cuts processed until now is 8794
+ 2022-11-21 09:41:13,318 INFO [decode.py:469] batch 64/?, cuts processed until now is 9330
+ 2022-11-21 09:41:16,989 INFO [decode.py:469] batch 66/?, cuts processed until now is 9476
+ 2022-11-21 09:41:20,066 INFO [decode.py:469] batch 68/?, cuts processed until now is 9921
+ 2022-11-21 09:41:22,237 INFO [decode.py:469] batch 70/?, cuts processed until now is 10251
+ 2022-11-21 09:41:25,177 INFO [decode.py:469] batch 72/?, cuts processed until now is 10679
+ 2022-11-21 09:41:27,448 INFO [decode.py:469] batch 74/?, cuts processed until now is 10794
+ 2022-11-21 09:41:29,130 INFO [decode.py:469] batch 76/?, cuts processed until now is 11039
+ 2022-11-21 09:41:29,179 INFO [zipformer.py:1414] attn_weights_entropy = tensor([3.8067, 3.6148, 3.9005, 3.9164, 3.5684, 3.7111, 4.2535, 3.9170],
+ device='cuda:0'), covar=tensor([0.0271, 0.0907, 0.0247, 0.0870, 0.0499, 0.0188, 0.0468, 0.0355],
+ device='cuda:0'), in_proj_covar=tensor([0.0075, 0.0097, 0.0083, 0.0108, 0.0078, 0.0067, 0.0133, 0.0090],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0003, 0.0002],
+ device='cuda:0')
+ 2022-11-21 09:41:30,374 INFO [decode.py:469] batch 78/?, cuts processed until now is 11155
+ 2022-11-21 09:41:31,933 INFO [decode.py:469] batch 80/?, cuts processed until now is 11600
+ 2022-11-21 09:41:34,284 INFO [decode.py:469] batch 82/?, cuts processed until now is 12041
+ 2022-11-21 09:41:35,705 INFO [decode.py:469] batch 84/?, cuts processed until now is 12110
+ 2022-11-21 09:41:35,872 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-test_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 09:41:36,025 INFO [utils.py:530] [test_ihm-beam_4_max_contexts_4_max_states_8] %WER 18.04% [16174 / 89659, 1994 ins, 4043 del, 10137 sub ]
+ 2022-11-21 09:41:36,695 INFO [utils.py:530] [test_ihm-beam_4_max_contexts_4_max_states_8] %WER 11.30% [40040 / 354205, 8698 ins, 16856 del, 14486 sub ]
+ 2022-11-21 09:41:37,616 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-test_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 09:41:37,617 INFO [decode.py:531]
+ For test_ihm, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 18.04 11.3 best for test_ihm
+
+ 2022-11-21 09:41:37,625 INFO [decode.py:726] Decoding dev_sdm
+ 2022-11-21 09:41:39,203 INFO [zipformer.py:1414] attn_weights_entropy = tensor([4.2901, 4.0348, 3.8759, 3.7100, 4.2236, 3.8320, 1.8017, 4.4545],
+ device='cuda:0'), covar=tensor([0.0167, 0.0258, 0.0231, 0.0258, 0.0227, 0.0300, 0.3030, 0.0169],
+ device='cuda:0'), in_proj_covar=tensor([0.0093, 0.0074, 0.0074, 0.0066, 0.0090, 0.0076, 0.0123, 0.0097],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002],
+ device='cuda:0')
+ 2022-11-21 09:41:40,546 INFO [decode.py:469] batch 0/?, cuts processed until now is 71
+ 2022-11-21 09:41:43,132 INFO [decode.py:469] batch 2/?, cuts processed until now is 535
+ 2022-11-21 09:41:45,936 INFO [decode.py:469] batch 4/?, cuts processed until now is 686
+ 2022-11-21 09:41:48,893 INFO [decode.py:469] batch 6/?, cuts processed until now is 819
+ 2022-11-21 09:41:51,541 INFO [decode.py:469] batch 8/?, cuts processed until now is 980
+ 2022-11-21 09:41:55,839 INFO [decode.py:469] batch 10/?, cuts processed until now is 1083
+ 2022-11-21 09:41:58,703 INFO [decode.py:469] batch 12/?, cuts processed until now is 1257
+ 2022-11-21 09:42:01,220 INFO [decode.py:469] batch 14/?, cuts processed until now is 1513
+ 2022-11-21 09:42:02,514 INFO [zipformer.py:1414] attn_weights_entropy = tensor([3.3930, 3.4249, 3.4852, 3.2506, 3.5529, 3.2495, 1.5440, 3.6162],
+ device='cuda:0'), covar=tensor([0.0194, 0.0198, 0.0190, 0.0216, 0.0206, 0.0299, 0.2493, 0.0189],
+ device='cuda:0'), in_proj_covar=tensor([0.0093, 0.0074, 0.0074, 0.0066, 0.0090, 0.0076, 0.0123, 0.0097],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002],
+ device='cuda:0')
+ 2022-11-21 09:42:03,569 INFO [decode.py:469] batch 16/?, cuts processed until now is 1892
+ 2022-11-21 09:42:07,050 INFO [decode.py:469] batch 18/?, cuts processed until now is 2020
+ 2022-11-21 09:42:10,600 INFO [decode.py:469] batch 20/?, cuts processed until now is 2106
+ 2022-11-21 09:42:12,923 INFO [decode.py:469] batch 22/?, cuts processed until now is 2362
+ 2022-11-21 09:42:14,974 INFO [decode.py:469] batch 24/?, cuts processed until now is 2807
+ 2022-11-21 09:42:17,760 INFO [decode.py:469] batch 26/?, cuts processed until now is 2952
+ 2022-11-21 09:42:20,532 INFO [decode.py:469] batch 28/?, cuts processed until now is 3226
+ 2022-11-21 09:42:23,370 INFO [decode.py:469] batch 30/?, cuts processed until now is 3381
+ 2022-11-21 09:42:26,622 INFO [decode.py:469] batch 32/?, cuts processed until now is 3499
+ 2022-11-21 09:42:29,823 INFO [decode.py:469] batch 34/?, cuts processed until now is 3673
+ 2022-11-21 09:42:32,820 INFO [decode.py:469] batch 36/?, cuts processed until now is 3797
+ 2022-11-21 09:42:35,553 INFO [decode.py:469] batch 38/?, cuts processed until now is 3948
+ 2022-11-21 09:42:37,578 INFO [decode.py:469] batch 40/?, cuts processed until now is 4722
+ 2022-11-21 09:42:40,099 INFO [decode.py:469] batch 42/?, cuts processed until now is 5007
+ 2022-11-21 09:42:43,349 INFO [decode.py:469] batch 44/?, cuts processed until now is 5112
+ 2022-11-21 09:42:46,682 INFO [decode.py:469] batch 46/?, cuts processed until now is 5219
+ 2022-11-21 09:42:49,708 INFO [decode.py:469] batch 48/?, cuts processed until now is 5636
+ 2022-11-21 09:42:52,248 INFO [decode.py:469] batch 50/?, cuts processed until now is 5842
+ 2022-11-21 09:42:54,302 INFO [decode.py:469] batch 52/?, cuts processed until now is 6222
+ 2022-11-21 09:42:56,475 INFO [decode.py:469] batch 54/?, cuts processed until now is 6766
+ 2022-11-21 09:42:58,796 INFO [decode.py:469] batch 56/?, cuts processed until now is 7072
+ 2022-11-21 09:43:01,027 INFO [decode.py:469] batch 58/?, cuts processed until now is 7518
+ 2022-11-21 09:43:03,490 INFO [decode.py:469] batch 60/?, cuts processed until now is 8027
+ 2022-11-21 09:43:05,674 INFO [decode.py:469] batch 62/?, cuts processed until now is 8571
+ 2022-11-21 09:43:07,764 INFO [decode.py:469] batch 64/?, cuts processed until now is 9115
+ 2022-11-21 09:43:10,242 INFO [decode.py:469] batch 66/?, cuts processed until now is 9395
+ 2022-11-21 09:43:12,522 INFO [decode.py:469] batch 68/?, cuts processed until now is 9904
+ 2022-11-21 09:43:14,750 INFO [decode.py:469] batch 70/?, cuts processed until now is 10413
+ 2022-11-21 09:43:16,714 INFO [decode.py:469] batch 72/?, cuts processed until now is 11190
+ 2022-11-21 09:43:18,837 INFO [decode.py:469] batch 74/?, cuts processed until now is 11589
+ 2022-11-21 09:43:18,995 INFO [zipformer.py:1414] attn_weights_entropy = tensor([1.7028, 1.3026, 1.3419, 1.0055, 1.3474, 1.4107, 0.7656, 1.2962],
+ device='cuda:0'), covar=tensor([0.0270, 0.0298, 0.0359, 0.0717, 0.0552, 0.0613, 0.1124, 0.0303],
+ device='cuda:0'), in_proj_covar=tensor([0.0011, 0.0017, 0.0012, 0.0015, 0.0012, 0.0011, 0.0016, 0.0011],
+ device='cuda:0'), out_proj_covar=tensor([5.8009e-05, 8.0520e-05, 5.9059e-05, 7.3872e-05, 6.4258e-05, 5.7202e-05,
+ 7.6631e-05, 5.7227e-05], device='cuda:0')
+ 2022-11-21 09:43:20,529 INFO [decode.py:469] batch 76/?, cuts processed until now is 11699
+ 2022-11-21 09:43:22,432 INFO [decode.py:469] batch 78/?, cuts processed until now is 11799
+ 2022-11-21 09:43:23,864 INFO [decode.py:469] batch 80/?, cuts processed until now is 11889
+ 2022-11-21 09:43:25,424 INFO [decode.py:469] batch 82/?, cuts processed until now is 12461
+ 2022-11-21 09:43:27,202 INFO [decode.py:469] batch 84/?, cuts processed until now is 12568
+ 2022-11-21 09:43:31,673 INFO [decode.py:469] batch 86/?, cuts processed until now is 12601
+ 2022-11-21 09:43:31,844 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-dev_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 09:43:32,004 INFO [utils.py:530] [dev_sdm-beam_4_max_contexts_4_max_states_8] %WER 31.11% [29537 / 94940, 4266 ins, 7752 del, 17519 sub ]
+ 2022-11-21 09:43:32,829 INFO [utils.py:530] [dev_sdm-beam_4_max_contexts_4_max_states_8] %WER 22.60% [83608 / 369873, 18843 ins, 33372 del, 31393 sub ]
+ 2022-11-21 09:43:33,804 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-dev_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 09:43:33,805 INFO [decode.py:531]
+ For dev_sdm, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 31.11 22.6 best for dev_sdm
+
+ 2022-11-21 09:43:33,810 INFO [decode.py:726] Decoding test_sdm
+ 2022-11-21 09:43:36,347 INFO [decode.py:469] batch 0/?, cuts processed until now is 69
+ 2022-11-21 09:43:39,082 INFO [decode.py:469] batch 2/?, cuts processed until now is 555
+ 2022-11-21 09:43:42,216 INFO [decode.py:469] batch 4/?, cuts processed until now is 703
+ 2022-11-21 09:43:45,306 INFO [decode.py:469] batch 6/?, cuts processed until now is 831
+ 2022-11-21 09:43:48,441 INFO [decode.py:469] batch 8/?, cuts processed until now is 988
+ 2022-11-21 09:43:52,218 INFO [decode.py:469] batch 10/?, cuts processed until now is 1096
+ 2022-11-21 09:43:55,047 INFO [decode.py:469] batch 12/?, cuts processed until now is 1268
+ 2022-11-21 09:43:57,332 INFO [decode.py:469] batch 14/?, cuts processed until now is 1533
+ 2022-11-21 09:43:59,736 INFO [decode.py:469] batch 16/?, cuts processed until now is 1932
+ 2022-11-21 09:44:02,455 INFO [zipformer.py:1414] attn_weights_entropy = tensor([4.5302, 4.5710, 4.4733, 4.5946, 4.3598, 3.8894, 5.0773, 4.4629],
+ device='cuda:0'), covar=tensor([0.0235, 0.0436, 0.0137, 0.0967, 0.0232, 0.0157, 0.0389, 0.0248],
+ device='cuda:0'), in_proj_covar=tensor([0.0075, 0.0097, 0.0083, 0.0108, 0.0078, 0.0067, 0.0133, 0.0090],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0003, 0.0002],
+ device='cuda:0')
+ 2022-11-21 09:44:03,741 INFO [decode.py:469] batch 18/?, cuts processed until now is 2057
+ 2022-11-21 09:44:08,271 INFO [decode.py:469] batch 20/?, cuts processed until now is 2126
+ 2022-11-21 09:44:10,648 INFO [decode.py:469] batch 22/?, cuts processed until now is 2390
+ 2022-11-21 09:44:12,965 INFO [decode.py:469] batch 24/?, cuts processed until now is 2858
+ 2022-11-21 09:44:16,139 INFO [decode.py:469] batch 26/?, cuts processed until now is 2998
+ 2022-11-21 09:44:18,935 INFO [decode.py:469] batch 28/?, cuts processed until now is 3280
+ 2022-11-21 09:44:21,695 INFO [decode.py:469] batch 30/?, cuts processed until now is 3432
+ 2022-11-21 09:44:25,745 INFO [decode.py:469] batch 32/?, cuts processed until now is 3537
+ 2022-11-21 09:44:28,551 INFO [decode.py:469] batch 34/?, cuts processed until now is 3709
+ 2022-11-21 09:44:31,631 INFO [decode.py:469] batch 36/?, cuts processed until now is 3825
+ 2022-11-21 09:44:34,591 INFO [decode.py:469] batch 38/?, cuts processed until now is 3972
+ 2022-11-21 09:44:37,539 INFO [decode.py:469] batch 40/?, cuts processed until now is 4410
+ 2022-11-21 09:44:39,990 INFO [decode.py:469] batch 42/?, cuts processed until now is 5060
+ 2022-11-21 09:44:42,447 INFO [decode.py:469] batch 44/?, cuts processed until now is 5546
+ 2022-11-21 09:44:45,281 INFO [decode.py:469] batch 46/?, cuts processed until now is 5687
+ 2022-11-21 09:44:47,720 INFO [decode.py:469] batch 48/?, cuts processed until now is 5893
+ 2022-11-21 09:44:50,326 INFO [decode.py:469] batch 50/?, cuts processed until now is 6379
+ 2022-11-21 09:44:52,795 INFO [decode.py:469] batch 52/?, cuts processed until now is 6713
+ 2022-11-21 09:44:55,058 INFO [decode.py:469] batch 54/?, cuts processed until now is 7112
+ 2022-11-21 09:44:59,177 INFO [decode.py:469] batch 56/?, cuts processed until now is 7298
+ 2022-11-21 09:45:01,275 INFO [decode.py:469] batch 58/?, cuts processed until now is 8130
+ 2022-11-21 09:45:05,730 INFO [decode.py:469] batch 60/?, cuts processed until now is 8273
+ 2022-11-21 09:45:07,900 INFO [decode.py:469] batch 62/?, cuts processed until now is 8813
+ 2022-11-21 09:45:10,253 INFO [decode.py:469] batch 64/?, cuts processed until now is 9353
+ 2022-11-21 09:45:14,003 INFO [decode.py:469] batch 66/?, cuts processed until now is 9500
+ 2022-11-21 09:45:17,300 INFO [decode.py:469] batch 68/?, cuts processed until now is 9944
+ 2022-11-21 09:45:19,681 INFO [decode.py:469] batch 70/?, cuts processed until now is 10274
+ 2022-11-21 09:45:22,865 INFO [decode.py:469] batch 72/?, cuts processed until now is 10711
+ 2022-11-21 09:45:25,095 INFO [decode.py:469] batch 74/?, cuts processed until now is 10820
+ 2022-11-21 09:45:26,739 INFO [decode.py:469] batch 76/?, cuts processed until now is 11076
+ 2022-11-21 09:45:27,880 INFO [decode.py:469] batch 78/?, cuts processed until now is 11209
+ 2022-11-21 09:45:29,397 INFO [decode.py:469] batch 80/?, cuts processed until now is 11651
+ 2022-11-21 09:45:31,825 INFO [decode.py:469] batch 82/?, cuts processed until now is 12070
+ 2022-11-21 09:45:33,279 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-test_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 09:45:33,435 INFO [utils.py:530] [test_sdm-beam_4_max_contexts_4_max_states_8] %WER 32.10% [28784 / 89659, 3596 ins, 8598 del, 16590 sub ]
+ 2022-11-21 09:45:34,315 INFO [utils.py:530] [test_sdm-beam_4_max_contexts_4_max_states_8] %WER 23.50% [83238 / 354205, 17319 ins, 35917 del, 30002 sub ]
+ 2022-11-21 09:45:35,273 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-test_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 09:45:35,274 INFO [decode.py:531]
+ For test_sdm, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 32.1 23.5 best for test_sdm
+
+ 2022-11-21 09:45:35,279 INFO [decode.py:726] Decoding dev_gss
+ 2022-11-21 09:45:37,914 INFO [decode.py:469] batch 0/?, cuts processed until now is 71
+ 2022-11-21 09:45:40,678 INFO [decode.py:469] batch 2/?, cuts processed until now is 535
+ 2022-11-21 09:45:43,449 INFO [decode.py:469] batch 4/?, cuts processed until now is 686
+ 2022-11-21 09:45:46,327 INFO [decode.py:469] batch 6/?, cuts processed until now is 819
+ 2022-11-21 09:45:49,041 INFO [decode.py:469] batch 8/?, cuts processed until now is 980
+ 2022-11-21 09:45:53,390 INFO [decode.py:469] batch 10/?, cuts processed until now is 1083
+ 2022-11-21 09:45:56,306 INFO [decode.py:469] batch 12/?, cuts processed until now is 1257
+ 2022-11-21 09:45:58,535 INFO [decode.py:469] batch 14/?, cuts processed until now is 1513
+ 2022-11-21 09:46:00,627 INFO [decode.py:469] batch 16/?, cuts processed until now is 1892
+ 2022-11-21 09:46:04,010 INFO [decode.py:469] batch 18/?, cuts processed until now is 2020
+ 2022-11-21 09:46:07,474 INFO [decode.py:469] batch 20/?, cuts processed until now is 2106
+ 2022-11-21 09:46:09,689 INFO [decode.py:469] batch 22/?, cuts processed until now is 2362
+ 2022-11-21 09:46:11,708 INFO [decode.py:469] batch 24/?, cuts processed until now is 2807
+ 2022-11-21 09:46:14,435 INFO [decode.py:469] batch 26/?, cuts processed until now is 2952
+ 2022-11-21 09:46:16,945 INFO [decode.py:469] batch 28/?, cuts processed until now is 3226
+ 2022-11-21 09:46:19,604 INFO [decode.py:469] batch 30/?, cuts processed until now is 3381
+ 2022-11-21 09:46:22,979 INFO [decode.py:469] batch 32/?, cuts processed until now is 3499
+ 2022-11-21 09:46:26,083 INFO [decode.py:469] batch 34/?, cuts processed until now is 3673
+ 2022-11-21 09:46:29,165 INFO [decode.py:469] batch 36/?, cuts processed until now is 3797
+ 2022-11-21 09:46:31,904 INFO [decode.py:469] batch 38/?, cuts processed until now is 3948
+ 2022-11-21 09:46:33,984 INFO [decode.py:469] batch 40/?, cuts processed until now is 4722
+ 2022-11-21 09:46:36,610 INFO [decode.py:469] batch 42/?, cuts processed until now is 5007
+ 2022-11-21 09:46:39,891 INFO [decode.py:469] batch 44/?, cuts processed until now is 5112
+ 2022-11-21 09:46:43,039 INFO [decode.py:469] batch 46/?, cuts processed until now is 5219
+ 2022-11-21 09:46:46,406 INFO [decode.py:469] batch 48/?, cuts processed until now is 5636
+ 2022-11-21 09:46:49,197 INFO [decode.py:469] batch 50/?, cuts processed until now is 5842
+ 2022-11-21 09:46:51,301 INFO [decode.py:469] batch 52/?, cuts processed until now is 6222
+ 2022-11-21 09:46:53,536 INFO [decode.py:469] batch 54/?, cuts processed until now is 6766
+ 2022-11-21 09:46:55,864 INFO [decode.py:469] batch 56/?, cuts processed until now is 7072
+ 2022-11-21 09:46:58,017 INFO [decode.py:469] batch 58/?, cuts processed until now is 7518
+ 2022-11-21 09:47:00,197 INFO [decode.py:469] batch 60/?, cuts processed until now is 8027
+ 2022-11-21 09:47:02,408 INFO [decode.py:469] batch 62/?, cuts processed until now is 8571
+ 2022-11-21 09:47:04,619 INFO [decode.py:469] batch 64/?, cuts processed until now is 9115
+ 2022-11-21 09:47:07,066 INFO [decode.py:469] batch 66/?, cuts processed until now is 9395
+ 2022-11-21 09:47:09,317 INFO [decode.py:469] batch 68/?, cuts processed until now is 9904
+ 2022-11-21 09:47:11,531 INFO [decode.py:469] batch 70/?, cuts processed until now is 10413
+ 2022-11-21 09:47:13,766 INFO [decode.py:469] batch 72/?, cuts processed until now is 11190
+ 2022-11-21 09:47:15,970 INFO [decode.py:469] batch 74/?, cuts processed until now is 11589
+ 2022-11-21 09:47:17,655 INFO [decode.py:469] batch 76/?, cuts processed until now is 11699
+ 2022-11-21 09:47:19,467 INFO [decode.py:469] batch 78/?, cuts processed until now is 11799
+ 2022-11-21 09:47:21,008 INFO [decode.py:469] batch 80/?, cuts processed until now is 11889
+ 2022-11-21 09:47:22,661 INFO [decode.py:469] batch 82/?, cuts processed until now is 12461
+ 2022-11-21 09:47:24,541 INFO [decode.py:469] batch 84/?, cuts processed until now is 12568
+ 2022-11-21 09:47:29,330 INFO [decode.py:469] batch 86/?, cuts processed until now is 12601
+ 2022-11-21 09:47:29,498 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-dev_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 09:47:29,659 INFO [utils.py:530] [dev_gss-beam_4_max_contexts_4_max_states_8] %WER 22.21% [21087 / 94940, 2793 ins, 4898 del, 13396 sub ]
+ 2022-11-21 09:47:30,520 INFO [utils.py:530] [dev_gss-beam_4_max_contexts_4_max_states_8] %WER 14.58% [53945 / 369873, 11680 ins, 21193 del, 21072 sub ]
+ 2022-11-21 09:47:31,464 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-dev_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 09:47:31,465 INFO [decode.py:531]
+ For dev_gss, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 22.21 14.58 best for dev_gss
+
+ 2022-11-21 09:47:31,470 INFO [decode.py:726] Decoding test_gss
+ 2022-11-21 09:47:34,030 INFO [decode.py:469] batch 0/?, cuts processed until now is 69
+ 2022-11-21 09:47:36,726 INFO [decode.py:469] batch 2/?, cuts processed until now is 555
+ 2022-11-21 09:47:39,438 INFO [decode.py:469] batch 4/?, cuts processed until now is 703
+ 2022-11-21 09:47:42,207 INFO [decode.py:469] batch 6/?, cuts processed until now is 831
+ 2022-11-21 09:47:45,003 INFO [decode.py:469] batch 8/?, cuts processed until now is 988
+ 2022-11-21 09:47:48,933 INFO [decode.py:469] batch 10/?, cuts processed until now is 1096
+ 2022-11-21 09:47:51,621 INFO [decode.py:469] batch 12/?, cuts processed until now is 1268
+ 2022-11-21 09:47:53,933 INFO [decode.py:469] batch 14/?, cuts processed until now is 1533
+ 2022-11-21 09:47:55,182 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.3347, 3.6062, 2.7166, 1.9545, 3.4071, 1.4218, 3.2604, 2.0932],
+ device='cuda:0'), covar=tensor([0.1072, 0.0145, 0.0907, 0.1570, 0.0184, 0.1695, 0.0277, 0.1152],
+ device='cuda:0'), in_proj_covar=tensor([0.0112, 0.0093, 0.0102, 0.0107, 0.0090, 0.0114, 0.0086, 0.0106],
+ device='cuda:0'), out_proj_covar=tensor([0.0005, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004],
+ device='cuda:0')
+ 2022-11-21 09:47:55,993 INFO [decode.py:469] batch 16/?, cuts processed until now is 1932
+ 2022-11-21 09:47:59,662 INFO [decode.py:469] batch 18/?, cuts processed until now is 2057
+ 2022-11-21 09:48:04,043 INFO [decode.py:469] batch 20/?, cuts processed until now is 2126
+ 2022-11-21 09:48:06,353 INFO [decode.py:469] batch 22/?, cuts processed until now is 2390
+ 2022-11-21 09:48:08,351 INFO [decode.py:469] batch 24/?, cuts processed until now is 2858
+ 2022-11-21 09:48:11,314 INFO [decode.py:469] batch 26/?, cuts processed until now is 2998
+ 2022-11-21 09:48:14,124 INFO [decode.py:469] batch 28/?, cuts processed until now is 3280
+ 2022-11-21 09:48:16,777 INFO [decode.py:469] batch 30/?, cuts processed until now is 3432
+ 2022-11-21 09:48:20,723 INFO [decode.py:469] batch 32/?, cuts processed until now is 3537
+ 2022-11-21 09:48:23,634 INFO [decode.py:469] batch 34/?, cuts processed until now is 3709
+ 2022-11-21 09:48:26,540 INFO [decode.py:469] batch 36/?, cuts processed until now is 3825
+ 2022-11-21 09:48:29,325 INFO [decode.py:469] batch 38/?, cuts processed until now is 3972
+ 2022-11-21 09:48:32,390 INFO [decode.py:469] batch 40/?, cuts processed until now is 4410
+ 2022-11-21 09:48:34,687 INFO [decode.py:469] batch 42/?, cuts processed until now is 5060
+ 2022-11-21 09:48:37,004 INFO [decode.py:469] batch 44/?, cuts processed until now is 5546
+ 2022-11-21 09:48:38,609 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.7190, 4.4780, 3.4362, 1.9964, 4.0394, 1.9545, 3.7939, 2.6835],
+ device='cuda:0'), covar=tensor([0.1413, 0.0111, 0.0604, 0.2627, 0.0186, 0.1757, 0.0252, 0.1546],
+ device='cuda:0'), in_proj_covar=tensor([0.0112, 0.0093, 0.0102, 0.0107, 0.0090, 0.0114, 0.0086, 0.0106],
+ device='cuda:0'), out_proj_covar=tensor([0.0005, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004],
+ device='cuda:0')
+ 2022-11-21 09:48:39,665 INFO [decode.py:469] batch 46/?, cuts processed until now is 5687
+ 2022-11-21 09:48:42,162 INFO [decode.py:469] batch 48/?, cuts processed until now is 5893
+ 2022-11-21 09:48:45,120 INFO [decode.py:469] batch 50/?, cuts processed until now is 6379
+ 2022-11-21 09:48:47,277 INFO [decode.py:469] batch 52/?, cuts processed until now is 6713
+ 2022-11-21 09:48:49,298 INFO [decode.py:469] batch 54/?, cuts processed until now is 7112
+ 2022-11-21 09:48:52,978 INFO [decode.py:469] batch 56/?, cuts processed until now is 7298
+ 2022-11-21 09:48:55,141 INFO [decode.py:469] batch 58/?, cuts processed until now is 8130
+ 2022-11-21 09:48:59,231 INFO [decode.py:469] batch 60/?, cuts processed until now is 8273
+ 2022-11-21 09:49:01,493 INFO [decode.py:469] batch 62/?, cuts processed until now is 8813
+ 2022-11-21 09:49:03,797 INFO [decode.py:469] batch 64/?, cuts processed until now is 9353
+ 2022-11-21 09:49:07,802 INFO [decode.py:469] batch 66/?, cuts processed until now is 9500
+ 2022-11-21 09:49:10,994 INFO [decode.py:469] batch 68/?, cuts processed until now is 9944
+ 2022-11-21 09:49:13,266 INFO [decode.py:469] batch 70/?, cuts processed until now is 10274
+ 2022-11-21 09:49:16,318 INFO [decode.py:469] batch 72/?, cuts processed until now is 10711
+ 2022-11-21 09:49:18,676 INFO [decode.py:469] batch 74/?, cuts processed until now is 10820
+ 2022-11-21 09:49:20,277 INFO [decode.py:469] batch 76/?, cuts processed until now is 11076
+ 2022-11-21 09:49:21,355 INFO [decode.py:469] batch 78/?, cuts processed until now is 11209
+ 2022-11-21 09:49:22,909 INFO [decode.py:469] batch 80/?, cuts processed until now is 11651
+ 2022-11-21 09:49:25,098 INFO [decode.py:469] batch 82/?, cuts processed until now is 12070
+ 2022-11-21 09:49:25,367 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.8827, 4.1919, 4.0346, 3.3228, 2.0846, 4.1770, 2.3554, 3.6866],
+ device='cuda:0'), covar=tensor([0.0463, 0.0336, 0.0194, 0.0486, 0.0737, 0.0181, 0.0636, 0.0154],
+ device='cuda:0'), in_proj_covar=tensor([0.0176, 0.0146, 0.0155, 0.0177, 0.0173, 0.0154, 0.0169, 0.0152],
+ device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002],
+ device='cuda:0')
+ 2022-11-21 09:49:26,415 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-test_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 09:49:26,555 INFO [utils.py:530] [test_gss-beam_4_max_contexts_4_max_states_8] %WER 22.83% [20466 / 89659, 2179 ins, 5438 del, 12849 sub ]
+ 2022-11-21 09:49:27,307 INFO [utils.py:530] [test_gss-beam_4_max_contexts_4_max_states_8] %WER 15.27% [54095 / 354205, 10381 ins, 23091 del, 20623 sub ]
+ 2022-11-21 09:49:28,185 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-test_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-21 09:49:28,186 INFO [decode.py:531]
+ For test_gss, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 22.83 15.27 best for test_gss
+
+ 2022-11-21 09:49:28,190 INFO [decode.py:743] Done!
log/fast_beam_search/log-decode-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8-2022-11-19-11-56-30 DELETED
@@ -1,381 +0,0 @@
- 2022-11-19 11:56:30,475 INFO [decode.py:561] Decoding started
- 2022-11-19 11:56:30,476 INFO [decode.py:567] Device: cuda:0
- 2022-11-19 11:56:30,484 INFO [decode.py:577] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 100, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.21', 'k2-build-type': 'Debug', 'k2-with-cuda': True, 'k2-git-sha1': 'f271e82ef30f75fecbae44b163e1244e53def116', 'k2-git-date': 'Fri Oct 28 05:02:16 2022', 'lhotse-version': '1.9.0.dev+git.97bf4b0.dirty', 'torch-version': '1.10.0+cu111', 'torch-cuda-available': True, 'torch-cuda-version': '11.1', 'python-version': '3.8', 'icefall-git-branch': 'ami', 'icefall-git-sha1': 'c2c11ca-clean', 'icefall-git-date': 'Sat Nov 19 10:48:59 2022', 'icefall-path': '/exp/draj/mini_scale_2022/icefall', 'k2-path': '/exp/draj/mini_scale_2022/k2/k2/python/k2/__init__.py', 'lhotse-path': '/exp/draj/mini_scale_2022/lhotse/lhotse/__init__.py', 'hostname': 'r7n01', 'IP address': '10.1.7.1'}, 'epoch': 30, 'iter': 105000, 'avg': 10, 'exp_dir': PosixPath('pruned_transducer_stateless7/exp/v2'), 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 4, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'manifest_dir': PosixPath('data/manifests'), 'enable_musan': True, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'max_duration': 500, 'max_cuts': None, 'num_buckets': 50, 'on_the_fly_feats': False, 'shuffle': True, 'num_workers': 8, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'ihm_only': False, 'res_dir': PosixPath('pruned_transducer_stateless7/exp/v2/fast_beam_search'), 'suffix': 'iter-105000-avg-10-beam-4-max-contexts-4-max-states-8', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
- 2022-11-19 11:56:30,484 INFO [decode.py:579] About to create model
- 2022-11-19 11:56:30,977 INFO [zipformer.py:176] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
- 2022-11-19 11:56:30,993 INFO [decode.py:595] averaging ['pruned_transducer_stateless7/exp/v2/checkpoint-105000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-100000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-95000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-90000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-85000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-80000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-75000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-70000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-65000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-60000.pt']
- 2022-11-19 11:57:47,988 INFO [decode.py:632] Number of model parameters: 70369391
- 2022-11-19 11:57:47,989 INFO [asr_datamodule.py:392] About to get AMI IHM dev cuts
- 2022-11-19 11:57:48,019 INFO [asr_datamodule.py:413] About to get AMI IHM test cuts
- 2022-11-19 11:57:48,021 INFO [asr_datamodule.py:398] About to get AMI SDM dev cuts
- 2022-11-19 11:57:48,023 INFO [asr_datamodule.py:419] About to get AMI SDM test cuts
- 2022-11-19 11:57:48,025 INFO [asr_datamodule.py:407] About to get AMI GSS-enhanced dev cuts
- 2022-11-19 11:57:48,027 INFO [asr_datamodule.py:428] About to get AMI GSS-enhanced test cuts
- 2022-11-19 11:57:50,158 INFO [decode.py:664] Decoding dev_ihm
- 2022-11-19 11:57:53,749 INFO [decode.py:456] batch 0/?, cuts processed until now is 72
- 2022-11-19 11:57:56,656 INFO [decode.py:456] batch 2/?, cuts processed until now is 537
- 2022-11-19 11:57:59,459 INFO [decode.py:456] batch 4/?, cuts processed until now is 689
- 2022-11-19 11:58:02,426 INFO [decode.py:456] batch 6/?, cuts processed until now is 823
- 2022-11-19 11:58:05,194 INFO [decode.py:456] batch 8/?, cuts processed until now is 985
- 2022-11-19 11:58:09,722 INFO [decode.py:456] batch 10/?, cuts processed until now is 1088
- 2022-11-19 11:58:12,495 INFO [decode.py:456] batch 12/?, cuts processed until now is 1263
- 2022-11-19 11:58:14,887 INFO [decode.py:456] batch 14/?, cuts processed until now is 1521
- 2022-11-19 11:58:17,038 INFO [decode.py:456] batch 16/?, cuts processed until now is 1903
- 2022-11-19 11:58:20,725 INFO [decode.py:456] batch 18/?, cuts processed until now is 2032
- 2022-11-19 11:58:24,451 INFO [decode.py:456] batch 20/?, cuts processed until now is 2117
- 2022-11-19 11:58:26,837 INFO [decode.py:456] batch 22/?, cuts processed until now is 2375
- 2022-11-19 11:58:29,077 INFO [decode.py:456] batch 24/?, cuts processed until now is 2824
- 2022-11-19 11:58:32,123 INFO [decode.py:456] batch 26/?, cuts processed until now is 2969
- 2022-11-19 11:58:34,816 INFO [decode.py:456] batch 28/?, cuts processed until now is 3245
- 2022-11-19 11:58:37,587 INFO [decode.py:456] batch 30/?, cuts processed until now is 3401
- 2022-11-19 11:58:41,081 INFO [decode.py:456] batch 32/?, cuts processed until now is 3519
- 2022-11-19 11:58:44,442 INFO [decode.py:456] batch 34/?, cuts processed until now is 3694
- 2022-11-19 11:58:47,622 INFO [decode.py:456] batch 36/?, cuts processed until now is 3818
- 2022-11-19 11:58:50,688 INFO [decode.py:456] batch 38/?, cuts processed until now is 3970
- 2022-11-19 11:58:53,074 INFO [decode.py:456] batch 40/?, cuts processed until now is 4750
- 2022-11-19 11:58:55,903 INFO [decode.py:456] batch 42/?, cuts processed until now is 5038
- 2022-11-19 11:58:59,500 INFO [decode.py:456] batch 44/?, cuts processed until now is 5144
- 2022-11-19 11:59:03,071 INFO [decode.py:456] batch 46/?, cuts processed until now is 5253
- 2022-11-19 11:59:03,270 INFO [zipformer.py:1411] attn_weights_entropy = tensor([5.1287, 4.7557, 5.0205, 4.9174, 5.1172, 4.5533, 2.8637, 5.2734],
- device='cuda:0'), covar=tensor([0.0140, 0.0272, 0.0134, 0.0164, 0.0143, 0.0264, 0.2165, 0.0151],
- device='cuda:0'), in_proj_covar=tensor([0.0096, 0.0079, 0.0079, 0.0071, 0.0094, 0.0080, 0.0125, 0.0100],
- device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002],
- device='cuda:0')
- 2022-11-19 11:59:06,333 INFO [decode.py:456] batch 48/?, cuts processed until now is 5672
- 2022-11-19 11:59:09,188 INFO [decode.py:456] batch 50/?, cuts processed until now is 5878
- 2022-11-19 11:59:11,298 INFO [decode.py:456] batch 52/?, cuts processed until now is 6260
- 2022-11-19 11:59:12,535 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.2047, 3.1646, 2.3350, 1.8841, 3.0646, 1.3177, 2.7205, 1.9816],
- device='cuda:0'), covar=tensor([0.0669, 0.0176, 0.0795, 0.0913, 0.0189, 0.1283, 0.0412, 0.0708],
- device='cuda:0'), in_proj_covar=tensor([0.0113, 0.0095, 0.0106, 0.0107, 0.0092, 0.0114, 0.0091, 0.0105],
- device='cuda:0'), out_proj_covar=tensor([0.0005, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004],
- device='cuda:0')
- 2022-11-19 11:59:13,402 INFO [decode.py:456] batch 54/?, cuts processed until now is 6808
- 2022-11-19 11:59:15,807 INFO [decode.py:456] batch 56/?, cuts processed until now is 7117
- 2022-11-19 11:59:18,242 INFO [decode.py:456] batch 58/?, cuts processed until now is 7565
- 2022-11-19 11:59:20,405 INFO [decode.py:456] batch 60/?, cuts processed until now is 8078
- 2022-11-19 11:59:22,576 INFO [decode.py:456] batch 62/?, cuts processed until now is 8626
- 2022-11-19 11:59:24,898 INFO [decode.py:456] batch 64/?, cuts processed until now is 9174
- 2022-11-19 11:59:28,041 INFO [decode.py:456] batch 66/?, cuts processed until now is 9455
- 2022-11-19 11:59:30,361 INFO [decode.py:456] batch 68/?, cuts processed until now is 9968
- 2022-11-19 11:59:32,614 INFO [decode.py:456] batch 70/?, cuts processed until now is 10481
- 2022-11-19 11:59:34,793 INFO [decode.py:456] batch 72/?, cuts processed until now is 11264
- 2022-11-19 11:59:36,997 INFO [decode.py:456] batch 74/?, cuts processed until now is 11669
- 2022-11-19 11:59:38,779 INFO [decode.py:456] batch 76/?, cuts processed until now is 11761
- 2022-11-19 11:59:40,511 INFO [decode.py:456] batch 78/?, cuts processed until now is 11843
- 2022-11-19 11:59:42,240 INFO [decode.py:456] batch 80/?, cuts processed until now is 11956
- 2022-11-19 11:59:43,775 INFO [decode.py:456] batch 82/?, cuts processed until now is 12467
- 2022-11-19 11:59:48,114 INFO [decode.py:456] batch 84/?, cuts processed until now is 12586
- 2022-11-19 11:59:50,291 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
- 2022-11-19 11:59:50,494 INFO [utils.py:531] [dev_ihm-beam_4_max_contexts_4_max_states_8] %WER 19.46% [18471 / 94940, 2582 ins, 4027 del, 11862 sub ]
- 2022-11-19 11:59:51,358 INFO [utils.py:531] [dev_ihm-beam_4_max_contexts_4_max_states_8] %WER 12.39% [45842 / 369873, 10341 ins, 18060 del, 17441 sub ]
- 2022-11-19 11:59:52,357 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
- 2022-11-19 11:59:52,358 INFO [decode.py:518]
- For dev_ihm, WER/CER of different settings are:
- beam_4_max_contexts_4_max_states_8 19.46 12.39 best for dev_ihm
-
- 2022-11-19 11:59:52,363 INFO [decode.py:664] Decoding test_ihm
- 2022-11-19 11:59:55,729 INFO [decode.py:456] batch 0/?, cuts processed until now is 69
- 2022-11-19 11:59:58,708 INFO [decode.py:456] batch 2/?, cuts processed until now is 555
- 2022-11-19 12:00:01,681 INFO [decode.py:456] batch 4/?, cuts processed until now is 703
- 2022-11-19 12:00:04,573 INFO [decode.py:456] batch 6/?, cuts processed until now is 830
- 2022-11-19 12:00:07,344 INFO [decode.py:456] batch 8/?, cuts processed until now is 987
- 2022-11-19 12:00:11,335 INFO [decode.py:456] batch 10/?, cuts processed until now is 1095
- 2022-11-19 12:00:14,147 INFO [decode.py:456] batch 12/?, cuts processed until now is 1267
- 2022-11-19 12:00:16,507 INFO [decode.py:456] batch 14/?, cuts processed until now is 1532
- 2022-11-19 12:00:18,652 INFO [decode.py:456] batch 16/?, cuts processed until now is 1931
- 2022-11-19 12:00:22,624 INFO [decode.py:456] batch 18/?, cuts processed until now is 2055
- 2022-11-19 12:00:27,410 INFO [decode.py:456] batch 20/?, cuts processed until now is 2124
- 2022-11-19 12:00:29,878 INFO [decode.py:456] batch 22/?, cuts processed until now is 2388
- 2022-11-19 12:00:32,061 INFO [decode.py:456] batch 24/?, cuts processed until now is 2856
- 2022-11-19 12:00:35,040 INFO [decode.py:456] batch 26/?, cuts processed until now is 2996
- 2022-11-19 12:00:37,754 INFO [decode.py:456] batch 28/?, cuts processed until now is 3278
- 2022-11-19 12:00:40,482 INFO [decode.py:456] batch 30/?, cuts processed until now is 3430
- 2022-11-19 12:00:44,585 INFO [decode.py:456] batch 32/?, cuts processed until now is 3535
- 2022-11-19 12:00:47,584 INFO [decode.py:456] batch 34/?, cuts processed until now is 3706
- 2022-11-19 12:00:50,600 INFO [decode.py:456] batch 36/?, cuts processed until now is 3822
- 2022-11-19 12:00:53,463 INFO [decode.py:456] batch 38/?, cuts processed until now is 3969
- 2022-11-19 12:00:56,851 INFO [decode.py:456] batch 40/?, cuts processed until now is 4411
- 2022-11-19 12:00:59,476 INFO [decode.py:456] batch 42/?, cuts processed until now is 5058
- 2022-11-19 12:01:01,973 INFO [decode.py:456] batch 44/?, cuts processed until now is 5544
- 2022-11-19 12:01:04,877 INFO [decode.py:456] batch 46/?, cuts processed until now is 5685
- 2022-11-19 12:01:07,431 INFO [decode.py:456] batch 48/?, cuts processed until now is 5890
- 2022-11-19 12:01:10,191 INFO [decode.py:456] batch 50/?, cuts processed until now is 6372
- 2022-11-19 12:01:12,556 INFO [decode.py:456] batch 52/?, cuts processed until now is 6706
- 2022-11-19 12:01:14,692 INFO [decode.py:456] batch 54/?, cuts processed until now is 7105
- 2022-11-19 12:01:18,461 INFO [decode.py:456] batch 56/?, cuts processed until now is 7290
- 2022-11-19 12:01:20,652 INFO [decode.py:456] batch 58/?, cuts processed until now is 8116
- 2022-11-19 12:01:24,751 INFO [decode.py:456] batch 60/?, cuts processed until now is 8258
- 2022-11-19 12:01:26,847 INFO [decode.py:456] batch 62/?, cuts processed until now is 8794
- 2022-11-19 12:01:29,089 INFO [decode.py:456] batch 64/?, cuts processed until now is 9330
- 2022-11-19 12:01:32,761 INFO [decode.py:456] batch 66/?, cuts processed until now is 9476
- 2022-11-19 12:01:35,866 INFO [decode.py:456] batch 68/?, cuts processed until now is 9921
- 2022-11-19 12:01:37,922 INFO [decode.py:456] batch 70/?, cuts processed until now is 10251
- 2022-11-19 12:01:41,636 INFO [decode.py:456] batch 72/?, cuts processed until now is 10679
- 2022-11-19 12:01:44,398 INFO [decode.py:456] batch 74/?, cuts processed until now is 10794
- 2022-11-19 12:01:46,138 INFO [decode.py:456] batch 76/?, cuts processed until now is 11039
- 2022-11-19 12:01:47,209 INFO [decode.py:456] batch 78/?, cuts processed until now is 11155
- 2022-11-19 12:01:48,861 INFO [decode.py:456] batch 80/?, cuts processed until now is 11600
- 2022-11-19 12:01:51,326 INFO [decode.py:456] batch 82/?, cuts processed until now is 12041
- 2022-11-19 12:01:52,764 INFO [decode.py:456] batch 84/?, cuts processed until now is 12110
- 2022-11-19 12:01:53,020 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
- 2022-11-19 12:01:53,199 INFO [utils.py:531] [test_ihm-beam_4_max_contexts_4_max_states_8] %WER 18.35% [16449 / 89659, 1901 ins, 4044 del, 10504 sub ]
- 2022-11-19 12:01:53,873 INFO [utils.py:531] [test_ihm-beam_4_max_contexts_4_max_states_8] %WER 11.50% [40727 / 354205, 8552 ins, 17081 del, 15094 sub ]
- 2022-11-19 12:01:54,902 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
- 2022-11-19 12:01:54,903 INFO [decode.py:518]
- For test_ihm, WER/CER of different settings are:
- beam_4_max_contexts_4_max_states_8 18.35 11.5 best for test_ihm
-
- 2022-11-19 12:01:54,908 INFO [decode.py:664] Decoding dev_sdm
- 2022-11-19 12:01:58,144 INFO [decode.py:456] batch 0/?, cuts processed until now is 71
- 2022-11-19 12:02:00,702 INFO [decode.py:456] batch 2/?, cuts processed until now is 535
- 2022-11-19 12:02:03,476 INFO [decode.py:456] batch 4/?, cuts processed until now is 686
- 2022-11-19 12:02:06,323 INFO [decode.py:456] batch 6/?, cuts processed until now is 819
- 2022-11-19 12:02:07,881 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.0469, 1.5212, 1.9862, 1.1136, 1.9181, 1.8707, 1.4307, 1.7934],
- device='cuda:0'), covar=tensor([0.0691, 0.0744, 0.0420, 0.1360, 0.1466, 0.0986, 0.1064, 0.0307],
- device='cuda:0'), in_proj_covar=tensor([0.0013, 0.0021, 0.0014, 0.0018, 0.0014, 0.0013, 0.0019, 0.0013],
- device='cuda:0'), out_proj_covar=tensor([7.0063e-05, 9.8332e-05, 7.2372e-05, 8.8655e-05, 7.6633e-05, 6.9844e-05,
- 9.3643e-05, 6.9182e-05], device='cuda:0')
- 2022-11-19 12:02:09,041 INFO [decode.py:456] batch 8/?, cuts processed until now is 980
- 2022-11-19 12:02:13,345 INFO [decode.py:456] batch 10/?, cuts processed until now is 1083
- 2022-11-19 12:02:16,238 INFO [decode.py:456] batch 12/?, cuts processed until now is 1257
- 2022-11-19 12:02:18,758 INFO [decode.py:456] batch 14/?, cuts processed until now is 1513
- 2022-11-19 12:02:20,166 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.0954, 2.5017, 2.0002, 1.5301, 2.4209, 2.7677, 2.6900, 2.8832],
- device='cuda:0'), covar=tensor([0.1359, 0.1360, 0.1853, 0.2378, 0.0760, 0.1000, 0.0585, 0.0894],
- device='cuda:0'), in_proj_covar=tensor([0.0155, 0.0168, 0.0150, 0.0172, 0.0162, 0.0181, 0.0147, 0.0167],
- device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004],
- device='cuda:0')
- 2022-11-19 12:02:21,030 INFO [decode.py:456] batch 16/?, cuts processed until now is 1892
- 2022-11-19 12:02:24,579 INFO [decode.py:456] batch 18/?, cuts processed until now is 2020
- 2022-11-19 12:02:28,326 INFO [decode.py:456] batch 20/?, cuts processed until now is 2106
- 2022-11-19 12:02:30,724 INFO [decode.py:456] batch 22/?, cuts processed until now is 2362
- 2022-11-19 12:02:32,924 INFO [decode.py:456] batch 24/?, cuts processed until now is 2807
- 2022-11-19 12:02:35,835 INFO [decode.py:456] batch 26/?, cuts processed until now is 2952
- 2022-11-19 12:02:38,471 INFO [decode.py:456] batch 28/?, cuts processed until now is 3226
- 2022-11-19 12:02:41,104 INFO [decode.py:456] batch 30/?, cuts processed until now is 3381
- 2022-11-19 12:02:44,479 INFO [decode.py:456] batch 32/?, cuts processed until now is 3499
- 2022-11-19 12:02:47,316 INFO [decode.py:456] batch 34/?, cuts processed until now is 3673
- 2022-11-19 12:02:50,301 INFO [decode.py:456] batch 36/?, cuts processed until now is 3797
- 2022-11-19 12:02:53,132 INFO [decode.py:456] batch 38/?, cuts processed until now is 3948
- 2022-11-19 12:02:55,519 INFO [decode.py:456] batch 40/?, cuts processed until now is 4722
- 2022-11-19 12:02:58,064 INFO [decode.py:456] batch 42/?, cuts processed until now is 5007
- 2022-11-19 12:03:01,412 INFO [decode.py:456] batch 44/?, cuts processed until now is 5112
- 2022-11-19 12:03:04,820 INFO [decode.py:456] batch 46/?, cuts processed until now is 5219
- 2022-11-19 12:03:08,094 INFO [decode.py:456] batch 48/?, cuts processed until now is 5636
- 2022-11-19 12:03:10,741 INFO [decode.py:456] batch 50/?, cuts processed until now is 5842
- 2022-11-19 12:03:12,908 INFO [decode.py:456] batch 52/?, cuts processed until now is 6222
- 2022-11-19 12:03:15,175 INFO [decode.py:456] batch 54/?, cuts processed until now is 6766
- 2022-11-19 12:03:17,693 INFO [decode.py:456] batch 56/?, cuts processed until now is 7072
- 2022-11-19 12:03:19,987 INFO [decode.py:456] batch 58/?, cuts processed until now is 7518
- 2022-11-19 12:03:22,207 INFO [decode.py:456] batch 60/?, cuts processed until now is 8027
- 2022-11-19 12:03:24,463 INFO [decode.py:456] batch 62/?, cuts processed until now is 8571
- 2022-11-19 12:03:26,985 INFO [decode.py:456] batch 64/?, cuts processed until now is 9115
- 2022-11-19 12:03:29,411 INFO [decode.py:456] batch 66/?, cuts processed until now is 9395
- 2022-11-19 12:03:31,595 INFO [decode.py:456] batch 68/?, cuts processed until now is 9904
- 2022-11-19 12:03:33,692 INFO [decode.py:456] batch 70/?, cuts processed until now is 10413
- 2022-11-19 12:03:35,750 INFO [decode.py:456] batch 72/?, cuts processed until now is 11190
- 2022-11-19 12:03:37,764 INFO [decode.py:456] batch 74/?, cuts processed until now is 11589
- 2022-11-19 12:03:39,450 INFO [decode.py:456] batch 76/?, cuts processed until now is 11699
- 2022-11-19 12:03:41,198 INFO [decode.py:456] batch 78/?, cuts processed until now is 11799
- 2022-11-19 12:03:42,663 INFO [decode.py:456] batch 80/?, cuts processed until now is 11889
- 2022-11-19 12:03:44,217 INFO [decode.py:456] batch 82/?, cuts processed until now is 12461
- 2022-11-19 12:03:45,966 INFO [decode.py:456] batch 84/?, cuts processed until now is 12568
- 2022-11-19 12:03:48,945 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.8918, 4.0775, 4.0075, 2.0122, 3.9736, 4.2795, 4.1332, 4.6819],
- device='cuda:0'), covar=tensor([0.1667, 0.1013, 0.0762, 0.2696, 0.0253, 0.0403, 0.0447, 0.0409],
- device='cuda:0'), in_proj_covar=tensor([0.0155, 0.0168, 0.0150, 0.0172, 0.0162, 0.0181, 0.0147, 0.0167],
- device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004],
- device='cuda:0')
- 2022-11-19 12:03:50,431 INFO [decode.py:456] batch 86/?, cuts processed until now is 12601
- 2022-11-19 12:03:50,666 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
- 2022-11-19 12:03:50,920 INFO [utils.py:531] [dev_sdm-beam_4_max_contexts_4_max_states_8] %WER 31.14% [29566 / 94940, 3772 ins, 8463 del, 17331 sub ]
- 2022-11-19 12:03:51,646 INFO [utils.py:531] [dev_sdm-beam_4_max_contexts_4_max_states_8] %WER 22.76% [84184 / 369873, 17170 ins, 36158 del, 30856 sub ]
- 2022-11-19 12:03:52,593 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
- 2022-11-19 12:03:52,594 INFO [decode.py:518]
- For dev_sdm, WER/CER of different settings are:
- beam_4_max_contexts_4_max_states_8 31.14 22.76 best for dev_sdm
-
- 2022-11-19 12:03:52,598 INFO [decode.py:664] Decoding test_sdm
- 2022-11-19 12:03:56,370 INFO [decode.py:456] batch 0/?, cuts processed until now is 69
- 2022-11-19 12:03:59,640 INFO [decode.py:456] batch 2/?, cuts processed until now is 555
- 2022-11-19 12:04:02,579 INFO [decode.py:456] batch 4/?, cuts processed until now is 703
- 2022-11-19 12:04:05,520 INFO [decode.py:456] batch 6/?, cuts processed until now is 831
- 2022-11-19 12:04:08,460 INFO [decode.py:456] batch 8/?, cuts processed until now is 988
- 2022-11-19 12:04:12,348 INFO [decode.py:456] batch 10/?, cuts processed until now is 1096
- 2022-11-19 12:04:15,172 INFO [decode.py:456] batch 12/?, cuts processed until now is 1268
- 2022-11-19 12:04:17,532 INFO [decode.py:456] batch 14/?, cuts processed until now is 1533
- 2022-11-19 12:04:19,744 INFO [decode.py:456] batch 16/?, cuts processed until now is 1932
- 2022-11-19 12:04:22,636 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.4038, 4.1485, 3.0703, 4.0386, 3.3197, 3.0083, 2.3652, 3.5807],
- device='cuda:0'), covar=tensor([0.1479, 0.0211, 0.0951, 0.0264, 0.0733, 0.0969, 0.2025, 0.0382],
- device='cuda:0'), in_proj_covar=tensor([0.0147, 0.0124, 0.0146, 0.0129, 0.0160, 0.0156, 0.0151, 0.0141],
- device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0004, 0.0003],
- device='cuda:0')
- 2022-11-19 12:04:23,701 INFO [decode.py:456] batch 18/?, cuts processed until now is 2057
- 2022-11-19 12:04:28,284 INFO [decode.py:456] batch 20/?, cuts processed until now is 2126
- 2022-11-19 12:04:30,610 INFO [decode.py:456] batch 22/?, cuts processed until now is 2390
- 2022-11-19 12:04:32,781 INFO [decode.py:456] batch 24/?, cuts processed until now is 2858
- 2022-11-19 12:04:35,825 INFO [decode.py:456] batch 26/?, cuts processed until now is 2998
- 2022-11-19 12:04:38,591 INFO [decode.py:456] batch 28/?, cuts processed until now is 3280
- 2022-11-19 12:04:41,286 INFO [decode.py:456] batch 30/?, cuts processed until now is 3432
- 2022-11-19 12:04:45,347 INFO [decode.py:456] batch 32/?, cuts processed until now is 3537
- 2022-11-19 12:04:48,383 INFO [decode.py:456] batch 34/?, cuts processed until now is 3709
- 2022-11-19 12:04:51,412 INFO [decode.py:456] batch 36/?, cuts processed until now is 3825
- 2022-11-19 12:04:54,232 INFO [decode.py:456] batch 38/?, cuts processed until now is 3972
- 2022-11-19 12:04:57,615 INFO [decode.py:456] batch 40/?, cuts processed until now is 4410
- 2022-11-19 12:04:59,811 INFO [decode.py:456] batch 42/?, cuts processed until now is 5060
- 2022-11-19 12:05:02,276 INFO [decode.py:456] batch 44/?, cuts processed until now is 5546
- 2022-11-19 12:05:05,121 INFO [decode.py:456] batch 46/?, cuts processed until now is 5687
- 2022-11-19 12:05:07,666 INFO [decode.py:456] batch 48/?, cuts processed until now is 5893
- 2022-11-19 12:05:10,285 INFO [decode.py:456] batch 50/?, cuts processed until now is 6379
- 2022-11-19 12:05:12,640 INFO [decode.py:456] batch 52/?, cuts processed until now is 6713
- 2022-11-19 12:05:14,762 INFO [decode.py:456] batch 54/?, cuts processed until now is 7112
- 2022-11-19 12:05:18,809 INFO [decode.py:456] batch 56/?, cuts processed until now is 7298
- 2022-11-19 12:05:21,229 INFO [decode.py:456] batch 58/?, cuts processed until now is 8130
- 2022-11-19 12:05:25,620 INFO [decode.py:456] batch 60/?, cuts processed until now is 8273
- 2022-11-19 12:05:27,854 INFO [decode.py:456] batch 62/?, cuts processed until now is 8813
- 2022-11-19 12:05:30,171 INFO [decode.py:456] batch 64/?, cuts processed until now is 9353
- 2022-11-19 12:05:34,198 INFO [decode.py:456] batch 66/?, cuts processed until now is 9500
- 2022-11-19 12:05:37,668 INFO [decode.py:456] batch 68/?, cuts processed until now is 9944
- 2022-11-19 12:05:39,962 INFO [decode.py:456] batch 70/?, cuts processed until now is 10274
- 2022-11-19 12:05:40,109 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.9114, 3.1058, 3.0254, 3.1113, 3.1652, 3.0739, 2.8505, 2.9252],
- device='cuda:0'), covar=tensor([0.0274, 0.0463, 0.0757, 0.0313, 0.0311, 0.0245, 0.0605, 0.0398],
- device='cuda:0'), in_proj_covar=tensor([0.0119, 0.0165, 0.0262, 0.0160, 0.0205, 0.0160, 0.0175, 0.0162],
- device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002],
- device='cuda:0')
- 2022-11-19 12:05:43,161 INFO [decode.py:456] batch 72/?, cuts processed until now is 10711
- 2022-11-19 12:05:45,488 INFO [decode.py:456] batch 74/?, cuts processed until now is 10820
- 2022-11-19 12:05:47,140 INFO [decode.py:456] batch 76/?, cuts processed until now is 11076
- 2022-11-19 12:05:48,230 INFO [decode.py:456] batch 78/?, cuts processed until now is 11209
- 2022-11-19 12:05:49,877 INFO [decode.py:456] batch 80/?, cuts processed until now is 11651
- 2022-11-19 12:05:50,055 INFO [zipformer.py:1411] attn_weights_entropy = tensor([1.2445, 1.7691, 1.5410, 1.6569, 1.8904, 2.3912, 1.9761, 1.8214],
- device='cuda:0'), covar=tensor([0.2463, 0.0429, 0.3108, 0.2256, 0.1364, 0.0250, 0.1293, 0.1741],
- device='cuda:0'), in_proj_covar=tensor([0.0089, 0.0080, 0.0079, 0.0089, 0.0065, 0.0055, 0.0065, 0.0077],
- device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002],
- device='cuda:0')
- 2022-11-19 12:05:52,270 INFO [decode.py:456] batch 82/?, cuts processed until now is 12070
- 2022-11-19 12:05:53,724 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
- 2022-11-19 12:05:53,875 INFO [utils.py:531] [test_sdm-beam_4_max_contexts_4_max_states_8] %WER 32.52% [29153 / 89659, 3384 ins, 9146 del, 16623 sub ]
- 2022-11-19 12:05:54,687 INFO [utils.py:531] [test_sdm-beam_4_max_contexts_4_max_states_8] %WER 23.78% [84225 / 354205, 16274 ins, 38001 del, 29950 sub ]
- 2022-11-19 12:05:55,678 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
- 2022-11-19 12:05:55,680 INFO [decode.py:518]
- For test_sdm, WER/CER of different settings are:
- beam_4_max_contexts_4_max_states_8 32.52 23.78 best for test_sdm
-
- 2022-11-19 12:05:55,684 INFO [decode.py:664] Decoding dev_gss
- 2022-11-19 12:05:58,735 INFO [decode.py:456] batch 0/?, cuts processed until now is 71
- 2022-11-19 12:06:01,525 INFO [decode.py:456] batch 2/?, cuts processed until now is 535
- 2022-11-19 12:06:04,466 INFO [decode.py:456] batch 4/?, cuts processed until now is 686
- 2022-11-19 12:06:07,447 INFO [decode.py:456] batch 6/?, cuts processed until now is 819
- 2022-11-19 12:06:10,237 INFO [decode.py:456] batch 8/?, cuts processed until now is 980
- 2022-11-19 12:06:15,672 INFO [decode.py:456] batch 10/?, cuts processed until now is 1083
- 2022-11-19 12:06:18,556 INFO [decode.py:456] batch 12/?, cuts processed until now is 1257
- 2022-11-19 12:06:21,107 INFO [decode.py:456] batch 14/?, cuts processed until now is 1513
- 2022-11-19 12:06:23,284 INFO [decode.py:456] batch 16/?, cuts processed until now is 1892
- 2022-11-19 12:06:27,032 INFO [decode.py:456] batch 18/?, cuts processed until now is 2020
- 2022-11-19 12:06:30,963 INFO [decode.py:456] batch 20/?, cuts processed until now is 2106
- 2022-11-19 12:06:33,370 INFO [decode.py:456] batch 22/?, cuts processed until now is 2362
- 2022-11-19 12:06:35,629 INFO [decode.py:456] batch 24/?, cuts processed until now is 2807
- 2022-11-19 12:06:38,773 INFO [decode.py:456] batch 26/?, cuts processed until now is 2952
- 2022-11-19 12:06:41,475 INFO [decode.py:456] batch 28/?, cuts processed until now is 3226
- 2022-11-19 12:06:44,385 INFO [decode.py:456] batch 30/?, cuts processed until now is 3381
- 2022-11-19 12:06:47,883 INFO [decode.py:456] batch 32/?, cuts processed until now is 3499
- 2022-11-19 12:06:50,916 INFO [decode.py:456] batch 34/?, cuts processed until now is 3673
- 2022-11-19 12:06:53,965 INFO [decode.py:456] batch 36/?, cuts processed until now is 3797
- 2022-11-19 12:06:56,868 INFO [decode.py:456] batch 38/?, cuts processed until now is 3948
- 2022-11-19 12:06:59,352 INFO [decode.py:456] batch 40/?, cuts processed until now is 4722
- 2022-11-19 12:07:02,038 INFO [decode.py:456] batch 42/?, cuts processed until now is 5007
- 2022-11-19 12:07:05,489 INFO [decode.py:456] batch 44/?, cuts processed until now is 5112
- 2022-11-19 12:07:08,880 INFO [decode.py:456] batch 46/?, cuts processed until now is 5219
- 2022-11-19 12:07:12,726 INFO [decode.py:456] batch 48/?, cuts processed until now is 5636
- 2022-11-19 12:07:15,554 INFO [decode.py:456] batch 50/?, cuts processed until now is 5842
- 2022-11-19 12:07:17,780 INFO [decode.py:456] batch 52/?, cuts processed until now is 6222
- 2022-11-19 12:07:17,973 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.4400, 3.9245, 2.9125, 1.9497, 3.5919, 1.6291, 3.2518, 2.3731],
- device='cuda:0'), covar=tensor([0.1357, 0.0146, 0.0875, 0.2236, 0.0221, 0.1806, 0.0374, 0.1393],
- device='cuda:0'), in_proj_covar=tensor([0.0113, 0.0095, 0.0106, 0.0107, 0.0092, 0.0114, 0.0091, 0.0105],
- device='cuda:0'), out_proj_covar=tensor([0.0005, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004],
- device='cuda:0')
- 2022-11-19 12:07:20,129 INFO [decode.py:456] batch 54/?, cuts processed until now is 6766
- 2022-11-19 12:07:22,726 INFO [decode.py:456] batch 56/?, cuts processed until now is 7072
- 2022-11-19 12:07:25,257 INFO [decode.py:456] batch 58/?, cuts processed until now is 7518
- 2022-11-19 12:07:27,685 INFO [decode.py:456] batch 60/?, cuts processed until now is 8027
- 2022-11-19 12:07:29,981 INFO [decode.py:456] batch 62/?, cuts processed until now is 8571
- 2022-11-19 12:07:30,119 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.9705, 3.1991, 3.1004, 3.2066, 3.2365, 3.1405, 2.8701, 3.0248],
- device='cuda:0'), covar=tensor([0.0274, 0.0332, 0.0728, 0.0303, 0.0320, 0.0258, 0.0755, 0.0400],
- device='cuda:0'), in_proj_covar=tensor([0.0119, 0.0165, 0.0262, 0.0160, 0.0205, 0.0160, 0.0175, 0.0162],
- device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002],
- device='cuda:0')
- 2022-11-19 12:07:32,538 INFO [decode.py:456] batch 64/?, cuts processed until now is 9115
- 2022-11-19 12:07:35,105 INFO [decode.py:456] batch 66/?, cuts processed until now is 9395
- 2022-11-19 12:07:37,426 INFO [decode.py:456] batch 68/?, cuts processed until now is 9904
- 2022-11-19 12:07:39,791 INFO [decode.py:456] batch 70/?, cuts processed until now is 10413
- 2022-11-19 12:07:42,275 INFO [decode.py:456] batch 72/?, cuts processed until now is 11190
- 2022-11-19 12:07:44,769 INFO [decode.py:456] batch 74/?, cuts processed until now is 11589
- 2022-11-19 12:07:46,693 INFO [decode.py:456] batch 76/?, cuts processed until now is 11699
- 2022-11-19 12:07:48,652 INFO [decode.py:456] batch 78/?, cuts processed until now is 11799
- 2022-11-19 12:07:50,346 INFO [decode.py:456] batch 80/?, cuts processed until now is 11889
- 2022-11-19 12:07:52,025 INFO [decode.py:456] batch 82/?, cuts processed until now is 12461
- 2022-11-19 12:07:54,095 INFO [decode.py:456] batch 84/?, cuts processed until now is 12568
- 2022-11-19 12:07:59,233 INFO [decode.py:456] batch 86/?, cuts processed until now is 12601
- 2022-11-19 12:07:59,490 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
- 2022-11-19 12:07:59,799 INFO [utils.py:531] [dev_gss-beam_4_max_contexts_4_max_states_8] %WER 22.45% [21318 / 94940, 2659 ins, 4967 del, 13692 sub ]
- 2022-11-19 12:08:00,685 INFO [utils.py:531] [dev_gss-beam_4_max_contexts_4_max_states_8] %WER 14.81% [54769 / 369873, 11328 ins, 21762 del, 21679 sub ]
- 2022-11-19 12:08:01,838 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
- 2022-11-19 12:08:01,840 INFO [decode.py:518]
- For dev_gss, WER/CER of different settings are:
- beam_4_max_contexts_4_max_states_8 22.45 14.81 best for dev_gss
-
- 2022-11-19 12:08:01,848 INFO [decode.py:664] Decoding test_gss
- 2022-11-19 12:08:04,914 INFO [decode.py:456] batch 0/?, cuts processed until now is 69
- 2022-11-19 12:08:07,716 INFO [decode.py:456] batch 2/?, cuts processed until now is 555
- 2022-11-19 12:08:10,492 INFO [decode.py:456] batch 4/?, cuts processed until now is 703
- 2022-11-19 12:08:13,414 INFO [decode.py:456] batch 6/?, cuts processed until now is 831
- 2022-11-19 12:08:16,011 INFO [decode.py:456] batch 8/?, cuts processed until now is 988
331
- 2022-11-19 12:08:19,693 INFO [decode.py:456] batch 10/?, cuts processed until now is 1096
332
- 2022-11-19 12:08:22,574 INFO [decode.py:456] batch 12/?, cuts processed until now is 1268
333
- 2022-11-19 12:08:24,976 INFO [decode.py:456] batch 14/?, cuts processed until now is 1533
334
- 2022-11-19 12:08:27,615 INFO [decode.py:456] batch 16/?, cuts processed until now is 1932
335
- 2022-11-19 12:08:31,972 INFO [decode.py:456] batch 18/?, cuts processed until now is 2057
336
- 2022-11-19 12:08:36,693 INFO [decode.py:456] batch 20/?, cuts processed until now is 2126
337
- 2022-11-19 12:08:39,094 INFO [decode.py:456] batch 22/?, cuts processed until now is 2390
338
- 2022-11-19 12:08:41,204 INFO [decode.py:456] batch 24/?, cuts processed until now is 2858
339
- 2022-11-19 12:08:44,256 INFO [decode.py:456] batch 26/?, cuts processed until now is 2998
340
- 2022-11-19 12:08:46,938 INFO [decode.py:456] batch 28/?, cuts processed until now is 3280
341
- 2022-11-19 12:08:49,719 INFO [decode.py:456] batch 30/?, cuts processed until now is 3432
342
- 2022-11-19 12:08:54,026 INFO [decode.py:456] batch 32/?, cuts processed until now is 3537
343
- 2022-11-19 12:08:57,029 INFO [decode.py:456] batch 34/?, cuts processed until now is 3709
344
- 2022-11-19 12:09:00,191 INFO [decode.py:456] batch 36/?, cuts processed until now is 3825
345
- 2022-11-19 12:09:03,137 INFO [decode.py:456] batch 38/?, cuts processed until now is 3972
346
- 2022-11-19 12:09:06,356 INFO [decode.py:456] batch 40/?, cuts processed until now is 4410
347
- 2022-11-19 12:09:08,836 INFO [decode.py:456] batch 42/?, cuts processed until now is 5060
348
- 2022-11-19 12:09:11,267 INFO [decode.py:456] batch 44/?, cuts processed until now is 5546
349
- 2022-11-19 12:09:14,175 INFO [decode.py:456] batch 46/?, cuts processed until now is 5687
350
- 2022-11-19 12:09:16,811 INFO [decode.py:456] batch 48/?, cuts processed until now is 5893
351
- 2022-11-19 12:09:19,425 INFO [decode.py:456] batch 50/?, cuts processed until now is 6379
352
- 2022-11-19 12:09:20,784 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.5938, 4.3109, 3.2093, 3.9648, 3.2693, 3.0485, 2.5234, 3.5965],
353
- device='cuda:0'), covar=tensor([0.1265, 0.0167, 0.0893, 0.0254, 0.0792, 0.0922, 0.1719, 0.0275],
354
- device='cuda:0'), in_proj_covar=tensor([0.0147, 0.0124, 0.0146, 0.0129, 0.0160, 0.0156, 0.0151, 0.0141],
355
- device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0004, 0.0003],
356
- device='cuda:0')
357
- 2022-11-19 12:09:21,839 INFO [decode.py:456] batch 52/?, cuts processed until now is 6713
358
- 2022-11-19 12:09:24,015 INFO [decode.py:456] batch 54/?, cuts processed until now is 7112
359
- 2022-11-19 12:09:28,198 INFO [decode.py:456] batch 56/?, cuts processed until now is 7298
360
- 2022-11-19 12:09:30,434 INFO [decode.py:456] batch 58/?, cuts processed until now is 8130
361
- 2022-11-19 12:09:34,783 INFO [decode.py:456] batch 60/?, cuts processed until now is 8273
362
- 2022-11-19 12:09:37,045 INFO [decode.py:456] batch 62/?, cuts processed until now is 8813
363
- 2022-11-19 12:09:39,493 INFO [decode.py:456] batch 64/?, cuts processed until now is 9353
364
- 2022-11-19 12:09:43,457 INFO [decode.py:456] batch 66/?, cuts processed until now is 9500
365
- 2022-11-19 12:09:46,848 INFO [decode.py:456] batch 68/?, cuts processed until now is 9944
366
- 2022-11-19 12:09:49,082 INFO [decode.py:456] batch 70/?, cuts processed until now is 10274
367
- 2022-11-19 12:09:52,092 INFO [decode.py:456] batch 72/?, cuts processed until now is 10711
368
- 2022-11-19 12:09:54,382 INFO [decode.py:456] batch 74/?, cuts processed until now is 10820
369
- 2022-11-19 12:09:56,179 INFO [decode.py:456] batch 76/?, cuts processed until now is 11076
370
- 2022-11-19 12:09:57,351 INFO [decode.py:456] batch 78/?, cuts processed until now is 11209
371
- 2022-11-19 12:09:59,115 INFO [decode.py:456] batch 80/?, cuts processed until now is 11651
372
- 2022-11-19 12:10:01,667 INFO [decode.py:456] batch 82/?, cuts processed until now is 12070
373
- 2022-11-19 12:10:03,209 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
374
- 2022-11-19 12:10:03,388 INFO [utils.py:531] [test_gss-beam_4_max_contexts_4_max_states_8] %WER 23.38% [20960 / 89659, 2089 ins, 5792 del, 13079 sub ]
375
- 2022-11-19 12:10:04,277 INFO [utils.py:531] [test_gss-beam_4_max_contexts_4_max_states_8] %WER 15.71% [55662 / 354205, 10250 ins, 24389 del, 21023 sub ]
376
- 2022-11-19 12:10:05,352 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
377
- 2022-11-19 12:10:05,353 INFO [decode.py:518]
378
- For test_gss, WER/CER of different settings are:
379
- beam_4_max_contexts_4_max_states_8 23.38 15.71 best for test_gss
380
-
381
- 2022-11-19 12:10:05,358 INFO [decode.py:681] Done!
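The `%WER` lines in the log above are plain error-rate arithmetic: errors = insertions + deletions + substitutions, divided by the reference length (words for WER, characters for CER). A minimal sketch, assuming nothing beyond the counts logged for dev_gss above (the function name is illustrative, not icefall's API):

```python
def error_rate(ins: int, dels: int, subs: int, ref_len: int) -> float:
    """Error rate as a percentage: (ins + del + sub) / reference length."""
    return round(100.0 * (ins + dels + subs) / ref_len, 2)

# Word-level dev_gss counts from the log: 2659 ins, 4967 del, 13692 sub, 94940 ref words.
wer = error_rate(2659, 4967, 13692, 94940)   # 22.45, matching the logged %WER
# Character-level counts: 11328 ins, 21762 del, 21679 sub, 369873 ref characters.
cer = error_rate(11328, 21762, 21679, 369873)  # 14.81, matching the logged CER
```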
log/fast_beam_search/{recogs-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt β†’ recogs-dev_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/{recogs-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt β†’ recogs-dev_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/{recogs-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt β†’ recogs-dev_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/{recogs-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt β†’ recogs-test_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/{recogs-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt β†’ recogs-test_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/{recogs-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt β†’ recogs-test_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/wer-summary-dev_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt ADDED
@@ -0,0 +1,2 @@
+ settings	WER	CER
+ beam_4_max_contexts_4_max_states_8	22.21	14.58
log/fast_beam_search/wer-summary-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt DELETED
@@ -1,2 +0,0 @@
- settings	WER	CER
- beam_4_max_contexts_4_max_states_8	22.45	14.81

log/fast_beam_search/wer-summary-dev_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt ADDED
@@ -0,0 +1,2 @@
+ settings	WER	CER
+ beam_4_max_contexts_4_max_states_8	19.44	12.3
log/fast_beam_search/wer-summary-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt DELETED
@@ -1,2 +0,0 @@
- settings	WER	CER
- beam_4_max_contexts_4_max_states_8	19.46	12.39

log/fast_beam_search/wer-summary-dev_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt ADDED
@@ -0,0 +1,2 @@
+ settings	WER	CER
+ beam_4_max_contexts_4_max_states_8	31.11	22.6
log/fast_beam_search/wer-summary-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt DELETED
@@ -1,2 +0,0 @@
- settings	WER	CER
- beam_4_max_contexts_4_max_states_8	31.14	22.76

log/fast_beam_search/wer-summary-test_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt ADDED
@@ -0,0 +1,2 @@
+ settings	WER	CER
+ beam_4_max_contexts_4_max_states_8	22.83	15.27
log/fast_beam_search/wer-summary-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt DELETED
@@ -1,2 +0,0 @@
- settings	WER	CER
- beam_4_max_contexts_4_max_states_8	23.38	15.71

log/fast_beam_search/wer-summary-test_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt ADDED
@@ -0,0 +1,2 @@
+ settings	WER	CER
+ beam_4_max_contexts_4_max_states_8	18.04	11.3
log/fast_beam_search/wer-summary-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt DELETED
@@ -1,2 +0,0 @@
- settings	WER	CER
- beam_4_max_contexts_4_max_states_8	18.35	11.5

log/fast_beam_search/wer-summary-test_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt ADDED
@@ -0,0 +1,2 @@
+ settings	WER	CER
+ beam_4_max_contexts_4_max_states_8	32.1	23.5
log/fast_beam_search/wer-summary-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt DELETED
@@ -1,2 +0,0 @@
- settings	WER	CER
- beam_4_max_contexts_4_max_states_8	32.52	23.78

log/fast_beam_search/{wers-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt β†’ wers-dev_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/{wers-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt β†’ wers-dev_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/{wers-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt β†’ wers-dev_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/{wers-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt β†’ wers-test_gss-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/{wers-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt β†’ wers-test_ihm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/{wers-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt β†’ wers-test_sdm-beam_4_max_contexts_4_max_states_8-epoch-14-avg-8-beam-4-max-contexts-4-max-states-8.txt} RENAMED
The diff for this file is too large to render. See raw diff
 
log/greedy_search/cers-dev_gss-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/greedy_search/cers-dev_ihm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/greedy_search/cers-dev_sdm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/greedy_search/cers-test_gss-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/greedy_search/cers-test_ihm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/greedy_search/cers-test_sdm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/greedy_search/log-decode-epoch-14-avg-8-context-2-max-sym-per-frame-1-2022-11-21-08-54-32 ADDED
@@ -0,0 +1,109 @@
+ 2022-11-21 08:54:32,239 INFO [decode.py:574] Decoding started
+ 2022-11-21 08:54:32,240 INFO [decode.py:580] Device: cuda:0
+ 2022-11-21 08:54:32,247 INFO [decode.py:590] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 100, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.21', 'k2-build-type': 'Debug', 'k2-with-cuda': True, 'k2-git-sha1': 'f271e82ef30f75fecbae44b163e1244e53def116', 'k2-git-date': 'Fri Oct 28 05:02:16 2022', 'lhotse-version': '1.9.0.dev+git.97bf4b0.dirty', 'torch-version': '1.10.0+cu111', 'torch-cuda-available': True, 'torch-cuda-version': '11.1', 'python-version': '3.8', 'icefall-git-branch': 'ami_recipe', 'icefall-git-sha1': 'd1b5a16-dirty', 'icefall-git-date': 'Sun Nov 20 22:32:57 2022', 'icefall-path': '/exp/draj/mini_scale_2022/icefall', 'k2-path': '/exp/draj/mini_scale_2022/k2/k2/python/k2/__init__.py', 'lhotse-path': '/exp/draj/mini_scale_2022/lhotse/lhotse/__init__.py', 'hostname': 'r2n06', 'IP address': '10.1.2.6'}, 'epoch': 14, 'iter': 0, 'avg': 8, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7/exp/v2'), 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'greedy_search', 'beam_size': 4, 'beam': 4, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'manifest_dir': PosixPath('data/manifests'), 'enable_musan': True, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'max_duration': 500, 'max_cuts': None, 'num_buckets': 50, 'on_the_fly_feats': False, 'shuffle': True, 'num_workers': 8, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'ihm_only': False, 'res_dir': PosixPath('pruned_transducer_stateless7/exp/v2/greedy_search'), 'suffix': 'epoch-14-avg-8-context-2-max-sym-per-frame-1', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
+ 2022-11-21 08:54:32,247 INFO [decode.py:592] About to create model
+ 2022-11-21 08:54:32,735 INFO [zipformer.py:179] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
+ 2022-11-21 08:54:32,755 INFO [decode.py:659] Calculating the averaged model over epoch range from 6 (excluded) to 14
+ 2022-11-21 08:54:46,206 INFO [decode.py:694] Number of model parameters: 70369391
+ 2022-11-21 08:54:46,206 INFO [asr_datamodule.py:392] About to get AMI IHM dev cuts
+ 2022-11-21 08:54:46,208 INFO [asr_datamodule.py:413] About to get AMI IHM test cuts
+ 2022-11-21 08:54:46,209 INFO [asr_datamodule.py:398] About to get AMI SDM dev cuts
+ 2022-11-21 08:54:46,210 INFO [asr_datamodule.py:419] About to get AMI SDM test cuts
+ 2022-11-21 08:54:46,211 INFO [asr_datamodule.py:407] About to get AMI GSS-enhanced dev cuts
+ 2022-11-21 08:54:46,212 INFO [asr_datamodule.py:428] About to get AMI GSS-enhanced test cuts
+ 2022-11-21 08:54:48,328 INFO [decode.py:726] Decoding dev_ihm
+ 2022-11-21 08:54:50,509 INFO [decode.py:469] batch 0/?, cuts processed until now is 72
+ 2022-11-21 08:55:00,670 INFO [zipformer.py:1414] attn_weights_entropy = tensor([3.4141, 5.3941, 4.0147, 5.1413, 4.2180, 4.1089, 3.8119, 4.7968],
+ device='cuda:0'), covar=tensor([0.0981, 0.0178, 0.0671, 0.0197, 0.0484, 0.0609, 0.1299, 0.0177],
+ device='cuda:0'), in_proj_covar=tensor([0.0148, 0.0117, 0.0145, 0.0122, 0.0157, 0.0156, 0.0153, 0.0133],
+ device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0003],
+ device='cuda:0')
+ 2022-11-21 08:55:37,030 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/greedy_search/recogs-dev_ihm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt
+ 2022-11-21 08:55:37,218 INFO [utils.py:530] [dev_ihm-greedy_search] %WER 19.25% [18280 / 94940, 2799 ins, 3599 del, 11882 sub ]
+ 2022-11-21 08:55:37,918 INFO [utils.py:530] [dev_ihm-greedy_search] %WER 12.01% [44413 / 369873, 10958 ins, 16172 del, 17283 sub ]
+ 2022-11-21 08:55:38,949 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/greedy_search/wers-dev_ihm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt
+ 2022-11-21 08:55:38,950 INFO [decode.py:531]
+ For dev_ihm, WER/CER of different settings are:
+ greedy_search	19.25	12.01	best for dev_ihm
+
+ 2022-11-21 08:55:38,954 INFO [decode.py:726] Decoding test_ihm
+ 2022-11-21 08:55:40,998 INFO [decode.py:469] batch 0/?, cuts processed until now is 69
+ 2022-11-21 08:55:47,212 INFO [zipformer.py:1414] attn_weights_entropy = tensor([3.9011, 3.9179, 3.7190, 3.6107, 3.8823, 3.5968, 1.7153, 4.0888],
+ device='cuda:0'), covar=tensor([0.0182, 0.0175, 0.0218, 0.0272, 0.0220, 0.0263, 0.2729, 0.0168],
+ device='cuda:0'), in_proj_covar=tensor([0.0093, 0.0074, 0.0074, 0.0066, 0.0090, 0.0076, 0.0123, 0.0097],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002],
+ device='cuda:0')
+ 2022-11-21 08:55:58,872 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.3999, 2.3089, 2.9419, 1.7908, 1.0976, 3.2808, 2.6477, 2.3227],
+ device='cuda:0'), covar=tensor([0.0923, 0.1227, 0.0463, 0.3079, 0.4287, 0.1788, 0.1915, 0.1478],
+ device='cuda:0'), in_proj_covar=tensor([0.0076, 0.0065, 0.0066, 0.0079, 0.0058, 0.0046, 0.0055, 0.0066],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0001, 0.0001, 0.0002, 0.0001, 0.0001, 0.0001, 0.0002],
+ device='cuda:0')
+ 2022-11-21 08:56:28,680 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/greedy_search/recogs-test_ihm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt
+ 2022-11-21 08:56:28,820 INFO [utils.py:530] [test_ihm-greedy_search] %WER 17.83% [15986 / 89659, 1991 ins, 3568 del, 10427 sub ]
+ 2022-11-21 08:56:29,570 INFO [utils.py:530] [test_ihm-greedy_search] %WER 10.95% [38776 / 354205, 8770 ins, 15207 del, 14799 sub ]
+ 2022-11-21 08:56:30,771 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/greedy_search/wers-test_ihm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt
+ 2022-11-21 08:56:30,773 INFO [decode.py:531]
+ For test_ihm, WER/CER of different settings are:
+ greedy_search	17.83	10.95	best for test_ihm
+
+ 2022-11-21 08:56:30,784 INFO [decode.py:726] Decoding dev_sdm
+ 2022-11-21 08:56:32,521 INFO [decode.py:469] batch 0/?, cuts processed until now is 71
+ 2022-11-21 08:56:39,302 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.2504, 1.6006, 1.7922, 1.4734, 1.7332, 2.0485, 1.6161, 1.4893],
+ device='cuda:0'), covar=tensor([0.0036, 0.0058, 0.0063, 0.0051, 0.0107, 0.0068, 0.0035, 0.0053],
+ device='cuda:0'), in_proj_covar=tensor([0.0018, 0.0018, 0.0018, 0.0025, 0.0021, 0.0019, 0.0024, 0.0024],
+ device='cuda:0'), out_proj_covar=tensor([1.6600e-05, 1.6809e-05, 1.6082e-05, 2.4018e-05, 1.9485e-05, 1.8494e-05,
+ 2.3316e-05, 2.3270e-05], device='cuda:0')
+ 2022-11-21 08:57:04,693 INFO [zipformer.py:1414] attn_weights_entropy = tensor([3.4126, 3.5695, 3.3951, 3.2795, 3.4601, 3.1680, 1.4567, 3.5835],
+ device='cuda:0'), covar=tensor([0.0201, 0.0128, 0.0202, 0.0191, 0.0253, 0.0244, 0.2779, 0.0204],
+ device='cuda:0'), in_proj_covar=tensor([0.0093, 0.0074, 0.0074, 0.0066, 0.0090, 0.0076, 0.0123, 0.0097],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002],
+ device='cuda:0')
+ 2022-11-21 08:57:18,784 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/greedy_search/recogs-dev_sdm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt
+ 2022-11-21 08:57:18,939 INFO [utils.py:530] [dev_sdm-greedy_search] %WER 31.32% [29731 / 94940, 4512 ins, 7044 del, 18175 sub ]
+ 2022-11-21 08:57:19,722 INFO [utils.py:530] [dev_sdm-greedy_search] %WER 22.44% [83014 / 369873, 19666 ins, 31039 del, 32309 sub ]
+ 2022-11-21 08:57:20,845 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/greedy_search/wers-dev_sdm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt
+ 2022-11-21 08:57:20,866 INFO [decode.py:531]
+ For dev_sdm, WER/CER of different settings are:
+ greedy_search	31.32	22.44	best for dev_sdm
+
+ 2022-11-21 08:57:20,877 INFO [decode.py:726] Decoding test_sdm
+ 2022-11-21 08:57:22,664 INFO [decode.py:469] batch 0/?, cuts processed until now is 69
+ 2022-11-21 08:57:28,994 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.6137, 3.3224, 3.5908, 3.1441, 2.0380, 3.5516, 2.1972, 3.0422],
+ device='cuda:0'), covar=tensor([0.0359, 0.0196, 0.0141, 0.0322, 0.0512, 0.0161, 0.0531, 0.0158],
+ device='cuda:0'), in_proj_covar=tensor([0.0176, 0.0146, 0.0155, 0.0177, 0.0173, 0.0154, 0.0169, 0.0152],
+ device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002],
+ device='cuda:0')
+ 2022-11-21 08:58:09,662 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/greedy_search/recogs-test_sdm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt
+ 2022-11-21 08:58:09,835 INFO [utils.py:530] [test_sdm-greedy_search] %WER 32.38% [29028 / 89659, 3955 ins, 7736 del, 17337 sub ]
+ 2022-11-21 08:58:10,588 INFO [utils.py:530] [test_sdm-greedy_search] %WER 23.44% [83036 / 354205, 18668 ins, 33128 del, 31240 sub ]
+ 2022-11-21 08:58:11,649 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/greedy_search/wers-test_sdm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt
+ 2022-11-21 08:58:11,650 INFO [decode.py:531]
+ For test_sdm, WER/CER of different settings are:
+ greedy_search	32.38	23.44	best for test_sdm
+
+ 2022-11-21 08:58:11,654 INFO [decode.py:726] Decoding dev_gss
+ 2022-11-21 08:58:13,417 INFO [decode.py:469] batch 0/?, cuts processed until now is 71
+ 2022-11-21 08:59:00,020 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/greedy_search/recogs-dev_gss-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt
+ 2022-11-21 08:59:00,182 INFO [utils.py:530] [dev_gss-greedy_search] %WER 22.05% [20935 / 94940, 2787 ins, 4483 del, 13665 sub ]
+ 2022-11-21 08:59:00,898 INFO [utils.py:530] [dev_gss-greedy_search] %WER 14.27% [52797 / 369873, 11721 ins, 19818 del, 21258 sub ]
+ 2022-11-21 08:59:01,847 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/greedy_search/wers-dev_gss-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt
+ 2022-11-21 08:59:01,848 INFO [decode.py:531]
+ For dev_gss, WER/CER of different settings are:
+ greedy_search	22.05	14.27	best for dev_gss
+
+ 2022-11-21 08:59:01,853 INFO [decode.py:726] Decoding test_gss
+ 2022-11-21 08:59:03,727 INFO [decode.py:469] batch 0/?, cuts processed until now is 69
+ 2022-11-21 08:59:03,886 INFO [zipformer.py:1414] attn_weights_entropy = tensor([4.4828, 4.7358, 4.2268, 4.6778, 4.7580, 3.9699, 4.3828, 4.1917],
+ device='cuda:0'), covar=tensor([0.0160, 0.0243, 0.1101, 0.0280, 0.0272, 0.0336, 0.0256, 0.0375],
+ device='cuda:0'), in_proj_covar=tensor([0.0111, 0.0156, 0.0254, 0.0151, 0.0197, 0.0151, 0.0167, 0.0154],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002],
+ device='cuda:0')
+ 2022-11-21 08:59:51,508 INFO [decode.py:485] The transcripts are stored in pruned_transducer_stateless7/exp/v2/greedy_search/recogs-test_gss-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt
+ 2022-11-21 08:59:51,662 INFO [utils.py:530] [test_gss-greedy_search] %WER 22.93% [20560 / 89659, 2221 ins, 5099 del, 13240 sub ]
+ 2022-11-21 08:59:52,458 INFO [utils.py:530] [test_gss-greedy_search] %WER 15.12% [53541 / 354205, 10359 ins, 21954 del, 21228 sub ]
+ 2022-11-21 08:59:53,369 INFO [decode.py:511] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/greedy_search/wers-test_gss-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt
+ 2022-11-21 08:59:53,370 INFO [decode.py:531]
+ For test_gss, WER/CER of different settings are:
+ greedy_search	22.93	15.12	best for test_gss
+
+ 2022-11-21 08:59:53,375 INFO [decode.py:743] Done!
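The log line "Calculating the averaged model over epoch range from 6 (excluded) to 14" refers to checkpoint averaging over epochs 7 through 14, i.e. 8 checkpoints, matching `'avg': 8` in the config dump. A simplified pure-Python sketch of plain parameter averaging follows; note it is an illustration only, since icefall's `--use-averaged-model` derives the average from stored running sums rather than loading every epoch checkpoint, and real state dicts hold torch tensors rather than lists:

```python
def average_checkpoints(state_dicts: list) -> dict:
    """Element-wise mean of matching parameters across checkpoints (sketch)."""
    n = len(state_dicts)
    return {
        key: [sum(sd[key][i] for sd in state_dicts) / n
              for i in range(len(state_dicts[0][key]))]
        for key in state_dicts[0]
    }

# Averaging two toy "checkpoints" of a single parameter vector:
avg = average_checkpoints([{"w": [0.0, 2.0]}, {"w": [2.0, 4.0]}])  # {"w": [1.0, 3.0]}
```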
log/greedy_search/recogs-dev_gss-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/greedy_search/recogs-dev_ihm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/greedy_search/recogs-dev_sdm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/greedy_search/recogs-test_gss-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/greedy_search/recogs-test_ihm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/greedy_search/recogs-test_sdm-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/greedy_search/wer-summary-dev_gss-greedy_search-epoch-14-avg-8-context-2-max-sym-per-frame-1.txt ADDED
@@ -0,0 +1,2 @@
+ settings	WER	CER
+ greedy_search	22.05	14.27