desh2608 committed
Commit
9a63d31
1 Parent(s): 7b448d6

add pretrained model and logs for AMI

Files changed (50)
  1. README.md +48 -3
  2. data/lang_bpe_500/bpe.model +3 -0
  3. exp/pretrained.pt +3 -0
  4. exp/tensorboard/events.out.tfevents.1668535898.r8n04.150063.0 +3 -0
  5. log/fast_beam_search/cers-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  6. log/fast_beam_search/cers-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  7. log/fast_beam_search/cers-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  8. log/fast_beam_search/cers-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  9. log/fast_beam_search/cers-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  10. log/fast_beam_search/cers-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  11. log/fast_beam_search/log-decode-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8-2022-11-19-11-56-30 +381 -0
  12. log/fast_beam_search/recogs-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  13. log/fast_beam_search/recogs-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  14. log/fast_beam_search/recogs-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  15. log/fast_beam_search/recogs-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  16. log/fast_beam_search/recogs-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  17. log/fast_beam_search/recogs-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  18. log/fast_beam_search/wer-summary-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +2 -0
  19. log/fast_beam_search/wer-summary-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +2 -0
  20. log/fast_beam_search/wer-summary-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +2 -0
  21. log/fast_beam_search/wer-summary-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +2 -0
  22. log/fast_beam_search/wer-summary-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +2 -0
  23. log/fast_beam_search/wer-summary-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +2 -0
  24. log/fast_beam_search/wers-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  25. log/fast_beam_search/wers-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  26. log/fast_beam_search/wers-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  27. log/fast_beam_search/wers-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  28. log/fast_beam_search/wers-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  29. log/fast_beam_search/wers-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt +0 -0
  30. log/log-train-2022-11-15-13-11-38-0 +0 -0
  31. log/log-train-2022-11-15-13-11-38-1 +0 -0
  32. log/log-train-2022-11-15-13-11-38-2 +0 -0
  33. log/log-train-2022-11-15-13-11-38-3 +0 -0
  34. log/modified_beam_search/cers-dev_gss-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt +0 -0
  35. log/modified_beam_search/cers-dev_ihm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt +0 -0
  36. log/modified_beam_search/cers-dev_sdm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt +0 -0
  37. log/modified_beam_search/cers-test_gss-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt +0 -0
  38. log/modified_beam_search/cers-test_ihm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt +0 -0
  39. log/modified_beam_search/cers-test_sdm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt +0 -0
  40. log/modified_beam_search/log-decode-epoch-99-avg-1-modified_beam_search-beam-size-4-2022-11-19-12-54-02 +346 -0
  41. log/modified_beam_search/recogs-dev_gss-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt +0 -0
  42. log/modified_beam_search/recogs-dev_ihm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt +0 -0
  43. log/modified_beam_search/recogs-dev_sdm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt +0 -0
  44. log/modified_beam_search/recogs-test_gss-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt +0 -0
  45. log/modified_beam_search/recogs-test_ihm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt +0 -0
  46. log/modified_beam_search/recogs-test_sdm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt +0 -0
  47. log/modified_beam_search/wer-summary-dev_gss-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt +2 -0
  48. log/modified_beam_search/wer-summary-dev_ihm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt +2 -0
  49. log/modified_beam_search/wer-summary-dev_sdm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt +2 -0
  50. log/modified_beam_search/wer-summary-test_gss-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt +2 -0
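The decoding-result files above embed their decoding parameters in a hyphen-separated suffix (e.g. `iter-105000-avg-10-beam-4-max-contexts-4-max-states-8`). As a sketch, such a suffix could be parsed back into its numeric parameters like this; the function name and regex are illustrative, not part of icefall:

```python
import re

def parse_decode_suffix(suffix: str) -> dict:
    """Parse a hyphen-separated result-file suffix such as
    'iter-105000-avg-10-beam-4-max-contexts-4-max-states-8'
    into {'iter': 105000, 'avg': 10, ...}.

    The non-greedy key pattern lets multi-word keys like
    'max-contexts' keep their internal hyphens."""
    return {key: int(value)
            for key, value in re.findall(r"([a-z][a-z-]*?)-(\d+)", suffix)}
```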
README.md CHANGED
@@ -1,3 +1,48 @@
- ---
- license: apache-2.0
- ---
+ # AMI
+
+ This is an ASR recipe for the AMI corpus. AMI provides recordings from each speaker's
+ headset and lapel microphones, as well as two array microphones with 8 channels each.
+ We pool the data in the following 4 ways and train a single model on the pooled data:
+
+ (i) individual headset microphone (IHM)
+ (ii) IHM with simulated reverb
+ (iii) single distant microphone (SDM)
+ (iv) GSS-enhanced array microphones
+
+ Speed perturbation and MUSAN noise augmentation are additionally applied to the pooled
+ data. Here are the statistics of the combined training data:
+
+ ```python
+ >>> cuts_train.describe()
+ Cuts count: 1222053
+ Total duration (hh:mm:ss): 905:00:28
+ Speech duration (hh:mm:ss): 905:00:28 (99.9%)
+ Duration statistics (seconds):
+ mean 2.7
+ std 2.8
+ min 0.0
+ 25% 0.6
+ 50% 1.6
+ 75% 3.8
+ 99% 12.3
+ 99.5% 13.9
+ 99.9% 18.4
+ max 36.8
+ ```
+
+ **Note:** This recipe additionally uses [GSS](https://github.com/desh2608/gss) for enhancement
+ of the far-field array microphones, but this step is optional (see `prepare.sh` for details).
+
+ ## Performance Record
+
+ ### pruned_transducer_stateless7
+
+ The following results are decoded using `modified_beam_search`:
+
+ | Evaluation set     | dev WER | test WER |
+ |--------------------|---------|----------|
+ | IHM                | 19.23   | 18.06    |
+ | SDM                | 31.16   | 32.61    |
+ | MDM (GSS-enhanced) | 22.08   | 23.03    |
+
+ See [RESULTS](/egs/ami/ASR/RESULTS.md) for details.
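The `cuts_train.describe()` summary above reports a count, total duration, mean, standard deviation, and percentiles over cut durations. A minimal stand-in that produces the same kinds of statistics from a plain list of durations (hypothetical, not the actual lhotse implementation):

```python
import statistics

def describe_durations(durations):
    """Summarize cut durations in seconds, mimicking the table printed
    by `cuts_train.describe()`. Hypothetical stand-in, not lhotse code."""
    xs = sorted(durations)
    n = len(xs)

    def pct(p):
        # nearest-rank percentile over the sorted durations
        return xs[min(n - 1, round(p / 100 * (n - 1)))]

    return {
        "count": n,
        "total": sum(xs),
        "mean": statistics.mean(xs),
        "std": statistics.pstdev(xs),
        "min": xs[0],
        "25%": pct(25),
        "50%": pct(50),
        "75%": pct(75),
        "99%": pct(99),
        "max": xs[-1],
    }
```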
data/lang_bpe_500/bpe.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:06b4c71aa31e7468ab3556e9382b2584db5e172002efe09f3905b2404df453d3
+ size 245589
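The three lines above are a standard git-lfs v1 pointer (`version`, `oid`, `size`); the actual model weights live in LFS storage, keyed by the SHA-256 oid. A small sketch that parses such a pointer (the `parse_lfs_pointer` name is my own):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs v1 pointer file (the 'version', 'oid', and 'size'
    lines shown above) into a dict. Illustrative helper; the three-field
    layout follows the git-lfs pointer-file spec."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        if not value:
            raise ValueError(f"malformed pointer line: {line!r}")
        fields[key] = value
    if fields.get("version") != "https://git-lfs.github.com/spec/v1":
        raise ValueError("not a git-lfs v1 pointer")
    fields["size"] = int(fields["size"])  # byte count of the real file
    return fields
```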
exp/pretrained.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3017e7a9b374c004224288e50ea5456e0c5481dd7a442d88c6f358c3fa8e5de3
+ size 281766253
exp/tensorboard/events.out.tfevents.1668535898.r8n04.150063.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58c9cfadc75514d41d084191cf4d414c033bf22e3f77f73312fb1c171d4f33a2
+ size 515187
log/fast_beam_search/cers-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
log/fast_beam_search/cers-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
log/fast_beam_search/cers-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
log/fast_beam_search/cers-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
log/fast_beam_search/cers-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
log/fast_beam_search/cers-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
log/fast_beam_search/log-decode-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8-2022-11-19-11-56-30 ADDED
@@ -0,0 +1,381 @@
+ 2022-11-19 11:56:30,475 INFO [decode.py:561] Decoding started
+ 2022-11-19 11:56:30,476 INFO [decode.py:567] Device: cuda:0
+ 2022-11-19 11:56:30,484 INFO [decode.py:577] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 100, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.21', 'k2-build-type': 'Debug', 'k2-with-cuda': True, 'k2-git-sha1': 'f271e82ef30f75fecbae44b163e1244e53def116', 'k2-git-date': 'Fri Oct 28 05:02:16 2022', 'lhotse-version': '1.9.0.dev+git.97bf4b0.dirty', 'torch-version': '1.10.0+cu111', 'torch-cuda-available': True, 'torch-cuda-version': '11.1', 'python-version': '3.8', 'icefall-git-branch': 'ami', 'icefall-git-sha1': 'c2c11ca-clean', 'icefall-git-date': 'Sat Nov 19 10:48:59 2022', 'icefall-path': '/exp/draj/mini_scale_2022/icefall', 'k2-path': '/exp/draj/mini_scale_2022/k2/k2/python/k2/__init__.py', 'lhotse-path': '/exp/draj/mini_scale_2022/lhotse/lhotse/__init__.py', 'hostname': 'r7n01', 'IP address': '10.1.7.1'}, 'epoch': 30, 'iter': 105000, 'avg': 10, 'exp_dir': PosixPath('pruned_transducer_stateless7/exp/v2'), 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 4, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'manifest_dir': PosixPath('data/manifests'), 'enable_musan': True, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'max_duration': 500, 'max_cuts': None, 'num_buckets': 50, 'on_the_fly_feats': False, 'shuffle': True, 'num_workers': 8, 'enable_spec_aug': True, 
'spec_aug_time_warp_factor': 80, 'ihm_only': False, 'res_dir': PosixPath('pruned_transducer_stateless7/exp/v2/fast_beam_search'), 'suffix': 'iter-105000-avg-10-beam-4-max-contexts-4-max-states-8', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
+ 2022-11-19 11:56:30,484 INFO [decode.py:579] About to create model
+ 2022-11-19 11:56:30,977 INFO [zipformer.py:176] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
+ 2022-11-19 11:56:30,993 INFO [decode.py:595] averaging ['pruned_transducer_stateless7/exp/v2/checkpoint-105000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-100000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-95000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-90000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-85000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-80000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-75000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-70000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-65000.pt', 'pruned_transducer_stateless7/exp/v2/checkpoint-60000.pt']
+ 2022-11-19 11:57:47,988 INFO [decode.py:632] Number of model parameters: 70369391
+ 2022-11-19 11:57:47,989 INFO [asr_datamodule.py:392] About to get AMI IHM dev cuts
+ 2022-11-19 11:57:48,019 INFO [asr_datamodule.py:413] About to get AMI IHM test cuts
+ 2022-11-19 11:57:48,021 INFO [asr_datamodule.py:398] About to get AMI SDM dev cuts
+ 2022-11-19 11:57:48,023 INFO [asr_datamodule.py:419] About to get AMI SDM test cuts
+ 2022-11-19 11:57:48,025 INFO [asr_datamodule.py:407] About to get AMI GSS-enhanced dev cuts
+ 2022-11-19 11:57:48,027 INFO [asr_datamodule.py:428] About to get AMI GSS-enhanced test cuts
+ 2022-11-19 11:57:50,158 INFO [decode.py:664] Decoding dev_ihm
+ 2022-11-19 11:57:53,749 INFO [decode.py:456] batch 0/?, cuts processed until now is 72
+ 2022-11-19 11:57:56,656 INFO [decode.py:456] batch 2/?, cuts processed until now is 537
+ 2022-11-19 11:57:59,459 INFO [decode.py:456] batch 4/?, cuts processed until now is 689
+ 2022-11-19 11:58:02,426 INFO [decode.py:456] batch 6/?, cuts processed until now is 823
+ 2022-11-19 11:58:05,194 INFO [decode.py:456] batch 8/?, cuts processed until now is 985
+ 2022-11-19 11:58:09,722 INFO [decode.py:456] batch 10/?, cuts processed until now is 1088
+ 2022-11-19 11:58:12,495 INFO [decode.py:456] batch 12/?, cuts processed until now is 1263
+ 2022-11-19 11:58:14,887 INFO [decode.py:456] batch 14/?, cuts processed until now is 1521
+ 2022-11-19 11:58:17,038 INFO [decode.py:456] batch 16/?, cuts processed until now is 1903
+ 2022-11-19 11:58:20,725 INFO [decode.py:456] batch 18/?, cuts processed until now is 2032
+ 2022-11-19 11:58:24,451 INFO [decode.py:456] batch 20/?, cuts processed until now is 2117
+ 2022-11-19 11:58:26,837 INFO [decode.py:456] batch 22/?, cuts processed until now is 2375
+ 2022-11-19 11:58:29,077 INFO [decode.py:456] batch 24/?, cuts processed until now is 2824
+ 2022-11-19 11:58:32,123 INFO [decode.py:456] batch 26/?, cuts processed until now is 2969
+ 2022-11-19 11:58:34,816 INFO [decode.py:456] batch 28/?, cuts processed until now is 3245
+ 2022-11-19 11:58:37,587 INFO [decode.py:456] batch 30/?, cuts processed until now is 3401
+ 2022-11-19 11:58:41,081 INFO [decode.py:456] batch 32/?, cuts processed until now is 3519
+ 2022-11-19 11:58:44,442 INFO [decode.py:456] batch 34/?, cuts processed until now is 3694
+ 2022-11-19 11:58:47,622 INFO [decode.py:456] batch 36/?, cuts processed until now is 3818
+ 2022-11-19 11:58:50,688 INFO [decode.py:456] batch 38/?, cuts processed until now is 3970
+ 2022-11-19 11:58:53,074 INFO [decode.py:456] batch 40/?, cuts processed until now is 4750
+ 2022-11-19 11:58:55,903 INFO [decode.py:456] batch 42/?, cuts processed until now is 5038
+ 2022-11-19 11:58:59,500 INFO [decode.py:456] batch 44/?, cuts processed until now is 5144
+ 2022-11-19 11:59:03,071 INFO [decode.py:456] batch 46/?, cuts processed until now is 5253
+ 2022-11-19 11:59:03,270 INFO [zipformer.py:1411] attn_weights_entropy = tensor([5.1287, 4.7557, 5.0205, 4.9174, 5.1172, 4.5533, 2.8637, 5.2734],
+ device='cuda:0'), covar=tensor([0.0140, 0.0272, 0.0134, 0.0164, 0.0143, 0.0264, 0.2165, 0.0151],
+ device='cuda:0'), in_proj_covar=tensor([0.0096, 0.0079, 0.0079, 0.0071, 0.0094, 0.0080, 0.0125, 0.0100],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002],
+ device='cuda:0')
+ 2022-11-19 11:59:06,333 INFO [decode.py:456] batch 48/?, cuts processed until now is 5672
+ 2022-11-19 11:59:09,188 INFO [decode.py:456] batch 50/?, cuts processed until now is 5878
+ 2022-11-19 11:59:11,298 INFO [decode.py:456] batch 52/?, cuts processed until now is 6260
+ 2022-11-19 11:59:12,535 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.2047, 3.1646, 2.3350, 1.8841, 3.0646, 1.3177, 2.7205, 1.9816],
+ device='cuda:0'), covar=tensor([0.0669, 0.0176, 0.0795, 0.0913, 0.0189, 0.1283, 0.0412, 0.0708],
+ device='cuda:0'), in_proj_covar=tensor([0.0113, 0.0095, 0.0106, 0.0107, 0.0092, 0.0114, 0.0091, 0.0105],
+ device='cuda:0'), out_proj_covar=tensor([0.0005, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004],
+ device='cuda:0')
+ 2022-11-19 11:59:13,402 INFO [decode.py:456] batch 54/?, cuts processed until now is 6808
+ 2022-11-19 11:59:15,807 INFO [decode.py:456] batch 56/?, cuts processed until now is 7117
+ 2022-11-19 11:59:18,242 INFO [decode.py:456] batch 58/?, cuts processed until now is 7565
+ 2022-11-19 11:59:20,405 INFO [decode.py:456] batch 60/?, cuts processed until now is 8078
+ 2022-11-19 11:59:22,576 INFO [decode.py:456] batch 62/?, cuts processed until now is 8626
+ 2022-11-19 11:59:24,898 INFO [decode.py:456] batch 64/?, cuts processed until now is 9174
+ 2022-11-19 11:59:28,041 INFO [decode.py:456] batch 66/?, cuts processed until now is 9455
+ 2022-11-19 11:59:30,361 INFO [decode.py:456] batch 68/?, cuts processed until now is 9968
+ 2022-11-19 11:59:32,614 INFO [decode.py:456] batch 70/?, cuts processed until now is 10481
+ 2022-11-19 11:59:34,793 INFO [decode.py:456] batch 72/?, cuts processed until now is 11264
+ 2022-11-19 11:59:36,997 INFO [decode.py:456] batch 74/?, cuts processed until now is 11669
+ 2022-11-19 11:59:38,779 INFO [decode.py:456] batch 76/?, cuts processed until now is 11761
+ 2022-11-19 11:59:40,511 INFO [decode.py:456] batch 78/?, cuts processed until now is 11843
+ 2022-11-19 11:59:42,240 INFO [decode.py:456] batch 80/?, cuts processed until now is 11956
+ 2022-11-19 11:59:43,775 INFO [decode.py:456] batch 82/?, cuts processed until now is 12467
+ 2022-11-19 11:59:48,114 INFO [decode.py:456] batch 84/?, cuts processed until now is 12586
+ 2022-11-19 11:59:50,291 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-19 11:59:50,494 INFO [utils.py:531] [dev_ihm-beam_4_max_contexts_4_max_states_8] %WER 19.46% [18471 / 94940, 2582 ins, 4027 del, 11862 sub ]
+ 2022-11-19 11:59:51,358 INFO [utils.py:531] [dev_ihm-beam_4_max_contexts_4_max_states_8] %WER 12.39% [45842 / 369873, 10341 ins, 18060 del, 17441 sub ]
+ 2022-11-19 11:59:52,357 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-19 11:59:52,358 INFO [decode.py:518]
+ For dev_ihm, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 19.46 12.39 best for dev_ihm
+
+ 2022-11-19 11:59:52,363 INFO [decode.py:664] Decoding test_ihm
+ 2022-11-19 11:59:55,729 INFO [decode.py:456] batch 0/?, cuts processed until now is 69
+ 2022-11-19 11:59:58,708 INFO [decode.py:456] batch 2/?, cuts processed until now is 555
+ 2022-11-19 12:00:01,681 INFO [decode.py:456] batch 4/?, cuts processed until now is 703
+ 2022-11-19 12:00:04,573 INFO [decode.py:456] batch 6/?, cuts processed until now is 830
+ 2022-11-19 12:00:07,344 INFO [decode.py:456] batch 8/?, cuts processed until now is 987
+ 2022-11-19 12:00:11,335 INFO [decode.py:456] batch 10/?, cuts processed until now is 1095
+ 2022-11-19 12:00:14,147 INFO [decode.py:456] batch 12/?, cuts processed until now is 1267
+ 2022-11-19 12:00:16,507 INFO [decode.py:456] batch 14/?, cuts processed until now is 1532
+ 2022-11-19 12:00:18,652 INFO [decode.py:456] batch 16/?, cuts processed until now is 1931
+ 2022-11-19 12:00:22,624 INFO [decode.py:456] batch 18/?, cuts processed until now is 2055
+ 2022-11-19 12:00:27,410 INFO [decode.py:456] batch 20/?, cuts processed until now is 2124
+ 2022-11-19 12:00:29,878 INFO [decode.py:456] batch 22/?, cuts processed until now is 2388
+ 2022-11-19 12:00:32,061 INFO [decode.py:456] batch 24/?, cuts processed until now is 2856
+ 2022-11-19 12:00:35,040 INFO [decode.py:456] batch 26/?, cuts processed until now is 2996
+ 2022-11-19 12:00:37,754 INFO [decode.py:456] batch 28/?, cuts processed until now is 3278
+ 2022-11-19 12:00:40,482 INFO [decode.py:456] batch 30/?, cuts processed until now is 3430
+ 2022-11-19 12:00:44,585 INFO [decode.py:456] batch 32/?, cuts processed until now is 3535
+ 2022-11-19 12:00:47,584 INFO [decode.py:456] batch 34/?, cuts processed until now is 3706
+ 2022-11-19 12:00:50,600 INFO [decode.py:456] batch 36/?, cuts processed until now is 3822
+ 2022-11-19 12:00:53,463 INFO [decode.py:456] batch 38/?, cuts processed until now is 3969
+ 2022-11-19 12:00:56,851 INFO [decode.py:456] batch 40/?, cuts processed until now is 4411
+ 2022-11-19 12:00:59,476 INFO [decode.py:456] batch 42/?, cuts processed until now is 5058
+ 2022-11-19 12:01:01,973 INFO [decode.py:456] batch 44/?, cuts processed until now is 5544
+ 2022-11-19 12:01:04,877 INFO [decode.py:456] batch 46/?, cuts processed until now is 5685
+ 2022-11-19 12:01:07,431 INFO [decode.py:456] batch 48/?, cuts processed until now is 5890
+ 2022-11-19 12:01:10,191 INFO [decode.py:456] batch 50/?, cuts processed until now is 6372
+ 2022-11-19 12:01:12,556 INFO [decode.py:456] batch 52/?, cuts processed until now is 6706
+ 2022-11-19 12:01:14,692 INFO [decode.py:456] batch 54/?, cuts processed until now is 7105
+ 2022-11-19 12:01:18,461 INFO [decode.py:456] batch 56/?, cuts processed until now is 7290
+ 2022-11-19 12:01:20,652 INFO [decode.py:456] batch 58/?, cuts processed until now is 8116
+ 2022-11-19 12:01:24,751 INFO [decode.py:456] batch 60/?, cuts processed until now is 8258
+ 2022-11-19 12:01:26,847 INFO [decode.py:456] batch 62/?, cuts processed until now is 8794
+ 2022-11-19 12:01:29,089 INFO [decode.py:456] batch 64/?, cuts processed until now is 9330
+ 2022-11-19 12:01:32,761 INFO [decode.py:456] batch 66/?, cuts processed until now is 9476
+ 2022-11-19 12:01:35,866 INFO [decode.py:456] batch 68/?, cuts processed until now is 9921
+ 2022-11-19 12:01:37,922 INFO [decode.py:456] batch 70/?, cuts processed until now is 10251
+ 2022-11-19 12:01:41,636 INFO [decode.py:456] batch 72/?, cuts processed until now is 10679
+ 2022-11-19 12:01:44,398 INFO [decode.py:456] batch 74/?, cuts processed until now is 10794
+ 2022-11-19 12:01:46,138 INFO [decode.py:456] batch 76/?, cuts processed until now is 11039
+ 2022-11-19 12:01:47,209 INFO [decode.py:456] batch 78/?, cuts processed until now is 11155
+ 2022-11-19 12:01:48,861 INFO [decode.py:456] batch 80/?, cuts processed until now is 11600
+ 2022-11-19 12:01:51,326 INFO [decode.py:456] batch 82/?, cuts processed until now is 12041
+ 2022-11-19 12:01:52,764 INFO [decode.py:456] batch 84/?, cuts processed until now is 12110
+ 2022-11-19 12:01:53,020 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-19 12:01:53,199 INFO [utils.py:531] [test_ihm-beam_4_max_contexts_4_max_states_8] %WER 18.35% [16449 / 89659, 1901 ins, 4044 del, 10504 sub ]
+ 2022-11-19 12:01:53,873 INFO [utils.py:531] [test_ihm-beam_4_max_contexts_4_max_states_8] %WER 11.50% [40727 / 354205, 8552 ins, 17081 del, 15094 sub ]
+ 2022-11-19 12:01:54,902 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-19 12:01:54,903 INFO [decode.py:518]
+ For test_ihm, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 18.35 11.5 best for test_ihm
+
+ 2022-11-19 12:01:54,908 INFO [decode.py:664] Decoding dev_sdm
+ 2022-11-19 12:01:58,144 INFO [decode.py:456] batch 0/?, cuts processed until now is 71
+ 2022-11-19 12:02:00,702 INFO [decode.py:456] batch 2/?, cuts processed until now is 535
+ 2022-11-19 12:02:03,476 INFO [decode.py:456] batch 4/?, cuts processed until now is 686
+ 2022-11-19 12:02:06,323 INFO [decode.py:456] batch 6/?, cuts processed until now is 819
+ 2022-11-19 12:02:07,881 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.0469, 1.5212, 1.9862, 1.1136, 1.9181, 1.8707, 1.4307, 1.7934],
+ device='cuda:0'), covar=tensor([0.0691, 0.0744, 0.0420, 0.1360, 0.1466, 0.0986, 0.1064, 0.0307],
+ device='cuda:0'), in_proj_covar=tensor([0.0013, 0.0021, 0.0014, 0.0018, 0.0014, 0.0013, 0.0019, 0.0013],
+ device='cuda:0'), out_proj_covar=tensor([7.0063e-05, 9.8332e-05, 7.2372e-05, 8.8655e-05, 7.6633e-05, 6.9844e-05,
+ 9.3643e-05, 6.9182e-05], device='cuda:0')
+ 2022-11-19 12:02:09,041 INFO [decode.py:456] batch 8/?, cuts processed until now is 980
+ 2022-11-19 12:02:13,345 INFO [decode.py:456] batch 10/?, cuts processed until now is 1083
+ 2022-11-19 12:02:16,238 INFO [decode.py:456] batch 12/?, cuts processed until now is 1257
+ 2022-11-19 12:02:18,758 INFO [decode.py:456] batch 14/?, cuts processed until now is 1513
+ 2022-11-19 12:02:20,166 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.0954, 2.5017, 2.0002, 1.5301, 2.4209, 2.7677, 2.6900, 2.8832],
+ device='cuda:0'), covar=tensor([0.1359, 0.1360, 0.1853, 0.2378, 0.0760, 0.1000, 0.0585, 0.0894],
+ device='cuda:0'), in_proj_covar=tensor([0.0155, 0.0168, 0.0150, 0.0172, 0.0162, 0.0181, 0.0147, 0.0167],
+ device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004],
+ device='cuda:0')
+ 2022-11-19 12:02:21,030 INFO [decode.py:456] batch 16/?, cuts processed until now is 1892
+ 2022-11-19 12:02:24,579 INFO [decode.py:456] batch 18/?, cuts processed until now is 2020
+ 2022-11-19 12:02:28,326 INFO [decode.py:456] batch 20/?, cuts processed until now is 2106
+ 2022-11-19 12:02:30,724 INFO [decode.py:456] batch 22/?, cuts processed until now is 2362
+ 2022-11-19 12:02:32,924 INFO [decode.py:456] batch 24/?, cuts processed until now is 2807
+ 2022-11-19 12:02:35,835 INFO [decode.py:456] batch 26/?, cuts processed until now is 2952
+ 2022-11-19 12:02:38,471 INFO [decode.py:456] batch 28/?, cuts processed until now is 3226
+ 2022-11-19 12:02:41,104 INFO [decode.py:456] batch 30/?, cuts processed until now is 3381
+ 2022-11-19 12:02:44,479 INFO [decode.py:456] batch 32/?, cuts processed until now is 3499
+ 2022-11-19 12:02:47,316 INFO [decode.py:456] batch 34/?, cuts processed until now is 3673
+ 2022-11-19 12:02:50,301 INFO [decode.py:456] batch 36/?, cuts processed until now is 3797
+ 2022-11-19 12:02:53,132 INFO [decode.py:456] batch 38/?, cuts processed until now is 3948
+ 2022-11-19 12:02:55,519 INFO [decode.py:456] batch 40/?, cuts processed until now is 4722
+ 2022-11-19 12:02:58,064 INFO [decode.py:456] batch 42/?, cuts processed until now is 5007
+ 2022-11-19 12:03:01,412 INFO [decode.py:456] batch 44/?, cuts processed until now is 5112
+ 2022-11-19 12:03:04,820 INFO [decode.py:456] batch 46/?, cuts processed until now is 5219
+ 2022-11-19 12:03:08,094 INFO [decode.py:456] batch 48/?, cuts processed until now is 5636
+ 2022-11-19 12:03:10,741 INFO [decode.py:456] batch 50/?, cuts processed until now is 5842
+ 2022-11-19 12:03:12,908 INFO [decode.py:456] batch 52/?, cuts processed until now is 6222
+ 2022-11-19 12:03:15,175 INFO [decode.py:456] batch 54/?, cuts processed until now is 6766
+ 2022-11-19 12:03:17,693 INFO [decode.py:456] batch 56/?, cuts processed until now is 7072
+ 2022-11-19 12:03:19,987 INFO [decode.py:456] batch 58/?, cuts processed until now is 7518
+ 2022-11-19 12:03:22,207 INFO [decode.py:456] batch 60/?, cuts processed until now is 8027
+ 2022-11-19 12:03:24,463 INFO [decode.py:456] batch 62/?, cuts processed until now is 8571
+ 2022-11-19 12:03:26,985 INFO [decode.py:456] batch 64/?, cuts processed until now is 9115
+ 2022-11-19 12:03:29,411 INFO [decode.py:456] batch 66/?, cuts processed until now is 9395
+ 2022-11-19 12:03:31,595 INFO [decode.py:456] batch 68/?, cuts processed until now is 9904
+ 2022-11-19 12:03:33,692 INFO [decode.py:456] batch 70/?, cuts processed until now is 10413
+ 2022-11-19 12:03:35,750 INFO [decode.py:456] batch 72/?, cuts processed until now is 11190
+ 2022-11-19 12:03:37,764 INFO [decode.py:456] batch 74/?, cuts processed until now is 11589
+ 2022-11-19 12:03:39,450 INFO [decode.py:456] batch 76/?, cuts processed until now is 11699
+ 2022-11-19 12:03:41,198 INFO [decode.py:456] batch 78/?, cuts processed until now is 11799
+ 2022-11-19 12:03:42,663 INFO [decode.py:456] batch 80/?, cuts processed until now is 11889
+ 2022-11-19 12:03:44,217 INFO [decode.py:456] batch 82/?, cuts processed until now is 12461
+ 2022-11-19 12:03:45,966 INFO [decode.py:456] batch 84/?, cuts processed until now is 12568
+ 2022-11-19 12:03:48,945 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.8918, 4.0775, 4.0075, 2.0122, 3.9736, 4.2795, 4.1332, 4.6819],
+ device='cuda:0'), covar=tensor([0.1667, 0.1013, 0.0762, 0.2696, 0.0253, 0.0403, 0.0447, 0.0409],
+ device='cuda:0'), in_proj_covar=tensor([0.0155, 0.0168, 0.0150, 0.0172, 0.0162, 0.0181, 0.0147, 0.0167],
+ device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004],
+ device='cuda:0')
+ 2022-11-19 12:03:50,431 INFO [decode.py:456] batch 86/?, cuts processed until now is 12601
+ 2022-11-19 12:03:50,666 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-19 12:03:50,920 INFO [utils.py:531] [dev_sdm-beam_4_max_contexts_4_max_states_8] %WER 31.14% [29566 / 94940, 3772 ins, 8463 del, 17331 sub ]
+ 2022-11-19 12:03:51,646 INFO [utils.py:531] [dev_sdm-beam_4_max_contexts_4_max_states_8] %WER 22.76% [84184 / 369873, 17170 ins, 36158 del, 30856 sub ]
+ 2022-11-19 12:03:52,593 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-19 12:03:52,594 INFO [decode.py:518]
+ For dev_sdm, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 31.14 22.76 best for dev_sdm
+
+ 2022-11-19 12:03:52,598 INFO [decode.py:664] Decoding test_sdm
+ 2022-11-19 12:03:56,370 INFO [decode.py:456] batch 0/?, cuts processed until now is 69
+ 2022-11-19 12:03:59,640 INFO [decode.py:456] batch 2/?, cuts processed until now is 555
+ 2022-11-19 12:04:02,579 INFO [decode.py:456] batch 4/?, cuts processed until now is 703
+ 2022-11-19 12:04:05,520 INFO [decode.py:456] batch 6/?, cuts processed until now is 831
+ 2022-11-19 12:04:08,460 INFO [decode.py:456] batch 8/?, cuts processed until now is 988
+ 2022-11-19 12:04:12,348 INFO [decode.py:456] batch 10/?, cuts processed until now is 1096
+ 2022-11-19 12:04:15,172 INFO [decode.py:456] batch 12/?, cuts processed until now is 1268
+ 2022-11-19 12:04:17,532 INFO [decode.py:456] batch 14/?, cuts processed until now is 1533
+ 2022-11-19 12:04:19,744 INFO [decode.py:456] batch 16/?, cuts processed until now is 1932
+ 2022-11-19 12:04:22,636 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.4038, 4.1485, 3.0703, 4.0386, 3.3197, 3.0083, 2.3652, 3.5807],
+ device='cuda:0'), covar=tensor([0.1479, 0.0211, 0.0951, 0.0264, 0.0733, 0.0969, 0.2025, 0.0382],
+ device='cuda:0'), in_proj_covar=tensor([0.0147, 0.0124, 0.0146, 0.0129, 0.0160, 0.0156, 0.0151, 0.0141],
+ device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0004, 0.0003],
+ device='cuda:0')
+ 2022-11-19 12:04:23,701 INFO [decode.py:456] batch 18/?, cuts processed until now is 2057
+ 2022-11-19 12:04:28,284 INFO [decode.py:456] batch 20/?, cuts processed until now is 2126
+ 2022-11-19 12:04:30,610 INFO [decode.py:456] batch 22/?, cuts processed until now is 2390
+ 2022-11-19 12:04:32,781 INFO [decode.py:456] batch 24/?, cuts processed until now is 2858
+ 2022-11-19 12:04:35,825 INFO [decode.py:456] batch 26/?, cuts processed until now is 2998
+ 2022-11-19 12:04:38,591 INFO [decode.py:456] batch 28/?, cuts processed until now is 3280
+ 2022-11-19 12:04:41,286 INFO [decode.py:456] batch 30/?, cuts processed until now is 3432
+ 2022-11-19 12:04:45,347 INFO [decode.py:456] batch 32/?, cuts processed until now is 3537
+ 2022-11-19 12:04:48,383 INFO [decode.py:456] batch 34/?, cuts processed until now is 3709
+ 2022-11-19 12:04:51,412 INFO [decode.py:456] batch 36/?, cuts processed until now is 3825
+ 2022-11-19 12:04:54,232 INFO [decode.py:456] batch 38/?, cuts processed until now is 3972
+ 2022-11-19 12:04:57,615 INFO [decode.py:456] batch 40/?, cuts processed until now is 4410
+ 2022-11-19 12:04:59,811 INFO [decode.py:456] batch 42/?, cuts processed until now is 5060
+ 2022-11-19 12:05:02,276 INFO [decode.py:456] batch 44/?, cuts processed until now is 5546
+ 2022-11-19 12:05:05,121 INFO [decode.py:456] batch 46/?, cuts processed until now is 5687
+ 2022-11-19 12:05:07,666 INFO [decode.py:456] batch 48/?, cuts processed until now is 5893
+ 2022-11-19 12:05:10,285 INFO [decode.py:456] batch 50/?, cuts processed until now is 6379
+ 2022-11-19 12:05:12,640 INFO [decode.py:456] batch 52/?, cuts processed until now is 6713
+ 2022-11-19 12:05:14,762 INFO [decode.py:456] batch 54/?, cuts processed until now is 7112
+ 2022-11-19 12:05:18,809 INFO [decode.py:456] batch 56/?, cuts processed until now is 7298
+ 2022-11-19 12:05:21,229 INFO [decode.py:456] batch 58/?, cuts processed until now is 8130
+ 2022-11-19 12:05:25,620 INFO [decode.py:456] batch 60/?, cuts processed until now is 8273
+ 2022-11-19 12:05:27,854 INFO [decode.py:456] batch 62/?, cuts processed until now is 8813
+ 2022-11-19 12:05:30,171 INFO [decode.py:456] batch 64/?, cuts processed until now is 9353
+ 2022-11-19 12:05:34,198 INFO [decode.py:456] batch 66/?, cuts processed until now is 9500
+ 2022-11-19 12:05:37,668 INFO [decode.py:456] batch 68/?, cuts processed until now is 9944
+ 2022-11-19 12:05:39,962 INFO [decode.py:456] batch 70/?, cuts processed until now is 10274
+ 2022-11-19 12:05:40,109 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.9114, 3.1058, 3.0254, 3.1113, 3.1652, 3.0739, 2.8505, 2.9252],
+ device='cuda:0'), covar=tensor([0.0274, 0.0463, 0.0757, 0.0313, 0.0311, 0.0245, 0.0605, 0.0398],
+ device='cuda:0'), in_proj_covar=tensor([0.0119, 0.0165, 0.0262, 0.0160, 0.0205, 0.0160, 0.0175, 0.0162],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002],
+ device='cuda:0')
+ 2022-11-19 12:05:43,161 INFO [decode.py:456] batch 72/?, cuts processed until now is 10711
+ 2022-11-19 12:05:45,488 INFO [decode.py:456] batch 74/?, cuts processed until now is 10820
+ 2022-11-19 12:05:47,140 INFO [decode.py:456] batch 76/?, cuts processed until now is 11076
+ 2022-11-19 12:05:48,230 INFO [decode.py:456] batch 78/?, cuts processed until now is 11209
+ 2022-11-19 12:05:49,877 INFO [decode.py:456] batch 80/?, cuts processed until now is 11651
+ 2022-11-19 12:05:50,055 INFO [zipformer.py:1411] attn_weights_entropy = tensor([1.2445, 1.7691, 1.5410, 1.6569, 1.8904, 2.3912, 1.9761, 1.8214],
+ device='cuda:0'), covar=tensor([0.2463, 0.0429, 0.3108, 0.2256, 0.1364, 0.0250, 0.1293, 0.1741],
+ device='cuda:0'), in_proj_covar=tensor([0.0089, 0.0080, 0.0079, 0.0089, 0.0065, 0.0055, 0.0065, 0.0077],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002],
+ device='cuda:0')
+ 2022-11-19 12:05:52,270 INFO [decode.py:456] batch 82/?, cuts processed until now is 12070
+ 2022-11-19 12:05:53,724 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-19 12:05:53,875 INFO [utils.py:531] [test_sdm-beam_4_max_contexts_4_max_states_8] %WER 32.52% [29153 / 89659, 3384 ins, 9146 del, 16623 sub ]
+ 2022-11-19 12:05:54,687 INFO [utils.py:531] [test_sdm-beam_4_max_contexts_4_max_states_8] %WER 23.78% [84225 / 354205, 16274 ins, 38001 del, 29950 sub ]
+ 2022-11-19 12:05:55,678 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-19 12:05:55,680 INFO [decode.py:518]
+ For test_sdm, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 32.52 23.78 best for test_sdm
+
+ 2022-11-19 12:05:55,684 INFO [decode.py:664] Decoding dev_gss
+ 2022-11-19 12:05:58,735 INFO [decode.py:456] batch 0/?, cuts processed until now is 71
+ 2022-11-19 12:06:01,525 INFO [decode.py:456] batch 2/?, cuts processed until now is 535
+ 2022-11-19 12:06:04,466 INFO [decode.py:456] batch 4/?, cuts processed until now is 686
+ 2022-11-19 12:06:07,447 INFO [decode.py:456] batch 6/?, cuts processed until now is 819
+ 2022-11-19 12:06:10,237 INFO [decode.py:456] batch 8/?, cuts processed until now is 980
+ 2022-11-19 12:06:15,672 INFO [decode.py:456] batch 10/?, cuts processed until now is 1083
+ 2022-11-19 12:06:18,556 INFO [decode.py:456] batch 12/?, cuts processed until now is 1257
+ 2022-11-19 12:06:21,107 INFO [decode.py:456] batch 14/?, cuts processed until now is 1513
+ 2022-11-19 12:06:23,284 INFO [decode.py:456] batch 16/?, cuts processed until now is 1892
+ 2022-11-19 12:06:27,032 INFO [decode.py:456] batch 18/?, cuts processed until now is 2020
+ 2022-11-19 12:06:30,963 INFO [decode.py:456] batch 20/?, cuts processed until now is 2106
+ 2022-11-19 12:06:33,370 INFO [decode.py:456] batch 22/?, cuts processed until now is 2362
+ 2022-11-19 12:06:35,629 INFO [decode.py:456] batch 24/?, cuts processed until now is 2807
+ 2022-11-19 12:06:38,773 INFO [decode.py:456] batch 26/?, cuts processed until now is 2952
+ 2022-11-19 12:06:41,475 INFO [decode.py:456] batch 28/?, cuts processed until now is 3226
+ 2022-11-19 12:06:44,385 INFO [decode.py:456] batch 30/?, cuts processed until now is 3381
+ 2022-11-19 12:06:47,883 INFO [decode.py:456] batch 32/?, cuts processed until now is 3499
+ 2022-11-19 12:06:50,916 INFO [decode.py:456] batch 34/?, cuts processed until now is 3673
+ 2022-11-19 12:06:53,965 INFO [decode.py:456] batch 36/?, cuts processed until now is 3797
+ 2022-11-19 12:06:56,868 INFO [decode.py:456] batch 38/?, cuts processed until now is 3948
+ 2022-11-19 12:06:59,352 INFO [decode.py:456] batch 40/?, cuts processed until now is 4722
+ 2022-11-19 12:07:02,038 INFO [decode.py:456] batch 42/?, cuts processed until now is 5007
+ 2022-11-19 12:07:05,489 INFO [decode.py:456] batch 44/?, cuts processed until now is 5112
+ 2022-11-19 12:07:08,880 INFO [decode.py:456] batch 46/?, cuts processed until now is 5219
+ 2022-11-19 12:07:12,726 INFO [decode.py:456] batch 48/?, cuts processed until now is 5636
+ 2022-11-19 12:07:15,554 INFO [decode.py:456] batch 50/?, cuts processed until now is 5842
+ 2022-11-19 12:07:17,780 INFO [decode.py:456] batch 52/?, cuts processed until now is 6222
+ 2022-11-19 12:07:17,973 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.4400, 3.9245, 2.9125, 1.9497, 3.5919, 1.6291, 3.2518, 2.3731],
+ device='cuda:0'), covar=tensor([0.1357, 0.0146, 0.0875, 0.2236, 0.0221, 0.1806, 0.0374, 0.1393],
+ device='cuda:0'), in_proj_covar=tensor([0.0113, 0.0095, 0.0106, 0.0107, 0.0092, 0.0114, 0.0091, 0.0105],
+ device='cuda:0'), out_proj_covar=tensor([0.0005, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004],
+ device='cuda:0')
+ 2022-11-19 12:07:20,129 INFO [decode.py:456] batch 54/?, cuts processed until now is 6766
+ 2022-11-19 12:07:22,726 INFO [decode.py:456] batch 56/?, cuts processed until now is 7072
+ 2022-11-19 12:07:25,257 INFO [decode.py:456] batch 58/?, cuts processed until now is 7518
+ 2022-11-19 12:07:27,685 INFO [decode.py:456] batch 60/?, cuts processed until now is 8027
+ 2022-11-19 12:07:29,981 INFO [decode.py:456] batch 62/?, cuts processed until now is 8571
+ 2022-11-19 12:07:30,119 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.9705, 3.1991, 3.1004, 3.2066, 3.2365, 3.1405, 2.8701, 3.0248],
+ device='cuda:0'), covar=tensor([0.0274, 0.0332, 0.0728, 0.0303, 0.0320, 0.0258, 0.0755, 0.0400],
+ device='cuda:0'), in_proj_covar=tensor([0.0119, 0.0165, 0.0262, 0.0160, 0.0205, 0.0160, 0.0175, 0.0162],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002],
+ device='cuda:0')
+ 2022-11-19 12:07:32,538 INFO [decode.py:456] batch 64/?, cuts processed until now is 9115
+ 2022-11-19 12:07:35,105 INFO [decode.py:456] batch 66/?, cuts processed until now is 9395
+ 2022-11-19 12:07:37,426 INFO [decode.py:456] batch 68/?, cuts processed until now is 9904
+ 2022-11-19 12:07:39,791 INFO [decode.py:456] batch 70/?, cuts processed until now is 10413
+ 2022-11-19 12:07:42,275 INFO [decode.py:456] batch 72/?, cuts processed until now is 11190
+ 2022-11-19 12:07:44,769 INFO [decode.py:456] batch 74/?, cuts processed until now is 11589
+ 2022-11-19 12:07:46,693 INFO [decode.py:456] batch 76/?, cuts processed until now is 11699
+ 2022-11-19 12:07:48,652 INFO [decode.py:456] batch 78/?, cuts processed until now is 11799
+ 2022-11-19 12:07:50,346 INFO [decode.py:456] batch 80/?, cuts processed until now is 11889
+ 2022-11-19 12:07:52,025 INFO [decode.py:456] batch 82/?, cuts processed until now is 12461
+ 2022-11-19 12:07:54,095 INFO [decode.py:456] batch 84/?, cuts processed until now is 12568
+ 2022-11-19 12:07:59,233 INFO [decode.py:456] batch 86/?, cuts processed until now is 12601
+ 2022-11-19 12:07:59,490 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-19 12:07:59,799 INFO [utils.py:531] [dev_gss-beam_4_max_contexts_4_max_states_8] %WER 22.45% [21318 / 94940, 2659 ins, 4967 del, 13692 sub ]
+ 2022-11-19 12:08:00,685 INFO [utils.py:531] [dev_gss-beam_4_max_contexts_4_max_states_8] %WER 14.81% [54769 / 369873, 11328 ins, 21762 del, 21679 sub ]
+ 2022-11-19 12:08:01,838 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-19 12:08:01,840 INFO [decode.py:518]
+ For dev_gss, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 22.45 14.81 best for dev_gss
+
+ 2022-11-19 12:08:01,848 INFO [decode.py:664] Decoding test_gss
+ 2022-11-19 12:08:04,914 INFO [decode.py:456] batch 0/?, cuts processed until now is 69
+ 2022-11-19 12:08:07,716 INFO [decode.py:456] batch 2/?, cuts processed until now is 555
+ 2022-11-19 12:08:10,492 INFO [decode.py:456] batch 4/?, cuts processed until now is 703
+ 2022-11-19 12:08:13,414 INFO [decode.py:456] batch 6/?, cuts processed until now is 831
+ 2022-11-19 12:08:16,011 INFO [decode.py:456] batch 8/?, cuts processed until now is 988
+ 2022-11-19 12:08:19,693 INFO [decode.py:456] batch 10/?, cuts processed until now is 1096
+ 2022-11-19 12:08:22,574 INFO [decode.py:456] batch 12/?, cuts processed until now is 1268
+ 2022-11-19 12:08:24,976 INFO [decode.py:456] batch 14/?, cuts processed until now is 1533
+ 2022-11-19 12:08:27,615 INFO [decode.py:456] batch 16/?, cuts processed until now is 1932
+ 2022-11-19 12:08:31,972 INFO [decode.py:456] batch 18/?, cuts processed until now is 2057
+ 2022-11-19 12:08:36,693 INFO [decode.py:456] batch 20/?, cuts processed until now is 2126
+ 2022-11-19 12:08:39,094 INFO [decode.py:456] batch 22/?, cuts processed until now is 2390
+ 2022-11-19 12:08:41,204 INFO [decode.py:456] batch 24/?, cuts processed until now is 2858
+ 2022-11-19 12:08:44,256 INFO [decode.py:456] batch 26/?, cuts processed until now is 2998
+ 2022-11-19 12:08:46,938 INFO [decode.py:456] batch 28/?, cuts processed until now is 3280
+ 2022-11-19 12:08:49,719 INFO [decode.py:456] batch 30/?, cuts processed until now is 3432
+ 2022-11-19 12:08:54,026 INFO [decode.py:456] batch 32/?, cuts processed until now is 3537
+ 2022-11-19 12:08:57,029 INFO [decode.py:456] batch 34/?, cuts processed until now is 3709
+ 2022-11-19 12:09:00,191 INFO [decode.py:456] batch 36/?, cuts processed until now is 3825
+ 2022-11-19 12:09:03,137 INFO [decode.py:456] batch 38/?, cuts processed until now is 3972
+ 2022-11-19 12:09:06,356 INFO [decode.py:456] batch 40/?, cuts processed until now is 4410
+ 2022-11-19 12:09:08,836 INFO [decode.py:456] batch 42/?, cuts processed until now is 5060
+ 2022-11-19 12:09:11,267 INFO [decode.py:456] batch 44/?, cuts processed until now is 5546
+ 2022-11-19 12:09:14,175 INFO [decode.py:456] batch 46/?, cuts processed until now is 5687
+ 2022-11-19 12:09:16,811 INFO [decode.py:456] batch 48/?, cuts processed until now is 5893
+ 2022-11-19 12:09:19,425 INFO [decode.py:456] batch 50/?, cuts processed until now is 6379
+ 2022-11-19 12:09:20,784 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.5938, 4.3109, 3.2093, 3.9648, 3.2693, 3.0485, 2.5234, 3.5965],
+ device='cuda:0'), covar=tensor([0.1265, 0.0167, 0.0893, 0.0254, 0.0792, 0.0922, 0.1719, 0.0275],
+ device='cuda:0'), in_proj_covar=tensor([0.0147, 0.0124, 0.0146, 0.0129, 0.0160, 0.0156, 0.0151, 0.0141],
+ device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0004, 0.0003],
+ device='cuda:0')
+ 2022-11-19 12:09:21,839 INFO [decode.py:456] batch 52/?, cuts processed until now is 6713
+ 2022-11-19 12:09:24,015 INFO [decode.py:456] batch 54/?, cuts processed until now is 7112
+ 2022-11-19 12:09:28,198 INFO [decode.py:456] batch 56/?, cuts processed until now is 7298
+ 2022-11-19 12:09:30,434 INFO [decode.py:456] batch 58/?, cuts processed until now is 8130
+ 2022-11-19 12:09:34,783 INFO [decode.py:456] batch 60/?, cuts processed until now is 8273
+ 2022-11-19 12:09:37,045 INFO [decode.py:456] batch 62/?, cuts processed until now is 8813
+ 2022-11-19 12:09:39,493 INFO [decode.py:456] batch 64/?, cuts processed until now is 9353
+ 2022-11-19 12:09:43,457 INFO [decode.py:456] batch 66/?, cuts processed until now is 9500
+ 2022-11-19 12:09:46,848 INFO [decode.py:456] batch 68/?, cuts processed until now is 9944
+ 2022-11-19 12:09:49,082 INFO [decode.py:456] batch 70/?, cuts processed until now is 10274
+ 2022-11-19 12:09:52,092 INFO [decode.py:456] batch 72/?, cuts processed until now is 10711
+ 2022-11-19 12:09:54,382 INFO [decode.py:456] batch 74/?, cuts processed until now is 10820
+ 2022-11-19 12:09:56,179 INFO [decode.py:456] batch 76/?, cuts processed until now is 11076
+ 2022-11-19 12:09:57,351 INFO [decode.py:456] batch 78/?, cuts processed until now is 11209
+ 2022-11-19 12:09:59,115 INFO [decode.py:456] batch 80/?, cuts processed until now is 11651
+ 2022-11-19 12:10:01,667 INFO [decode.py:456] batch 82/?, cuts processed until now is 12070
+ 2022-11-19 12:10:03,209 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/fast_beam_search/recogs-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-19 12:10:03,388 INFO [utils.py:531] [test_gss-beam_4_max_contexts_4_max_states_8] %WER 23.38% [20960 / 89659, 2089 ins, 5792 del, 13079 sub ]
+ 2022-11-19 12:10:04,277 INFO [utils.py:531] [test_gss-beam_4_max_contexts_4_max_states_8] %WER 15.71% [55662 / 354205, 10250 ins, 24389 del, 21023 sub ]
+ 2022-11-19 12:10:05,352 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/fast_beam_search/wers-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt
+ 2022-11-19 12:10:05,353 INFO [decode.py:518]
+ For test_gss, WER/CER of different settings are:
+ beam_4_max_contexts_4_max_states_8 23.38 15.71 best for test_gss
+
+ 2022-11-19 12:10:05,358 INFO [decode.py:681] Done!
log/fast_beam_search/recogs-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/recogs-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/recogs-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/recogs-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/recogs-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/recogs-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/wer-summary-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
@@ -0,0 +1,2 @@
+ settings WER CER
+ beam_4_max_contexts_4_max_states_8 22.45 14.81
log/fast_beam_search/wer-summary-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
@@ -0,0 +1,2 @@
+ settings WER CER
+ beam_4_max_contexts_4_max_states_8 19.46 12.39
log/fast_beam_search/wer-summary-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
@@ -0,0 +1,2 @@
+ settings WER CER
+ beam_4_max_contexts_4_max_states_8 31.14 22.76
log/fast_beam_search/wer-summary-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
@@ -0,0 +1,2 @@
+ settings WER CER
+ beam_4_max_contexts_4_max_states_8 23.38 15.71
log/fast_beam_search/wer-summary-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
@@ -0,0 +1,2 @@
+ settings WER CER
+ beam_4_max_contexts_4_max_states_8 18.35 11.5
log/fast_beam_search/wer-summary-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
@@ -0,0 +1,2 @@
+ settings WER CER
+ beam_4_max_contexts_4_max_states_8 32.52 23.78
log/fast_beam_search/wers-dev_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/wers-dev_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/wers-dev_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/wers-test_gss-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/wers-test_ihm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/fast_beam_search/wers-test_sdm-beam_4_max_contexts_4_max_states_8-iter-105000-avg-10-beam-4-max-contexts-4-max-states-8.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/log-train-2022-11-15-13-11-38-0 ADDED
The diff for this file is too large to render. See raw diff
 
log/log-train-2022-11-15-13-11-38-1 ADDED
The diff for this file is too large to render. See raw diff
 
log/log-train-2022-11-15-13-11-38-2 ADDED
The diff for this file is too large to render. See raw diff
 
log/log-train-2022-11-15-13-11-38-3 ADDED
The diff for this file is too large to render. See raw diff
 
log/modified_beam_search/cers-dev_gss-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/modified_beam_search/cers-dev_ihm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/modified_beam_search/cers-dev_sdm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/modified_beam_search/cers-test_gss-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/modified_beam_search/cers-test_ihm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/modified_beam_search/cers-test_sdm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt ADDED
The diff for this file is too large to render. See raw diff
 
log/modified_beam_search/log-decode-epoch-99-avg-1-modified_beam_search-beam-size-4-2022-11-19-12-54-02 ADDED
@@ -0,0 +1,346 @@
+ 2022-11-19 12:54:02,151 INFO [decode.py:561] Decoding started
+ 2022-11-19 12:54:02,152 INFO [decode.py:567] Device: cuda:0
+ 2022-11-19 12:54:02,159 INFO [decode.py:577] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 100, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.21', 'k2-build-type': 'Debug', 'k2-with-cuda': True, 'k2-git-sha1': 'f271e82ef30f75fecbae44b163e1244e53def116', 'k2-git-date': 'Fri Oct 28 05:02:16 2022', 'lhotse-version': '1.9.0.dev+git.97bf4b0.dirty', 'torch-version': '1.10.0+cu111', 'torch-cuda-available': True, 'torch-cuda-version': '11.1', 'python-version': '3.8', 'icefall-git-branch': 'ami', 'icefall-git-sha1': 'c2c11ca-clean', 'icefall-git-date': 'Sat Nov 19 10:48:59 2022', 'icefall-path': '/exp/draj/mini_scale_2022/icefall', 'k2-path': '/exp/draj/mini_scale_2022/k2/k2/python/k2/__init__.py', 'lhotse-path': '/exp/draj/mini_scale_2022/lhotse/lhotse/__init__.py', 'hostname': 'r2n03', 'IP address': '10.1.2.3'}, 'epoch': 99, 'iter': 0, 'avg': 1, 'exp_dir': PosixPath('pruned_transducer_stateless7/exp/v2'), 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'modified_beam_search', 'beam_size': 4, 'beam': 4, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'manifest_dir': PosixPath('data/manifests'), 'enable_musan': True, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'max_duration': 500, 'max_cuts': None, 'num_buckets': 50, 'on_the_fly_feats': False, 'shuffle': True, 'num_workers': 8, 'enable_spec_aug': True, 
'spec_aug_time_warp_factor': 80, 'ihm_only': False, 'res_dir': PosixPath('pruned_transducer_stateless7/exp/v2/modified_beam_search'), 'suffix': 'epoch-99-avg-1-modified_beam_search-beam-size-4', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
+ 2022-11-19 12:54:02,160 INFO [decode.py:579] About to create model
+ 2022-11-19 12:54:02,584 INFO [zipformer.py:176] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
+ 2022-11-19 12:54:02,594 INFO [checkpoint.py:112] Loading checkpoint from pruned_transducer_stateless7/exp/v2/epoch-99.pt
+ 2022-11-19 12:54:08,970 INFO [decode.py:632] Number of model parameters: 70369391
+ 2022-11-19 12:54:08,970 INFO [asr_datamodule.py:392] About to get AMI IHM dev cuts
+ 2022-11-19 12:54:08,973 INFO [asr_datamodule.py:413] About to get AMI IHM test cuts
+ 2022-11-19 12:54:08,974 INFO [asr_datamodule.py:398] About to get AMI SDM dev cuts
+ 2022-11-19 12:54:08,975 INFO [asr_datamodule.py:419] About to get AMI SDM test cuts
+ 2022-11-19 12:54:08,976 INFO [asr_datamodule.py:407] About to get AMI GSS-enhanced dev cuts
+ 2022-11-19 12:54:08,977 INFO [asr_datamodule.py:428] About to get AMI GSS-enhanced test cuts
+ 2022-11-19 12:54:11,093 INFO [decode.py:664] Decoding dev_ihm
+ 2022-11-19 12:54:16,741 INFO [decode.py:456] batch 0/?, cuts processed until now is 72
+ 2022-11-19 12:54:23,033 INFO [decode.py:456] batch 2/?, cuts processed until now is 537
+ 2022-11-19 12:54:31,116 INFO [decode.py:456] batch 4/?, cuts processed until now is 689
+ 2022-11-19 12:54:39,025 INFO [decode.py:456] batch 6/?, cuts processed until now is 823
+ 2022-11-19 12:54:47,314 INFO [decode.py:456] batch 8/?, cuts processed until now is 985
+ 2022-11-19 12:54:54,813 INFO [decode.py:456] batch 10/?, cuts processed until now is 1088
+ 2022-11-19 12:55:02,435 INFO [decode.py:456] batch 12/?, cuts processed until now is 1263
+ 2022-11-19 12:55:10,097 INFO [decode.py:456] batch 14/?, cuts processed until now is 1521
+ 2022-11-19 12:55:17,357 INFO [decode.py:456] batch 16/?, cuts processed until now is 1903
+ 2022-11-19 12:55:25,117 INFO [decode.py:456] batch 18/?, cuts processed until now is 2032
+ 2022-11-19 12:55:32,850 INFO [decode.py:456] batch 20/?, cuts processed until now is 2117
+ 2022-11-19 12:55:40,478 INFO [decode.py:456] batch 22/?, cuts processed until now is 2375
+ 2022-11-19 12:55:47,576 INFO [decode.py:456] batch 24/?, cuts processed until now is 2824
+ 2022-11-19 12:55:55,949 INFO [decode.py:456] batch 26/?, cuts processed until now is 2969
+ 2022-11-19 12:56:03,677 INFO [decode.py:456] batch 28/?, cuts processed until now is 3245
+ 2022-11-19 12:56:11,959 INFO [decode.py:456] batch 30/?, cuts processed until now is 3401
+ 2022-11-19 12:56:16,027 INFO [zipformer.py:1411] attn_weights_entropy = tensor([4.6559, 3.0677, 3.6884, 4.5468, 4.6187, 3.6254, 3.2735, 4.6512],
+ device='cuda:0'), covar=tensor([0.0500, 0.3337, 0.2018, 0.2960, 0.0827, 0.3194, 0.2184, 0.0447],
+ device='cuda:0'), in_proj_covar=tensor([0.0228, 0.0185, 0.0175, 0.0289, 0.0205, 0.0190, 0.0175, 0.0223],
+ device='cuda:0'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0006, 0.0005, 0.0005, 0.0004, 0.0005],
+ device='cuda:0')
+ 2022-11-19 12:56:19,677 INFO [decode.py:456] batch 32/?, cuts processed until now is 3519
+ 2022-11-19 12:56:27,677 INFO [decode.py:456] batch 34/?, cuts processed until now is 3694
+ 2022-11-19 12:56:35,802 INFO [decode.py:456] batch 36/?, cuts processed until now is 3818
+ 2022-11-19 12:56:43,932 INFO [decode.py:456] batch 38/?, cuts processed until now is 3970
+ 2022-11-19 12:56:48,143 INFO [decode.py:456] batch 40/?, cuts processed until now is 4750
+ 2022-11-19 12:56:55,402 INFO [decode.py:456] batch 42/?, cuts processed until now is 5038
+ 2022-11-19 12:57:03,146 INFO [decode.py:456] batch 44/?, cuts processed until now is 5144
+ 2022-11-19 12:57:10,750 INFO [decode.py:456] batch 46/?, cuts processed until now is 5253
+ 2022-11-19 12:57:16,704 INFO [decode.py:456] batch 48/?, cuts processed until now is 5672
+ 2022-11-19 12:57:25,202 INFO [decode.py:456] batch 50/?, cuts processed until now is 5878
+ 2022-11-19 12:57:31,984 INFO [decode.py:456] batch 52/?, cuts processed until now is 6260
+ 2022-11-19 12:57:37,626 INFO [decode.py:456] batch 54/?, cuts processed until now is 6808
+ 2022-11-19 12:57:45,231 INFO [decode.py:456] batch 56/?, cuts processed until now is 7117
+ 2022-11-19 12:57:51,970 INFO [decode.py:456] batch 58/?, cuts processed until now is 7565
+ 2022-11-19 12:57:57,723 INFO [decode.py:456] batch 60/?, cuts processed until now is 8078
+ 2022-11-19 12:58:03,631 INFO [decode.py:456] batch 62/?, cuts processed until now is 8626
+ 2022-11-19 12:58:08,965 INFO [decode.py:456] batch 64/?, cuts processed until now is 9174
+ 2022-11-19 12:58:16,328 INFO [decode.py:456] batch 66/?, cuts processed until now is 9455
54
+ 2022-11-19 12:58:21,909 INFO [decode.py:456] batch 68/?, cuts processed until now is 9968
55
+ 2022-11-19 12:58:27,575 INFO [decode.py:456] batch 70/?, cuts processed until now is 10481
56
+ 2022-11-19 12:58:31,898 INFO [decode.py:456] batch 72/?, cuts processed until now is 11264
57
+ 2022-11-19 12:58:35,256 INFO [decode.py:456] batch 74/?, cuts processed until now is 11669
58
+ 2022-11-19 12:58:38,103 INFO [decode.py:456] batch 76/?, cuts processed until now is 11761
59
+ 2022-11-19 12:58:41,725 INFO [decode.py:456] batch 78/?, cuts processed until now is 11843
60
+ 2022-11-19 12:58:41,842 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.4958, 3.9087, 3.0948, 3.5424, 2.9032, 2.9524, 2.3525, 3.2757],
61
+ device='cuda:0'), covar=tensor([0.1316, 0.0197, 0.0685, 0.0387, 0.1168, 0.0841, 0.1668, 0.0395],
62
+ device='cuda:0'), in_proj_covar=tensor([0.0147, 0.0124, 0.0146, 0.0129, 0.0160, 0.0156, 0.0151, 0.0141],
63
+ device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0004, 0.0004, 0.0003],
64
+ device='cuda:0')
65
+ 2022-11-19 12:58:45,493 INFO [decode.py:456] batch 80/?, cuts processed until now is 11956
66
+ 2022-11-19 12:58:48,594 INFO [decode.py:456] batch 82/?, cuts processed until now is 12467
67
+ 2022-11-19 12:58:55,386 INFO [decode.py:456] batch 84/?, cuts processed until now is 12586
68
+ 2022-11-19 12:58:57,593 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/modified_beam_search/recogs-dev_ihm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt
69
+ 2022-11-19 12:58:57,753 INFO [utils.py:531] [dev_ihm-beam_size_4] %WER 19.23% [18256 / 94940, 2870 ins, 3323 del, 12063 sub ]
70
+ 2022-11-19 12:58:58,486 INFO [utils.py:531] [dev_ihm-beam_size_4] %WER 12.01% [44428 / 369873, 11497 ins, 15114 del, 17817 sub ]
71
+ 2022-11-19 12:58:59,581 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/modified_beam_search/wers-dev_ihm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt
72
+ 2022-11-19 12:58:59,582 INFO [decode.py:518]
73
+ For dev_ihm, WER/CER of different settings are:
74
+ beam_size_4 19.23 12.01 best for dev_ihm
75
+
76
+ 2022-11-19 12:58:59,589 INFO [decode.py:664] Decoding test_ihm
+ 2022-11-19 12:59:05,333 INFO [decode.py:456] batch 0/?, cuts processed until now is 69
+ 2022-11-19 12:59:11,431 INFO [decode.py:456] batch 2/?, cuts processed until now is 555
+ 2022-11-19 12:59:19,119 INFO [decode.py:456] batch 4/?, cuts processed until now is 703
+ 2022-11-19 12:59:26,918 INFO [decode.py:456] batch 6/?, cuts processed until now is 830
+ 2022-11-19 12:59:34,911 INFO [decode.py:456] batch 8/?, cuts processed until now is 987
+ 2022-11-19 12:59:42,620 INFO [decode.py:456] batch 10/?, cuts processed until now is 1095
+ 2022-11-19 12:59:50,594 INFO [decode.py:456] batch 12/?, cuts processed until now is 1267
+ 2022-11-19 12:59:57,726 INFO [decode.py:456] batch 14/?, cuts processed until now is 1532
+ 2022-11-19 13:00:04,384 INFO [decode.py:456] batch 16/?, cuts processed until now is 1931
+ 2022-11-19 13:00:12,175 INFO [decode.py:456] batch 18/?, cuts processed until now is 2055
+ 2022-11-19 13:00:20,185 INFO [decode.py:456] batch 20/?, cuts processed until now is 2124
+ 2022-11-19 13:00:27,974 INFO [decode.py:456] batch 22/?, cuts processed until now is 2388
+ 2022-11-19 13:00:34,954 INFO [decode.py:456] batch 24/?, cuts processed until now is 2856
+ 2022-11-19 13:00:43,267 INFO [decode.py:456] batch 26/?, cuts processed until now is 2996
+ 2022-11-19 13:00:50,703 INFO [decode.py:456] batch 28/?, cuts processed until now is 3278
+ 2022-11-19 13:00:58,778 INFO [decode.py:456] batch 30/?, cuts processed until now is 3430
+ 2022-11-19 13:01:06,753 INFO [decode.py:456] batch 32/?, cuts processed until now is 3535
+ 2022-11-19 13:01:14,299 INFO [decode.py:456] batch 34/?, cuts processed until now is 3706
+ 2022-11-19 13:01:22,542 INFO [decode.py:456] batch 36/?, cuts processed until now is 3822
+ 2022-11-19 13:01:30,038 INFO [decode.py:456] batch 38/?, cuts processed until now is 3969
+ 2022-11-19 13:01:36,656 INFO [decode.py:456] batch 40/?, cuts processed until now is 4411
+ 2022-11-19 13:01:42,322 INFO [decode.py:456] batch 42/?, cuts processed until now is 5058
+ 2022-11-19 13:01:48,194 INFO [decode.py:456] batch 44/?, cuts processed until now is 5544
+ 2022-11-19 13:01:56,168 INFO [decode.py:456] batch 46/?, cuts processed until now is 5685
+ 2022-11-19 13:02:03,891 INFO [decode.py:456] batch 48/?, cuts processed until now is 5890
+ 2022-11-19 13:02:10,233 INFO [decode.py:456] batch 50/?, cuts processed until now is 6372
+ 2022-11-19 13:02:17,221 INFO [decode.py:456] batch 52/?, cuts processed until now is 6706
+ 2022-11-19 13:02:24,344 INFO [decode.py:456] batch 54/?, cuts processed until now is 7105
+ 2022-11-19 13:02:31,778 INFO [decode.py:456] batch 56/?, cuts processed until now is 7290
+ 2022-11-19 13:02:35,644 INFO [decode.py:456] batch 58/?, cuts processed until now is 8116
+ 2022-11-19 13:02:42,624 INFO [decode.py:456] batch 60/?, cuts processed until now is 8258
+ 2022-11-19 13:02:48,076 INFO [decode.py:456] batch 62/?, cuts processed until now is 8794
+ 2022-11-19 13:02:53,598 INFO [decode.py:456] batch 64/?, cuts processed until now is 9330
+ 2022-11-19 13:03:00,513 INFO [decode.py:456] batch 66/?, cuts processed until now is 9476
+ 2022-11-19 13:03:05,754 INFO [decode.py:456] batch 68/?, cuts processed until now is 9921
+ 2022-11-19 13:03:12,407 INFO [decode.py:456] batch 70/?, cuts processed until now is 10251
+ 2022-11-19 13:03:16,452 INFO [decode.py:456] batch 72/?, cuts processed until now is 10679
+ 2022-11-19 13:03:22,110 INFO [decode.py:456] batch 74/?, cuts processed until now is 10794
+ 2022-11-19 13:03:25,805 INFO [decode.py:456] batch 76/?, cuts processed until now is 11039
+ 2022-11-19 13:03:28,132 INFO [decode.py:456] batch 78/?, cuts processed until now is 11155
+ 2022-11-19 13:03:30,975 INFO [decode.py:456] batch 80/?, cuts processed until now is 11600
+ 2022-11-19 13:03:35,003 INFO [decode.py:456] batch 82/?, cuts processed until now is 12041
+ 2022-11-19 13:03:37,811 INFO [decode.py:456] batch 84/?, cuts processed until now is 12110
+ 2022-11-19 13:03:37,991 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/modified_beam_search/recogs-test_ihm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt
+ 2022-11-19 13:03:38,127 INFO [utils.py:531] [test_ihm-beam_size_4] %WER 18.06% [16188 / 89659, 2094 ins, 3408 del, 10686 sub ]
+ 2022-11-19 13:03:38,852 INFO [utils.py:531] [test_ihm-beam_size_4] %WER 11.13% [39423 / 354205, 9472 ins, 14545 del, 15406 sub ]
+ 2022-11-19 13:03:39,696 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/modified_beam_search/wers-test_ihm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt
+ 2022-11-19 13:03:39,698 INFO [decode.py:518]
+ For test_ihm, WER/CER of different settings are:
+ beam_size_4 18.06 11.13 best for test_ihm
+
+ 2022-11-19 13:03:39,702 INFO [decode.py:664] Decoding dev_sdm
+ 2022-11-19 13:03:44,715 INFO [decode.py:456] batch 0/?, cuts processed until now is 71
+ 2022-11-19 13:03:50,656 INFO [decode.py:456] batch 2/?, cuts processed until now is 535
+ 2022-11-19 13:03:58,006 INFO [decode.py:456] batch 4/?, cuts processed until now is 686
+ 2022-11-19 13:04:05,549 INFO [decode.py:456] batch 6/?, cuts processed until now is 819
+ 2022-11-19 13:04:13,012 INFO [decode.py:456] batch 8/?, cuts processed until now is 980
+ 2022-11-19 13:04:20,057 INFO [decode.py:456] batch 10/?, cuts processed until now is 1083
+ 2022-11-19 13:04:27,059 INFO [decode.py:456] batch 12/?, cuts processed until now is 1257
+ 2022-11-19 13:04:34,051 INFO [decode.py:456] batch 14/?, cuts processed until now is 1513
+ 2022-11-19 13:04:40,276 INFO [decode.py:456] batch 16/?, cuts processed until now is 1892
+ 2022-11-19 13:04:47,403 INFO [decode.py:456] batch 18/?, cuts processed until now is 2020
+ 2022-11-19 13:04:54,669 INFO [decode.py:456] batch 20/?, cuts processed until now is 2106
+ 2022-11-19 13:05:01,732 INFO [decode.py:456] batch 22/?, cuts processed until now is 2362
+ 2022-11-19 13:05:07,790 INFO [decode.py:456] batch 24/?, cuts processed until now is 2807
+ 2022-11-19 13:05:15,350 INFO [decode.py:456] batch 26/?, cuts processed until now is 2952
+ 2022-11-19 13:05:22,296 INFO [decode.py:456] batch 28/?, cuts processed until now is 3226
+ 2022-11-19 13:05:29,771 INFO [decode.py:456] batch 30/?, cuts processed until now is 3381
+ 2022-11-19 13:05:36,948 INFO [decode.py:456] batch 32/?, cuts processed until now is 3499
+ 2022-11-19 13:05:44,262 INFO [decode.py:456] batch 34/?, cuts processed until now is 3673
+ 2022-11-19 13:05:52,006 INFO [decode.py:456] batch 36/?, cuts processed until now is 3797
+ 2022-11-19 13:05:59,233 INFO [decode.py:456] batch 38/?, cuts processed until now is 3948
+ 2022-11-19 13:06:03,524 INFO [decode.py:456] batch 40/?, cuts processed until now is 4722
+ 2022-11-19 13:06:10,440 INFO [decode.py:456] batch 42/?, cuts processed until now is 5007
+ 2022-11-19 13:06:17,466 INFO [decode.py:456] batch 44/?, cuts processed until now is 5112
+ 2022-11-19 13:06:24,812 INFO [decode.py:456] batch 46/?, cuts processed until now is 5219
+ 2022-11-19 13:06:29,997 INFO [decode.py:456] batch 48/?, cuts processed until now is 5636
+ 2022-11-19 13:06:37,129 INFO [decode.py:456] batch 50/?, cuts processed until now is 5842
+ 2022-11-19 13:06:43,402 INFO [decode.py:456] batch 52/?, cuts processed until now is 6222
+ 2022-11-19 13:06:48,446 INFO [decode.py:456] batch 54/?, cuts processed until now is 6766
+ 2022-11-19 13:06:55,071 INFO [decode.py:456] batch 56/?, cuts processed until now is 7072
+ 2022-11-19 13:07:01,247 INFO [decode.py:456] batch 58/?, cuts processed until now is 7518
+ 2022-11-19 13:07:06,422 INFO [decode.py:456] batch 60/?, cuts processed until now is 8027
+ 2022-11-19 13:07:11,459 INFO [decode.py:456] batch 62/?, cuts processed until now is 8571
+ 2022-11-19 13:07:16,642 INFO [decode.py:456] batch 64/?, cuts processed until now is 9115
+ 2022-11-19 13:07:23,323 INFO [decode.py:456] batch 66/?, cuts processed until now is 9395
+ 2022-11-19 13:07:28,391 INFO [decode.py:456] batch 68/?, cuts processed until now is 9904
+ 2022-11-19 13:07:33,414 INFO [decode.py:456] batch 70/?, cuts processed until now is 10413
+ 2022-11-19 13:07:37,463 INFO [decode.py:456] batch 72/?, cuts processed until now is 11190
+ 2022-11-19 13:07:40,469 INFO [decode.py:456] batch 74/?, cuts processed until now is 11589
+ 2022-11-19 13:07:43,091 INFO [decode.py:456] batch 76/?, cuts processed until now is 11699
+ 2022-11-19 13:07:46,815 INFO [decode.py:456] batch 78/?, cuts processed until now is 11799
+ 2022-11-19 13:07:49,836 INFO [decode.py:456] batch 80/?, cuts processed until now is 11889
+ 2022-11-19 13:07:52,917 INFO [decode.py:456] batch 82/?, cuts processed until now is 12461
+ 2022-11-19 13:07:56,930 INFO [decode.py:456] batch 84/?, cuts processed until now is 12568
+ 2022-11-19 13:08:01,920 INFO [decode.py:456] batch 86/?, cuts processed until now is 12601
+ 2022-11-19 13:08:02,120 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/modified_beam_search/recogs-dev_sdm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt
+ 2022-11-19 13:08:02,304 INFO [utils.py:531] [dev_sdm-beam_size_4] %WER 31.16% [29584 / 94940, 4923 ins, 6221 del, 18440 sub ]
+ 2022-11-19 13:08:03,054 INFO [utils.py:531] [dev_sdm-beam_size_4] %WER 22.34% [82613 / 369873, 21552 ins, 27378 del, 33683 sub ]
+ 2022-11-19 13:08:04,137 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/modified_beam_search/wers-dev_sdm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt
+ 2022-11-19 13:08:04,139 INFO [decode.py:518]
+ For dev_sdm, WER/CER of different settings are:
+ beam_size_4 31.16 22.34 best for dev_sdm
+
+ 2022-11-19 13:08:04,148 INFO [decode.py:664] Decoding test_sdm
+ 2022-11-19 13:08:09,082 INFO [decode.py:456] batch 0/?, cuts processed until now is 69
+ 2022-11-19 13:08:14,972 INFO [decode.py:456] batch 2/?, cuts processed until now is 555
+ 2022-11-19 13:08:22,066 INFO [decode.py:456] batch 4/?, cuts processed until now is 703
+ 2022-11-19 13:08:29,572 INFO [decode.py:456] batch 6/?, cuts processed until now is 831
+ 2022-11-19 13:08:36,824 INFO [decode.py:456] batch 8/?, cuts processed until now is 988
+ 2022-11-19 13:08:43,853 INFO [decode.py:456] batch 10/?, cuts processed until now is 1096
+ 2022-11-19 13:08:51,006 INFO [decode.py:456] batch 12/?, cuts processed until now is 1268
+ 2022-11-19 13:08:57,699 INFO [decode.py:456] batch 14/?, cuts processed until now is 1533
+ 2022-11-19 13:09:04,019 INFO [decode.py:456] batch 16/?, cuts processed until now is 1932
+ 2022-11-19 13:09:11,367 INFO [decode.py:456] batch 18/?, cuts processed until now is 2057
+ 2022-11-19 13:09:18,482 INFO [decode.py:456] batch 20/?, cuts processed until now is 2126
+ 2022-11-19 13:09:25,169 INFO [decode.py:456] batch 22/?, cuts processed until now is 2390
+ 2022-11-19 13:09:31,139 INFO [decode.py:456] batch 24/?, cuts processed until now is 2858
+ 2022-11-19 13:09:38,420 INFO [decode.py:456] batch 26/?, cuts processed until now is 2998
+ 2022-11-19 13:09:45,229 INFO [decode.py:456] batch 28/?, cuts processed until now is 3280
+ 2022-11-19 13:09:52,295 INFO [decode.py:456] batch 30/?, cuts processed until now is 3432
+ 2022-11-19 13:09:59,487 INFO [decode.py:456] batch 32/?, cuts processed until now is 3537
+ 2022-11-19 13:10:06,729 INFO [decode.py:456] batch 34/?, cuts processed until now is 3709
+ 2022-11-19 13:10:14,298 INFO [decode.py:456] batch 36/?, cuts processed until now is 3825
+ 2022-11-19 13:10:21,458 INFO [decode.py:456] batch 38/?, cuts processed until now is 3972
+ 2022-11-19 13:10:26,155 INFO [decode.py:456] batch 40/?, cuts processed until now is 4410
+ 2022-11-19 13:10:31,278 INFO [decode.py:456] batch 42/?, cuts processed until now is 5060
+ 2022-11-19 13:10:36,842 INFO [decode.py:456] batch 44/?, cuts processed until now is 5546
+ 2022-11-19 13:10:44,033 INFO [decode.py:456] batch 46/?, cuts processed until now is 5687
+ 2022-11-19 13:10:51,072 INFO [decode.py:456] batch 48/?, cuts processed until now is 5893
+ 2022-11-19 13:10:56,898 INFO [decode.py:456] batch 50/?, cuts processed until now is 6379
+ 2022-11-19 13:11:03,351 INFO [decode.py:456] batch 52/?, cuts processed until now is 6713
+ 2022-11-19 13:11:09,687 INFO [decode.py:456] batch 54/?, cuts processed until now is 7112
+ 2022-11-19 13:11:16,614 INFO [decode.py:456] batch 56/?, cuts processed until now is 7298
+ 2022-11-19 13:11:20,297 INFO [decode.py:456] batch 58/?, cuts processed until now is 8130
+ 2022-11-19 13:11:27,243 INFO [decode.py:456] batch 60/?, cuts processed until now is 8273
+ 2022-11-19 13:11:32,570 INFO [decode.py:456] batch 62/?, cuts processed until now is 8813
+ 2022-11-19 13:11:37,685 INFO [decode.py:456] batch 64/?, cuts processed until now is 9353
+ 2022-11-19 13:11:44,395 INFO [decode.py:456] batch 66/?, cuts processed until now is 9500
+ 2022-11-19 13:11:49,620 INFO [decode.py:456] batch 68/?, cuts processed until now is 9944
+ 2022-11-19 13:11:56,180 INFO [decode.py:456] batch 70/?, cuts processed until now is 10274
+ 2022-11-19 13:12:00,566 INFO [decode.py:456] batch 72/?, cuts processed until now is 10711
+ 2022-11-19 13:12:05,927 INFO [decode.py:456] batch 74/?, cuts processed until now is 10820
+ 2022-11-19 13:12:09,976 INFO [decode.py:456] batch 76/?, cuts processed until now is 11076
+ 2022-11-19 13:12:12,387 INFO [decode.py:456] batch 78/?, cuts processed until now is 11209
+ 2022-11-19 13:12:15,017 INFO [decode.py:456] batch 80/?, cuts processed until now is 11651
+ 2022-11-19 13:12:19,170 INFO [decode.py:456] batch 82/?, cuts processed until now is 12070
+ 2022-11-19 13:12:21,853 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/modified_beam_search/recogs-test_sdm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt
+ 2022-11-19 13:12:22,090 INFO [utils.py:531] [test_sdm-beam_size_4] %WER 32.61% [29235 / 89659, 4435 ins, 7052 del, 17748 sub ]
+ 2022-11-19 13:12:22,766 INFO [utils.py:531] [test_sdm-beam_size_4] %WER 23.59% [83552 / 354205, 20989 ins, 29709 del, 32854 sub ]
+ 2022-11-19 13:12:23,740 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/modified_beam_search/wers-test_sdm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt
+ 2022-11-19 13:12:23,742 INFO [decode.py:518]
+ For test_sdm, WER/CER of different settings are:
+ beam_size_4 32.61 23.59 best for test_sdm
+
+ 2022-11-19 13:12:23,745 INFO [decode.py:664] Decoding dev_gss
+ 2022-11-19 13:12:28,501 INFO [decode.py:456] batch 0/?, cuts processed until now is 71
+ 2022-11-19 13:12:34,360 INFO [decode.py:456] batch 2/?, cuts processed until now is 535
+ 2022-11-19 13:12:41,890 INFO [decode.py:456] batch 4/?, cuts processed until now is 686
+ 2022-11-19 13:12:49,720 INFO [decode.py:456] batch 6/?, cuts processed until now is 819
+ 2022-11-19 13:12:57,066 INFO [decode.py:456] batch 8/?, cuts processed until now is 980
+ 2022-11-19 13:13:04,457 INFO [decode.py:456] batch 10/?, cuts processed until now is 1083
+ 2022-11-19 13:13:11,699 INFO [decode.py:456] batch 12/?, cuts processed until now is 1257
+ 2022-11-19 13:13:18,723 INFO [decode.py:456] batch 14/?, cuts processed until now is 1513
+ 2022-11-19 13:13:24,816 INFO [decode.py:456] batch 16/?, cuts processed until now is 1892
+ 2022-11-19 13:13:31,763 INFO [decode.py:456] batch 18/?, cuts processed until now is 2020
+ 2022-11-19 13:13:38,902 INFO [decode.py:456] batch 20/?, cuts processed until now is 2106
+ 2022-11-19 13:13:45,639 INFO [decode.py:456] batch 22/?, cuts processed until now is 2362
+ 2022-11-19 13:13:51,813 INFO [decode.py:456] batch 24/?, cuts processed until now is 2807
+ 2022-11-19 13:13:59,106 INFO [decode.py:456] batch 26/?, cuts processed until now is 2952
+ 2022-11-19 13:14:06,253 INFO [decode.py:456] batch 28/?, cuts processed until now is 3226
+ 2022-11-19 13:14:13,520 INFO [decode.py:456] batch 30/?, cuts processed until now is 3381
+ 2022-11-19 13:14:20,833 INFO [decode.py:456] batch 32/?, cuts processed until now is 3499
+ 2022-11-19 13:14:27,882 INFO [decode.py:456] batch 34/?, cuts processed until now is 3673
+ 2022-11-19 13:14:35,590 INFO [decode.py:456] batch 36/?, cuts processed until now is 3797
+ 2022-11-19 13:14:42,922 INFO [decode.py:456] batch 38/?, cuts processed until now is 3948
+ 2022-11-19 13:14:47,034 INFO [decode.py:456] batch 40/?, cuts processed until now is 4722
+ 2022-11-19 13:14:53,867 INFO [decode.py:456] batch 42/?, cuts processed until now is 5007
+ 2022-11-19 13:15:00,943 INFO [decode.py:456] batch 44/?, cuts processed until now is 5112
+ 2022-11-19 13:15:08,047 INFO [decode.py:456] batch 46/?, cuts processed until now is 5219
+ 2022-11-19 13:15:13,106 INFO [decode.py:456] batch 48/?, cuts processed until now is 5636
+ 2022-11-19 13:15:17,115 INFO [zipformer.py:1411] attn_weights_entropy = tensor([2.1174, 1.9694, 2.7793, 1.8865, 1.6161, 3.0895, 2.4197, 2.1991],
+ device='cuda:0'), covar=tensor([0.1009, 0.2240, 0.0841, 0.3189, 0.4134, 0.0713, 0.1611, 0.2062],
+ device='cuda:0'), in_proj_covar=tensor([0.0089, 0.0080, 0.0079, 0.0089, 0.0065, 0.0055, 0.0065, 0.0077],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002],
+ device='cuda:0')
+ 2022-11-19 13:15:20,335 INFO [decode.py:456] batch 50/?, cuts processed until now is 5842
+ 2022-11-19 13:15:26,492 INFO [decode.py:456] batch 52/?, cuts processed until now is 6222
+ 2022-11-19 13:15:31,490 INFO [decode.py:456] batch 54/?, cuts processed until now is 6766
+ 2022-11-19 13:15:38,029 INFO [decode.py:456] batch 56/?, cuts processed until now is 7072
+ 2022-11-19 13:15:43,996 INFO [decode.py:456] batch 58/?, cuts processed until now is 7518
+ 2022-11-19 13:15:49,105 INFO [decode.py:456] batch 60/?, cuts processed until now is 8027
+ 2022-11-19 13:15:54,295 INFO [decode.py:456] batch 62/?, cuts processed until now is 8571
+ 2022-11-19 13:15:59,300 INFO [decode.py:456] batch 64/?, cuts processed until now is 9115
+ 2022-11-19 13:16:06,072 INFO [decode.py:456] batch 66/?, cuts processed until now is 9395
+ 2022-11-19 13:16:11,136 INFO [decode.py:456] batch 68/?, cuts processed until now is 9904
+ 2022-11-19 13:16:16,150 INFO [decode.py:456] batch 70/?, cuts processed until now is 10413
+ 2022-11-19 13:16:19,999 INFO [decode.py:456] batch 72/?, cuts processed until now is 11190
+ 2022-11-19 13:16:23,041 INFO [decode.py:456] batch 74/?, cuts processed until now is 11589
+ 2022-11-19 13:16:25,768 INFO [decode.py:456] batch 76/?, cuts processed until now is 11699
+ 2022-11-19 13:16:29,457 INFO [decode.py:456] batch 78/?, cuts processed until now is 11799
+ 2022-11-19 13:16:32,477 INFO [decode.py:456] batch 80/?, cuts processed until now is 11889
+ 2022-11-19 13:16:35,456 INFO [decode.py:456] batch 82/?, cuts processed until now is 12461
+ 2022-11-19 13:16:39,336 INFO [decode.py:456] batch 84/?, cuts processed until now is 12568
+ 2022-11-19 13:16:44,150 INFO [decode.py:456] batch 86/?, cuts processed until now is 12601
+ 2022-11-19 13:16:44,321 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/modified_beam_search/recogs-dev_gss-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt
+ 2022-11-19 13:16:44,571 INFO [utils.py:531] [dev_gss-beam_size_4] %WER 22.08% [20967 / 94940, 2979 ins, 4055 del, 13933 sub ]
+ 2022-11-19 13:16:45,272 INFO [utils.py:531] [dev_gss-beam_size_4] %WER 14.33% [53002 / 369873, 12785 ins, 18039 del, 22178 sub ]
+ 2022-11-19 13:16:46,164 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/modified_beam_search/wers-dev_gss-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt
+ 2022-11-19 13:16:46,166 INFO [decode.py:518]
+ For dev_gss, WER/CER of different settings are:
+ beam_size_4 22.08 14.33 best for dev_gss
+
+ 2022-11-19 13:16:46,170 INFO [decode.py:664] Decoding test_gss
+ 2022-11-19 13:16:51,050 INFO [decode.py:456] batch 0/?, cuts processed until now is 69
+ 2022-11-19 13:16:56,710 INFO [decode.py:456] batch 2/?, cuts processed until now is 555
+ 2022-11-19 13:17:03,943 INFO [decode.py:456] batch 4/?, cuts processed until now is 703
+ 2022-11-19 13:17:11,131 INFO [decode.py:456] batch 6/?, cuts processed until now is 831
+ 2022-11-19 13:17:18,481 INFO [decode.py:456] batch 8/?, cuts processed until now is 988
+ 2022-11-19 13:17:25,419 INFO [decode.py:456] batch 10/?, cuts processed until now is 1096
+ 2022-11-19 13:17:32,572 INFO [decode.py:456] batch 12/?, cuts processed until now is 1268
+ 2022-11-19 13:17:39,300 INFO [decode.py:456] batch 14/?, cuts processed until now is 1533
+ 2022-11-19 13:17:45,583 INFO [decode.py:456] batch 16/?, cuts processed until now is 1932
+ 2022-11-19 13:17:52,738 INFO [decode.py:456] batch 18/?, cuts processed until now is 2057
+ 2022-11-19 13:17:59,753 INFO [decode.py:456] batch 20/?, cuts processed until now is 2126
+ 2022-11-19 13:18:06,723 INFO [decode.py:456] batch 22/?, cuts processed until now is 2390
+ 2022-11-19 13:18:12,727 INFO [decode.py:456] batch 24/?, cuts processed until now is 2858
+ 2022-11-19 13:18:19,958 INFO [decode.py:456] batch 26/?, cuts processed until now is 2998
+ 2022-11-19 13:18:27,012 INFO [decode.py:456] batch 28/?, cuts processed until now is 3280
+ 2022-11-19 13:18:34,136 INFO [decode.py:456] batch 30/?, cuts processed until now is 3432
+ 2022-11-19 13:18:41,244 INFO [decode.py:456] batch 32/?, cuts processed until now is 3537
+ 2022-11-19 13:18:48,017 INFO [decode.py:456] batch 34/?, cuts processed until now is 3709
+ 2022-11-19 13:18:55,321 INFO [decode.py:456] batch 36/?, cuts processed until now is 3825
+ 2022-11-19 13:19:02,492 INFO [decode.py:456] batch 38/?, cuts processed until now is 3972
+ 2022-11-19 13:19:07,193 INFO [decode.py:456] batch 40/?, cuts processed until now is 4410
+ 2022-11-19 13:19:12,039 INFO [decode.py:456] batch 42/?, cuts processed until now is 5060
+ 2022-11-19 13:19:12,188 INFO [zipformer.py:1411] attn_weights_entropy = tensor([4.4848, 4.6789, 4.1788, 4.6840, 4.6719, 3.8186, 4.3463, 4.1185],
+ device='cuda:0'), covar=tensor([0.0147, 0.0285, 0.1255, 0.0281, 0.0348, 0.0404, 0.0395, 0.0541],
+ device='cuda:0'), in_proj_covar=tensor([0.0119, 0.0165, 0.0262, 0.0160, 0.0205, 0.0160, 0.0175, 0.0162],
+ device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002, 0.0002, 0.0002],
+ device='cuda:0')
+ 2022-11-19 13:19:17,475 INFO [decode.py:456] batch 44/?, cuts processed until now is 5546
+ 2022-11-19 13:19:24,517 INFO [decode.py:456] batch 46/?, cuts processed until now is 5687
+ 2022-11-19 13:19:31,445 INFO [decode.py:456] batch 48/?, cuts processed until now is 5893
+ 2022-11-19 13:19:36,884 INFO [decode.py:456] batch 50/?, cuts processed until now is 6379
+ 2022-11-19 13:19:43,164 INFO [decode.py:456] batch 52/?, cuts processed until now is 6713
+ 2022-11-19 13:19:49,495 INFO [decode.py:456] batch 54/?, cuts processed until now is 7112
+ 2022-11-19 13:19:56,399 INFO [decode.py:456] batch 56/?, cuts processed until now is 7298
+ 2022-11-19 13:20:00,169 INFO [decode.py:456] batch 58/?, cuts processed until now is 8130
+ 2022-11-19 13:20:07,014 INFO [decode.py:456] batch 60/?, cuts processed until now is 8273
+ 2022-11-19 13:20:12,206 INFO [decode.py:456] batch 62/?, cuts processed until now is 8813
+ 2022-11-19 13:20:17,331 INFO [decode.py:456] batch 64/?, cuts processed until now is 9353
+ 2022-11-19 13:20:23,907 INFO [decode.py:456] batch 66/?, cuts processed until now is 9500
+ 2022-11-19 13:20:29,027 INFO [decode.py:456] batch 68/?, cuts processed until now is 9944
+ 2022-11-19 13:20:35,463 INFO [decode.py:456] batch 70/?, cuts processed until now is 10274
+ 2022-11-19 13:20:39,780 INFO [decode.py:456] batch 72/?, cuts processed until now is 10711
+ 2022-11-19 13:20:45,008 INFO [decode.py:456] batch 74/?, cuts processed until now is 10820
+ 2022-11-19 13:20:48,931 INFO [decode.py:456] batch 76/?, cuts processed until now is 11076
+ 2022-11-19 13:20:51,278 INFO [decode.py:456] batch 78/?, cuts processed until now is 11209
+ 2022-11-19 13:20:53,853 INFO [decode.py:456] batch 80/?, cuts processed until now is 11651
+ 2022-11-19 13:20:58,067 INFO [decode.py:456] batch 82/?, cuts processed until now is 12070
+ 2022-11-19 13:21:01,142 INFO [decode.py:472] The transcripts are stored in pruned_transducer_stateless7/exp/v2/modified_beam_search/recogs-test_gss-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt
+ 2022-11-19 13:21:01,286 INFO [utils.py:531] [test_gss-beam_size_4] %WER 23.03% [20652 / 89659, 2423 ins, 4743 del, 13486 sub ]
+ 2022-11-19 13:21:01,980 INFO [utils.py:531] [test_gss-beam_size_4] %WER 15.22% [53902 / 354205, 11533 ins, 20212 del, 22157 sub ]
+ 2022-11-19 13:21:02,980 INFO [decode.py:498] Wrote detailed error stats to pruned_transducer_stateless7/exp/v2/modified_beam_search/wers-test_gss-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt
+ 2022-11-19 13:21:02,993 INFO [decode.py:518]
+ For test_gss, WER/CER of different settings are:
+ beam_size_4 23.03 15.22 best for test_gss
+
+ 2022-11-19 13:21:02,997 INFO [decode.py:681] Done!
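Each `[utils.py:531]` summary line above reports (insertions + deletions + substitutions) over the number of reference tokens; the first line per test set is word-level, the second character-level (the `%WER` label is printed for both, but the second value is the CER shown in the summary tables). A minimal sketch reproducing the dev_ihm figures from the logged counts (the helper name `error_rate` is illustrative, not icefall code):

```python
def error_rate(ins: int, dels: int, subs: int, ref_tokens: int) -> float:
    """Error rate in percent: (ins + del + sub) / number of reference tokens."""
    return 100.0 * (ins + dels + subs) / ref_tokens

# dev_ihm, beam_size_4, word level: 2870 ins, 3323 del, 12063 sub over 94940 words
wer = error_rate(2870, 3323, 12063, 94940)

# dev_ihm, beam_size_4, character level: 11497 ins, 15114 del, 17817 sub over 369873 chars
cer = error_rate(11497, 15114, 17817, 369873)

print(f"{wer:.2f} {cer:.2f}")  # prints "19.23 12.01", matching the log
```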
log/modified_beam_search/recogs-dev_gss-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt ADDED
The diff for this file is too large to render. See raw diff

log/modified_beam_search/recogs-dev_ihm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt ADDED
The diff for this file is too large to render. See raw diff

log/modified_beam_search/recogs-dev_sdm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt ADDED
The diff for this file is too large to render. See raw diff

log/modified_beam_search/recogs-test_gss-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt ADDED
The diff for this file is too large to render. See raw diff

log/modified_beam_search/recogs-test_ihm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt ADDED
The diff for this file is too large to render. See raw diff

log/modified_beam_search/recogs-test_sdm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt ADDED
The diff for this file is too large to render. See raw diff

log/modified_beam_search/wer-summary-dev_gss-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt ADDED
@@ -0,0 +1,2 @@
+ settings WER CER
+ beam_size_4 22.08 14.33

log/modified_beam_search/wer-summary-dev_ihm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt ADDED
@@ -0,0 +1,2 @@
+ settings WER CER
+ beam_size_4 19.23 12.01

log/modified_beam_search/wer-summary-dev_sdm-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt ADDED
@@ -0,0 +1,2 @@
+ settings WER CER
+ beam_size_4 31.16 22.34

log/modified_beam_search/wer-summary-test_gss-beam_size_4-epoch-99-avg-1-modified_beam_search-beam-size-4.txt ADDED
@@ -0,0 +1,2 @@
+ settings WER CER
+ beam_size_4 23.03 15.22