icefall-asr-alimeeting-pruned-transducer-stateless7/log/greedy_search/log-decode-epoch-15-avg-8-context-2-max-sym-per-frame-1-2022-12-08-23-52-09
2022-12-08 23:52:09,512 INFO [decode.py:551] Decoding started
2022-12-08 23:52:09,513 INFO [decode.py:557] Device: cuda:0
2022-12-08 23:52:09,579 INFO [lexicon.py:168] Loading pre-compiled data/lang_char/Linv.pt
2022-12-08 23:52:09,589 INFO [decode.py:563] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 100, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'b2ce63f3940018e7b433c43fd802fc50ab006a76', 'k2-git-date': 'Wed Nov 23 08:43:43 2022', 'lhotse-version': '1.9.0.dev+git.97bf4b0.dirty', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'ali_meeting', 'icefall-git-sha1': 'f13cf61-dirty', 'icefall-git-date': 'Tue Dec 6 03:34:27 2022', 'icefall-path': '/exp/draj/mini_scale_2022/icefall', 'k2-path': '/exp/draj/mini_scale_2022/k2/k2/python/k2/__init__.py', 'lhotse-path': '/exp/draj/mini_scale_2022/lhotse/lhotse/__init__.py', 'hostname': 'r2n06', 'IP address': '10.1.2.6'}, 'epoch': 15, 'iter': 0, 'avg': 8, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7/exp/v1'), 'lang_dir': 'data/lang_char', 'decoding_method': 'greedy_search', 'beam_size': 4, 'beam': 4, 'ngram_lm_scale': 0.01, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'manifest_dir': PosixPath('data/manifests'), 'enable_musan': True, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'max_duration': 500, 'max_cuts': None, 'num_buckets': 50, 'on_the_fly_feats': False, 'shuffle': True, 'num_workers': 8, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'res_dir': PosixPath('pruned_transducer_stateless7/exp/v1/greedy_search'), 'suffix': 'epoch-15-avg-8-context-2-max-sym-per-frame-1', 'blank_id': 0, 'vocab_size': 3290}
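For readability, the handful of options that actually control this decoding run can be lifted out of the parameter dump above. A minimal sketch in plain Python, with values copied from the log; collecting them into a Namespace is only for illustration and is not how decode.py stores them:

    from argparse import Namespace

    # Key decoding settings copied from the parameter dump above (illustrative only).
    decode_opts = Namespace(
        epoch=15,                        # checkpoint epoch to decode with
        avg=8,                           # number of epochs averaged into the model
        use_averaged_model=True,
        decoding_method="greedy_search",
        context_size=2,                  # decoder (prediction network) context
        max_sym_per_frame=1,             # at most one non-blank symbol per frame
        max_duration=500,                # max total cut duration per batch, seconds
        exp_dir="pruned_transducer_stateless7/exp/v1",
        lang_dir="data/lang_char",       # character lexicon, vocab_size=3290
    )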
2022-12-08 23:52:09,589 INFO [decode.py:565] About to create model
2022-12-08 23:52:10,047 INFO [zipformer.py:179] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
2022-12-08 23:52:10,093 INFO [decode.py:632] Calculating the averaged model over epoch range from 7 (excluded) to 15
2022-12-08 23:52:26,211 INFO [decode.py:655] Number of model parameters: 75734561
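The two lines above cover checkpoint averaging (epochs 8 through 15, i.e. avg=8 with epoch 7 excluded) and the parameter count. icefall's --use-averaged-model path has its own averaging utilities; the snippet below is only a generic sketch of averaging per-epoch state dicts, with hypothetical checkpoint filenames, plus the standard PyTorch way to count parameters as reported above.

    import torch

    # Generic sketch: naive mean of the epoch-8..epoch-15 checkpoint weights.
    # Filenames are hypothetical, and each checkpoint is assumed to store its
    # weights under a "model" key; icefall's --use-averaged-model uses its own
    # averaging logic rather than this simple mean.
    ckpt_paths = [f"pruned_transducer_stateless7/exp/v1/epoch-{e}.pt" for e in range(8, 16)]
    state_dicts = [torch.load(p, map_location="cpu")["model"] for p in ckpt_paths]

    avg_state = {}
    for key in state_dicts[0]:
        avg_state[key] = sum(sd[key].float() for sd in state_dicts) / len(state_dicts)

    # Counting parameters as reported above (75,734,561 for this model):
    # num_params = sum(p.numel() for p in model.parameters())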
2022-12-08 23:52:26,212 INFO [asr_datamodule.py:381] About to get AliMeeting IHM eval cuts
2022-12-08 23:52:26,214 INFO [asr_datamodule.py:402] About to get AliMeeting IHM test cuts
2022-12-08 23:52:26,216 INFO [asr_datamodule.py:387] About to get AliMeeting SDM eval cuts
2022-12-08 23:52:26,217 INFO [asr_datamodule.py:408] About to get AliMeeting SDM test cuts
2022-12-08 23:52:26,219 INFO [asr_datamodule.py:396] About to get AliMeeting GSS-enhanced eval cuts
2022-12-08 23:52:26,221 INFO [asr_datamodule.py:417] About to get AliMeeting GSS-enhanced test cuts
2022-12-08 23:52:27,975 INFO [decode.py:687] Decoding eval_ihm
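This run decodes with decoding_method='greedy_search' and max_sym_per_frame=1, i.e. frame-synchronous greedy decoding that emits at most one non-blank symbol per encoder frame. The sketch below illustrates the general transducer greedy-search loop; the decoder and joiner callables are hypothetical stand-ins, not icefall's exact API.

    import torch

    def greedy_search(encoder_out: torch.Tensor, decoder, joiner,
                      blank_id: int = 0, context_size: int = 2,
                      max_sym_per_frame: int = 1) -> list:
        """Generic RNN-T greedy search sketch (hypothetical decoder/joiner API).

        encoder_out: (T, C) encoder frames for a single utterance.
        """
        hyp = [blank_id] * context_size                          # context seeded with blanks
        dec_out = decoder(torch.tensor([hyp[-context_size:]]))   # (1, D)
        for t in range(encoder_out.size(0)):
            for _ in range(max_sym_per_frame):
                logits = joiner(encoder_out[t : t + 1], dec_out)  # (1, vocab)
                y = int(logits.argmax(dim=-1))
                if y == blank_id:
                    break                                         # advance to the next frame
                hyp.append(y)
                dec_out = decoder(torch.tensor([hyp[-context_size:]]))
        return hyp[context_size:]                                 # strip the blank context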
2022-12-08 23:52:29,438 INFO [decode.py:463] batch 0/?, cuts processed until now is 58
2022-12-08 23:52:52,862 INFO [decode.py:479] The transcripts are stored in pruned_transducer_stateless7/exp/v1/greedy_search/recogs-eval_ihm-greedy_search-epoch-15-avg-8-context-2-max-sym-per-frame-1.txt
2022-12-08 23:52:52,958 INFO [utils.py:536] [eval_ihm-greedy_search] %WER 10.13% [8216 / 81111, 831 ins, 2185 del, 5200 sub ]
2022-12-08 23:52:53,196 INFO [decode.py:492] Wrote detailed error stats to pruned_transducer_stateless7/exp/v1/greedy_search/errs-eval_ihm-greedy_search-epoch-15-avg-8-context-2-max-sym-per-frame-1.txt
2022-12-08 23:52:53,197 INFO [decode.py:508]
For eval_ihm, WER of different settings are:
greedy_search 10.13 best for eval_ihm
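The %WER lines follow the usual convention: errors = insertions + deletions + substitutions over the number of reference tokens (characters here, given lang_char). A quick check of the eval_ihm figure above:

    ins, dels, subs, ref = 831, 2185, 5200, 81111
    wer = 100 * (ins + dels + subs) / ref
    print(f"{wer:.2f}%")  # 10.13%, matching the [eval_ihm-greedy_search] line above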
2022-12-08 23:52:53,197 INFO [decode.py:687] Decoding test_ihm
2022-12-08 23:52:54,874 INFO [decode.py:463] batch 0/?, cuts processed until now is 49
2022-12-08 23:53:30,263 INFO [zipformer.py:1414] attn_weights_entropy = tensor([4.5696, 4.6696, 4.8707, 4.0575, 4.6745, 4.9781, 2.0707, 4.4098],
device='cuda:0'), covar=tensor([0.0117, 0.0196, 0.0221, 0.0396, 0.0183, 0.0096, 0.3094, 0.0230],
device='cuda:0'), in_proj_covar=tensor([0.0143, 0.0153, 0.0125, 0.0123, 0.0184, 0.0118, 0.0149, 0.0172],
device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003],
device='cuda:0')
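The [zipformer.py:1414] blocks interleaved with the decoding progress are diagnostic dumps of attention-weight entropy statistics emitted during the forward pass; they are informational and do not change the decoding output. As a point of reference only, the entropy of a softmax attention distribution can be computed as below; this is the generic formula, not necessarily how zipformer.py derives the numbers shown here.

    import torch

    def attention_entropy(attn_weights: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
        """Entropy of attention distributions, one value per (head, query) pair.

        attn_weights: (..., num_keys) softmax weights summing to 1 over the last dim.
        Generic formula: H = -sum_k p_k * log(p_k).
        """
        p = attn_weights.clamp_min(eps)
        return -(p * p.log()).sum(dim=-1)

    # Example: a uniform distribution over 4 keys has entropy log(4) ≈ 1.386.
    uniform = torch.full((1, 4), 0.25)
    print(attention_entropy(uniform))  # tensor([1.3863])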
2022-12-08 23:53:49,853 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.0649, 1.6505, 4.0800, 3.8988, 3.9109, 4.0868, 3.2733, 4.1840],
device='cuda:0'), covar=tensor([0.1251, 0.1291, 0.0081, 0.0142, 0.0145, 0.0086, 0.0117, 0.0077],
device='cuda:0'), in_proj_covar=tensor([0.0139, 0.0150, 0.0113, 0.0156, 0.0131, 0.0125, 0.0105, 0.0106],
device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002],
device='cuda:0')
2022-12-08 23:53:50,687 INFO [decode.py:463] batch 100/?, cuts processed until now is 13420
2022-12-08 23:53:55,847 INFO [zipformer.py:1414] attn_weights_entropy = tensor([4.9753, 2.8328, 4.9959, 2.8165, 4.7903, 2.2568, 3.6416, 4.6590],
device='cuda:0'), covar=tensor([0.0528, 0.5202, 0.0432, 1.3318, 0.0421, 0.5119, 0.1614, 0.0291],
device='cuda:0'), in_proj_covar=tensor([0.0221, 0.0205, 0.0174, 0.0283, 0.0196, 0.0207, 0.0197, 0.0178],
device='cuda:0'), out_proj_covar=tensor([0.0004, 0.0004, 0.0003, 0.0005, 0.0004, 0.0004, 0.0004, 0.0004],
device='cuda:0')
2022-12-08 23:53:58,036 INFO [decode.py:479] The transcripts are stored in pruned_transducer_stateless7/exp/v1/greedy_search/recogs-test_ihm-greedy_search-epoch-15-avg-8-context-2-max-sym-per-frame-1.txt
2022-12-08 23:53:58,352 INFO [utils.py:536] [test_ihm-greedy_search] %WER 12.21% [25615 / 209845, 2007 ins, 7895 del, 15713 sub ]
2022-12-08 23:53:58,963 INFO [decode.py:492] Wrote detailed error stats to pruned_transducer_stateless7/exp/v1/greedy_search/errs-test_ihm-greedy_search-epoch-15-avg-8-context-2-max-sym-per-frame-1.txt
2022-12-08 23:53:58,964 INFO [decode.py:508]
For test_ihm, WER of different settings are:
greedy_search 12.21 best for test_ihm
2022-12-08 23:53:58,964 INFO [decode.py:687] Decoding eval_sdm
2022-12-08 23:54:00,431 INFO [decode.py:463] batch 0/?, cuts processed until now is 58
2022-12-08 23:54:09,011 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.9497, 2.8796, 3.7148, 2.5176, 2.4075, 2.9990, 1.6958, 2.9434],
device='cuda:0'), covar=tensor([0.1004, 0.0976, 0.0427, 0.2647, 0.2157, 0.0924, 0.3887, 0.0774],
device='cuda:0'), in_proj_covar=tensor([0.0070, 0.0085, 0.0077, 0.0085, 0.0104, 0.0072, 0.0116, 0.0077],
device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003],
device='cuda:0')
2022-12-08 23:54:21,106 INFO [zipformer.py:1414] attn_weights_entropy = tensor([4.0325, 3.8665, 3.9421, 3.9902, 3.6039, 3.2852, 4.1056, 3.8995],
device='cuda:0'), covar=tensor([0.0388, 0.0286, 0.0419, 0.0433, 0.0408, 0.0515, 0.0381, 0.0510],
device='cuda:0'), in_proj_covar=tensor([0.0122, 0.0118, 0.0125, 0.0134, 0.0129, 0.0102, 0.0145, 0.0125],
device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002],
device='cuda:0')
2022-12-08 23:54:23,976 INFO [decode.py:479] The transcripts are stored in pruned_transducer_stateless7/exp/v1/greedy_search/recogs-eval_sdm-greedy_search-epoch-15-avg-8-context-2-max-sym-per-frame-1.txt
2022-12-08 23:54:24,077 INFO [utils.py:536] [eval_sdm-greedy_search] %WER 23.70% [19222 / 81111, 1683 ins, 6073 del, 11466 sub ]
2022-12-08 23:54:24,332 INFO [decode.py:492] Wrote detailed error stats to pruned_transducer_stateless7/exp/v1/greedy_search/errs-eval_sdm-greedy_search-epoch-15-avg-8-context-2-max-sym-per-frame-1.txt
2022-12-08 23:54:24,333 INFO [decode.py:508]
For eval_sdm, WER of different settings are:
greedy_search 23.7 best for eval_sdm
2022-12-08 23:54:24,333 INFO [decode.py:687] Decoding test_sdm
2022-12-08 23:54:26,054 INFO [decode.py:463] batch 0/?, cuts processed until now is 49
2022-12-08 23:54:27,800 INFO [zipformer.py:1414] attn_weights_entropy = tensor([5.8932, 5.8649, 5.7222, 5.8181, 5.3845, 5.4143, 5.9574, 5.6303],
device='cuda:0'), covar=tensor([0.0422, 0.0200, 0.0364, 0.0455, 0.0434, 0.0144, 0.0327, 0.0613],
device='cuda:0'), in_proj_covar=tensor([0.0122, 0.0118, 0.0125, 0.0134, 0.0129, 0.0102, 0.0145, 0.0125],
device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002],
device='cuda:0')
2022-12-08 23:54:31,870 INFO [zipformer.py:1414] attn_weights_entropy = tensor([5.0367, 4.5032, 4.6101, 4.9917, 4.5509, 4.6262, 4.8931, 4.4161],
device='cuda:0'), covar=tensor([0.0253, 0.1510, 0.0270, 0.0302, 0.0832, 0.0292, 0.0559, 0.0433],
device='cuda:0'), in_proj_covar=tensor([0.0149, 0.0248, 0.0167, 0.0163, 0.0160, 0.0127, 0.0252, 0.0145],
device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0003, 0.0002],
device='cuda:0')
2022-12-08 23:54:36,104 INFO [zipformer.py:1414] attn_weights_entropy = tensor([3.5049, 3.0396, 3.1109, 2.3003, 2.9128, 3.3196, 3.3253, 2.8653],
device='cuda:0'), covar=tensor([0.0876, 0.2228, 0.1388, 0.2084, 0.1478, 0.0834, 0.1264, 0.1702],
device='cuda:0'), in_proj_covar=tensor([0.0124, 0.0170, 0.0124, 0.0117, 0.0121, 0.0128, 0.0106, 0.0128],
device='cuda:0'), out_proj_covar=tensor([0.0005, 0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0005],
device='cuda:0')
2022-12-08 23:54:47,951 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.9691, 2.6751, 2.9312, 3.1152, 2.8372, 2.3588, 2.9794, 3.0298],
device='cuda:0'), covar=tensor([0.0107, 0.0177, 0.0195, 0.0116, 0.0141, 0.0331, 0.0139, 0.0184],
device='cuda:0'), in_proj_covar=tensor([0.0258, 0.0233, 0.0347, 0.0293, 0.0235, 0.0280, 0.0265, 0.0260],
device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0003, 0.0003, 0.0002, 0.0003, 0.0003, 0.0002],
device='cuda:0')
2022-12-08 23:55:21,831 INFO [decode.py:463] batch 100/?, cuts processed until now is 13420
2022-12-08 23:55:29,332 INFO [decode.py:479] The transcripts are stored in pruned_transducer_stateless7/exp/v1/greedy_search/recogs-test_sdm-greedy_search-epoch-15-avg-8-context-2-max-sym-per-frame-1.txt
2022-12-08 23:55:29,621 INFO [utils.py:536] [test_sdm-greedy_search] %WER 26.41% [55414 / 209845, 4503 ins, 19379 del, 31532 sub ]
2022-12-08 23:55:30,282 INFO [decode.py:492] Wrote detailed error stats to pruned_transducer_stateless7/exp/v1/greedy_search/errs-test_sdm-greedy_search-epoch-15-avg-8-context-2-max-sym-per-frame-1.txt
2022-12-08 23:55:30,283 INFO [decode.py:508]
For test_sdm, WER of different settings are:
greedy_search 26.41 best for test_sdm
2022-12-08 23:55:30,283 INFO [decode.py:687] Decoding eval_gss
2022-12-08 23:55:31,773 INFO [decode.py:463] batch 0/?, cuts processed until now is 58
2022-12-08 23:55:34,158 INFO [zipformer.py:1414] attn_weights_entropy = tensor([3.2471, 3.7230, 3.3454, 2.8730, 2.7644, 3.7741, 3.4475, 1.8443],
device='cuda:0'), covar=tensor([0.3240, 0.0695, 0.1965, 0.1750, 0.1144, 0.0470, 0.1355, 0.3201],
device='cuda:0'), in_proj_covar=tensor([0.0138, 0.0066, 0.0052, 0.0054, 0.0082, 0.0064, 0.0085, 0.0091],
device='cuda:0'), out_proj_covar=tensor([0.0007, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005],
device='cuda:0')
2022-12-08 23:55:37,669 INFO [zipformer.py:1414] attn_weights_entropy = tensor([3.7584, 3.8680, 4.3876, 3.2812, 2.7117, 3.5261, 2.1305, 3.5684],
device='cuda:0'), covar=tensor([0.0716, 0.0548, 0.0428, 0.2018, 0.2384, 0.0745, 0.3972, 0.0912],
device='cuda:0'), in_proj_covar=tensor([0.0070, 0.0085, 0.0077, 0.0085, 0.0104, 0.0072, 0.0116, 0.0077],
device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003],
device='cuda:0')
2022-12-08 23:55:41,813 INFO [zipformer.py:1414] attn_weights_entropy = tensor([4.2555, 3.7707, 3.0407, 4.4182, 4.2751, 4.2557, 3.6943, 2.9354],
device='cuda:0'), covar=tensor([0.0750, 0.1279, 0.4109, 0.0665, 0.0696, 0.1311, 0.1328, 0.4549],
device='cuda:0'), in_proj_covar=tensor([0.0237, 0.0272, 0.0252, 0.0220, 0.0279, 0.0268, 0.0232, 0.0237],
device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003],
device='cuda:0')
2022-12-08 23:55:55,409 INFO [decode.py:479] The transcripts are stored in pruned_transducer_stateless7/exp/v1/greedy_search/recogs-eval_gss-greedy_search-epoch-15-avg-8-context-2-max-sym-per-frame-1.txt
2022-12-08 23:55:55,506 INFO [utils.py:536] [eval_gss-greedy_search] %WER 12.24% [9930 / 81111, 915 ins, 2606 del, 6409 sub ]
2022-12-08 23:55:55,743 INFO [decode.py:492] Wrote detailed error stats to pruned_transducer_stateless7/exp/v1/greedy_search/errs-eval_gss-greedy_search-epoch-15-avg-8-context-2-max-sym-per-frame-1.txt
2022-12-08 23:55:55,744 INFO [decode.py:508]
For eval_gss, WER of different settings are:
greedy_search 12.24 best for eval_gss
2022-12-08 23:55:55,744 INFO [decode.py:687] Decoding test_gss
2022-12-08 23:55:57,430 INFO [decode.py:463] batch 0/?, cuts processed until now is 49
2022-12-08 23:56:44,408 INFO [zipformer.py:1414] attn_weights_entropy = tensor([2.0077, 1.3975, 3.4059, 2.9515, 3.0863, 3.3663, 2.8472, 3.3858],
device='cuda:0'), covar=tensor([0.0407, 0.0664, 0.0061, 0.0240, 0.0239, 0.0078, 0.0193, 0.0094],
device='cuda:0'), in_proj_covar=tensor([0.0139, 0.0150, 0.0113, 0.0156, 0.0131, 0.0125, 0.0105, 0.0106],
device='cuda:0'), out_proj_covar=tensor([0.0003, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003, 0.0002],
device='cuda:0')
2022-12-08 23:56:53,454 INFO [decode.py:463] batch 100/?, cuts processed until now is 13420
2022-12-08 23:57:00,993 INFO [decode.py:479] The transcripts are stored in pruned_transducer_stateless7/exp/v1/greedy_search/recogs-test_gss-greedy_search-epoch-15-avg-8-context-2-max-sym-per-frame-1.txt
2022-12-08 23:57:01,279 INFO [utils.py:536] [test_gss-greedy_search] %WER 14.99% [31450 / 209845, 2293 ins, 9720 del, 19437 sub ]
2022-12-08 23:57:01,910 INFO [decode.py:492] Wrote detailed error stats to pruned_transducer_stateless7/exp/v1/greedy_search/errs-test_gss-greedy_search-epoch-15-avg-8-context-2-max-sym-per-frame-1.txt
2022-12-08 23:57:01,911 INFO [decode.py:508]
For test_gss, WER of different settings are:
greedy_search 14.99 best for test_gss
2022-12-08 23:57:01,912 INFO [decode.py:703] Done!
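For convenience, the greedy_search results reported throughout this log, collected in one place and sanity-checked against the error counts (a small Python sketch; figures copied verbatim from the %WER lines above):

    # Greedy-search WERs from this log: split -> (%WER, errors, reference tokens).
    results = {
        "eval_ihm": (10.13,  8216,  81111),
        "test_ihm": (12.21, 25615, 209845),
        "eval_sdm": (23.70, 19222,  81111),
        "test_sdm": (26.41, 55414, 209845),
        "eval_gss": (12.24,  9930,  81111),
        "test_gss": (14.99, 31450, 209845),
    }
    for name, (wer, errs, ref) in results.items():
        # Each reported %WER should equal 100 * errors / reference tokens (to 2 decimals).
        assert abs(100 * errs / ref - wer) < 0.005, name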