2024-03-15 19:43:39,785 INFO [train_char.py:832] (0/2) Training started
2024-03-15 19:43:39,793 INFO [train_char.py:842] (0/2) Device: cuda:0
2024-03-15 19:43:39,820 INFO [lexicon.py:168] (0/2) Loading pre-compiled data/zh-HK/lang_char/Linv.pt
2024-03-15 19:43:39,828 INFO [train_char.py:856] (0/2) {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.24.4', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '2989b0b1186fa6022932804f5b39fbb2781ebf42', 'k2-git-date': 'Fri Nov 24 11:34:10 2023', 'lhotse-version': '1.22.0.dev+git.d8ed1bbb.dirty', 'torch-version': '1.11.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.9', 'icefall-git-branch': 'dev/cv-zipformer', 'icefall-git-sha1': '6993183d-clean', 'icefall-git-date': 'Fri Mar 15 19:31:35 2024', 'icefall-path': '/star-home/jinzengrui/lib/miniconda3/envs/dev39/lib/python3.9/site-packages/icefall-1.0-py3.9.egg', 'k2-path': '/star-home/jinzengrui/lib/miniconda3/envs/dev39/lib/python3.9/site-packages/k2-1.24.4.dev20231207+cuda10.2.torch1.11.0-py3.9-linux-x86_64.egg/k2/__init__.py', 'lhotse-path': '/star-home/jinzengrui/lib/miniconda3/envs/dev39/lib/python3.9/site-packages/lhotse-1.22.0.dev0+git.d8ed1bbb.dirty-py3.9.egg/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-1207150844-f49d8c4f4-c49d5', 'IP address': '10.177.22.19'}, 'world_size': 2, 'master_port': 12354, 'tensorboard': True, 'num_epochs': 50, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('zipformer/exp_val'), 'lang_dir': 'data/zh-HK/lang_char', 'use_validated_set': True, 'use_invalidated_set': False, 'base_lr': 0.045, 'lr_batches': 7500, 'lr_epochs': 3.5, 'ref_duration': 600, 'context_size': 1, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'ctc_loss_scale': 0.2, 'seed': 42, 'print_diagnostics': False, 'inf_check': False, 'save_every_n': 4000, 'keep_last_k': 30, 'average_period': 200, 'use_fp16': True, 'num_encoder_layers': '2,2,3,4,3,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,1024,1536,1024,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,384,512,384,256', 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,256,256,256,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'causal': False, 'chunk_size': '16,32,64,-1', 'left_context_frames': '64,128,256,-1', 'use_transducer': True, 'use_ctc': False, 'language': 'zh-HK', 'cv_manifest_dir': PosixPath('data/zh-HK/fbank'), 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 1000, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'blank_id': 0, 'vocab_size': 3904}
2024-03-15 19:43:39,828 INFO [train_char.py:858] (0/2) About to create model
2024-03-15 19:43:40,447 INFO [train_char.py:862] (0/2) Number of model parameters: 72526519
2024-03-15 19:43:45,508 INFO [train_char.py:877] (0/2) Using DDP
2024-03-15 19:43:45,791 INFO [asr_datamodule.py:414] (0/2) About to get validated cuts (with dev/test removed)
2024-03-15 19:43:45,812 INFO [asr_datamodule.py:229] (0/2) Enable MUSAN
2024-03-15 19:43:45,812 INFO [asr_datamodule.py:230] (0/2) About to get Musan cuts
2024-03-15 19:43:48,166 INFO [asr_datamodule.py:254] (0/2) Enable SpecAugment
2024-03-15 19:43:48,166 INFO [asr_datamodule.py:255] (0/2) Time warp factor: 80
2024-03-15 19:43:48,166 INFO [asr_datamodule.py:265] (0/2) Num frame mask: 10
2024-03-15 19:43:48,166 INFO [asr_datamodule.py:278] (0/2) About to create train dataset
2024-03-15 19:43:48,167 INFO [asr_datamodule.py:305] (0/2) Using DynamicBucketingSampler.
2024-03-15 19:43:48,997 INFO [asr_datamodule.py:322] (0/2) About to create train dataloader
2024-03-15 19:43:48,997 INFO [asr_datamodule.py:430] (0/2) About to get dev cuts
2024-03-15 19:43:48,999 INFO [asr_datamodule.py:353] (0/2) About to create dev dataset
2024-03-15 19:43:49,368 INFO [asr_datamodule.py:370] (0/2) About to create dev dataloader
2024-03-15 19:43:49,369 INFO [train_char.py:779] (0/2) Sanity check -- see if any of the batches in epoch 1 would cause OOM.
2024-03-15 19:45:08,628 INFO [train_char.py:807] (0/2) Maximum memory allocated so far is 22690MB
2024-03-15 19:45:10,446 INFO [train_char.py:807] (0/2) Maximum memory allocated so far is 22878MB
2024-03-15 19:45:12,753 INFO [train_char.py:807] (0/2) Maximum memory allocated so far is 22878MB
2024-03-15 19:45:14,881 INFO [train_char.py:807] (0/2) Maximum memory allocated so far is 22878MB
2024-03-15 19:45:17,492 INFO [train_char.py:807] (0/2) Maximum memory allocated so far is 22878MB
2024-03-15 19:45:20,151 INFO [train_char.py:807] (0/2) Maximum memory allocated so far is 22878MB
2024-03-15 19:46:04,276 INFO [train_char.py:689] (0/2) Epoch 1, batch 0, loss[loss=9.642, simple_loss=8.759, pruned_loss=8.808, over 24278.00 frames. ], tot_loss[loss=9.642, simple_loss=8.759, pruned_loss=8.808, over 24278.00 frames. ], batch size: 116, lr: 2.25e-02, grad_scale: 1.0
2024-03-15 19:46:04,277 INFO [train_char.py:712] (0/2) Computing validation loss
2024-03-15 19:46:17,626 INFO [train_char.py:721] (0/2) Epoch 1, validation: loss=9.614, simple_loss=8.743, pruned_loss=8.698, over 657665.00 frames.
2024-03-15 19:46:17,627 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 23565MB
2024-03-15 19:46:22,266 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=0.0, ans=0.5
2024-03-15 19:46:31,464 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=384, metric=9.99 vs. limit=7.5125
2024-03-15 19:46:32,471 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.balancer1.prob, batch_count=33.333333333333336, ans=0.4984375
2024-03-15 19:46:36,387 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.043e+03 5.585e+03 5.856e+03 6.424e+03 6.791e+03, threshold=2.342e+04, percent-clipped=0.0
2024-03-15 19:46:51,381 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 1.692e+03 4.037e+03 5.585e+03 6.394e+03 6.867e+03, threshold=2.234e+04, percent-clipped=0.0
2024-03-15 19:46:55,164 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.whiten, num_groups=1, num_channels=512, metric=161.39 vs. limit=4.026666666666666
2024-03-15 19:46:56,478 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=222.38 vs. limit=7.525
2024-03-15 19:47:03,778 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=256, metric=223.09 vs. limit=7.525
2024-03-15 19:47:18,862 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=377.50 vs. limit=5.05
2024-03-15 19:47:24,911 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.17 vs. limit=3.02
2024-03-15 19:47:27,252 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.372e+02 2.141e+03 3.502e+03 6.193e+03 2.448e+04, threshold=1.401e+04, percent-clipped=2.5
2024-03-15 19:47:30,608 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=133.33333333333334, ans=0.48333333333333334
2024-03-15 19:47:37,854 INFO [train_char.py:689] (0/2) Epoch 1, batch 50, loss[loss=0.9447, simple_loss=0.8449, pruned_loss=0.8996, over 24139.00 frames. ], tot_loss[loss=3.352, simple_loss=3.098, pruned_loss=2.501, over 1080800.09 frames. ], batch size: 188, lr: 2.48e-02, grad_scale: 0.25
2024-03-15 19:47:45,069 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=223.81 vs. limit=7.5625
2024-03-15 19:47:50,346 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=166.66666666666666, ans=0.19375
2024-03-15 19:47:53,444 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff3_skip_rate, batch_count=200.0, ans=0.0955
2024-03-15 19:48:03,386 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=384, metric=50.01 vs. limit=7.65
2024-03-15 19:48:13,069 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass_mid.scale_min, batch_count=233.33333333333334, ans=0.8918333333333334
2024-03-15 19:48:15,073 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=512, metric=385.97 vs. limit=7.675
2024-03-15 19:48:17,969 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=151.00 vs. limit=7.5875
2024-03-15 19:48:27,955 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer2.prob, batch_count=266.6666666666667, ans=0.4875
2024-03-15 19:48:30,863 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module2.balancer2.prob, batch_count=266.6666666666667, ans=0.4875
2024-03-15 19:48:36,909 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=20.64 vs. limit=4.12
2024-03-15 19:48:44,419 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=384, metric=71.93 vs. limit=7.6125
2024-03-15 19:48:45,449 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.ff3_skip_rate, batch_count=300.0, ans=0.09325
2024-03-15 19:48:52,519 INFO [train_char.py:689] (0/2) Epoch 1, batch 100, loss[loss=0.6629, simple_loss=0.5632, pruned_loss=0.7827, over 24264.00 frames. ], tot_loss[loss=1.906, simple_loss=1.746, pruned_loss=1.512, over 1911753.62 frames. ], batch size: 116, lr: 2.70e-02, grad_scale: 0.5
2024-03-15 19:48:54,896 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=254.03 vs. limit=7.625
2024-03-15 19:48:56,896 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 3.613e+01 5.979e+01 1.394e+02 2.944e+03 2.448e+04, threshold=2.787e+02, percent-clipped=0.0
2024-03-15 19:49:06,484 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=176.85 vs. limit=7.6375
2024-03-15 19:49:09,733 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.self_attn2.whiten.whitening_limit, batch_count=366.6666666666667, ans=7.775
2024-03-15 19:49:12,556 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=9.07 vs. limit=5.091666666666667
2024-03-15 19:49:22,718 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=384, metric=70.96 vs. limit=7.65
2024-03-15 19:49:25,637 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.self_attn2.whiten, num_groups=1, num_channels=512, metric=172.38 vs. limit=7.8
2024-03-15 19:49:28,916 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=35.57 vs. limit=7.65
2024-03-15 19:49:43,355 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=384, metric=52.48 vs. limit=7.825
2024-03-15 19:49:50,851 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=155.69 vs. limit=5.216666666666667
2024-03-15 19:50:01,478 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=330.00 vs. limit=7.675
2024-03-15 19:50:02,479 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=466.6666666666667, ans=0.478125
2024-03-15 19:50:05,488 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module2.balancer2.min_positive, batch_count=466.6666666666667, ans=0.09708333333333334
2024-03-15 19:50:13,887 INFO [train_char.py:689] (0/2) Epoch 1, batch 150, loss[loss=0.5592, simple_loss=0.4615, pruned_loss=0.6854, over 24311.00 frames. ], tot_loss[loss=1.381, simple_loss=1.249, pruned_loss=1.179, over 2553251.38 frames. ], batch size: 116, lr: 2.93e-02, grad_scale: 0.5
2024-03-15 19:50:15,531 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff3_skip_rate, batch_count=500.0, ans=0.08875000000000001
2024-03-15 19:50:21,033 INFO [scaling.py:1023] (0/2) Whitening: name=encoder_embed.out_whiten, num_groups=1, num_channels=192, metric=185.26 vs. limit=4.1
2024-03-15 19:50:21,531 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.2.self_attn_weights, loss-sum=0.000e+00
2024-03-15 19:50:25,762 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.ff3_skip_rate, batch_count=500.0, ans=0.08875000000000001
2024-03-15 19:50:30,472 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=5.67 vs. limit=5.133333333333334
2024-03-15 19:50:30,597 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=31.11 vs. limit=5.266666666666667
2024-03-15 19:50:30,909 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=15.55 vs. limit=7.9
2024-03-15 19:50:58,400 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=600.0, ans=0.425
2024-03-15 19:51:00,528 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=384, metric=40.26 vs. limit=7.95
2024-03-15 19:51:02,138 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=192, metric=175.51 vs. limit=7.725
2024-03-15 19:51:05,028 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=25.93 vs. limit=7.725
2024-03-15 19:51:12,423 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.whiten, num_groups=1, num_channels=384, metric=8.47 vs. limit=4.253333333333333
2024-03-15 19:51:24,848 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer1.prob, batch_count=633.3333333333334, ans=0.4703125
2024-03-15 19:51:28,230 INFO [train_char.py:689] (0/2) Epoch 1, batch 200, loss[loss=0.6562, simple_loss=0.5591, pruned_loss=0.6572, over 24064.00 frames. ], tot_loss[loss=1.098, simple_loss=0.982, pruned_loss=0.9763, over 3051593.61 frames. ], batch size: 236, lr: 3.15e-02, grad_scale: 1.0
2024-03-15 19:51:32,584 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 4.353e+01 5.721e+01 6.735e+01 8.044e+01 1.996e+02, threshold=1.347e+02, percent-clipped=0.0
2024-03-15 19:51:35,021 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=174.50 vs. limit=5.333333333333333
2024-03-15 19:51:37,068 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=666.6666666666666, ans=0.46875
2024-03-15 19:51:42,082 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=512, metric=88.72 vs. limit=8.025
2024-03-15 19:51:53,908 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=133.75 vs. limit=5.35
2024-03-15 19:51:55,032 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.conv_module2.whiten, num_groups=1, num_channels=384, metric=18.87 vs. limit=7.7625
2024-03-15 19:51:57,637 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=733.3333333333334, ans=0.465625
2024-03-15 19:52:00,623 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.2.self_attn_weights, loss-sum=1.283e-01
2024-03-15 19:52:05,266 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=39.29 vs. limit=7.775
2024-03-15 19:52:18,200 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=56.62 vs. limit=7.7875
2024-03-15 19:52:29,962 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=6.36 vs. limit=5.2
2024-03-15 19:52:40,985 INFO [train_char.py:689] (0/2) Epoch 1, batch 250, loss[loss=0.6306, simple_loss=0.5325, pruned_loss=0.6177, over 24214.00 frames. ], tot_loss[loss=0.9211, simple_loss=0.8151, pruned_loss=0.8394, over 3445669.29 frames. ], batch size: 224, lr: 3.38e-02, grad_scale: 1.0
2024-03-15 19:52:43,904 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=833.3333333333334, ans=0.4609375
2024-03-15 19:52:45,294 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer1.prob, batch_count=833.3333333333334, ans=0.4609375
2024-03-15 19:52:53,124 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=85.26 vs. limit=7.8125
2024-03-15 19:52:57,077 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.whiten, num_groups=1, num_channels=512, metric=7.84 vs. limit=4.346666666666667
2024-03-15 19:52:58,397 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=384, metric=34.81 vs. limit=7.825
2024-03-15 19:52:58,647 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=46.60 vs. limit=5.433333333333334
2024-03-15 19:53:01,274 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=9.63 vs. limit=5.433333333333334
2024-03-15 19:53:10,607 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer1.max_abs, batch_count=866.6666666666666, ans=5.541666666666667
2024-03-15 19:53:23,894 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=182.28 vs. limit=7.8375
2024-03-15 19:53:24,978 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.balancer_na.min_abs, batch_count=900.0, ans=0.0076
2024-03-15 19:53:29,939 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.whiten1.whitening_limit, batch_count=933.3333333333334, ans=5.233333333333333
2024-03-15 19:53:33,728 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=933.3333333333334, ans=0.45625
2024-03-15 19:53:55,628 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=33.73 vs. limit=7.8625
2024-03-15 19:53:57,983 INFO [train_char.py:689] (0/2) Epoch 1, batch 300, loss[loss=0.5796, simple_loss=0.4857, pruned_loss=0.5544, over 24080.00 frames. ], tot_loss[loss=0.8017, simple_loss=0.7027, pruned_loss=0.7406, over 3754083.38 frames. ], batch size: 223, lr: 3.60e-02, grad_scale: 2.0
2024-03-15 19:53:58,221 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=1000.0, ans=0.1625
2024-03-15 19:54:02,222 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.426e+01 8.503e+01 1.019e+02 1.204e+02 1.857e+02, threshold=2.037e+02, percent-clipped=13.0
2024-03-15 19:54:05,831 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.conv_module2.whiten, num_groups=1, num_channels=512, metric=41.88 vs. limit=7.875
2024-03-15 19:54:06,859 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer1.prob, batch_count=1000.0, ans=0.453125
2024-03-15 19:54:11,224 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer2.min_positive, batch_count=1033.3333333333333, ans=0.09354166666666668
2024-03-15 19:54:18,959 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.whiten, num_groups=1, num_channels=512, metric=6.61 vs. limit=4.413333333333333
2024-03-15 19:54:20,733 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=116.19 vs. limit=7.8875
2024-03-15 19:54:22,103 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=192, metric=33.24 vs. limit=5.516666666666667
2024-03-15 19:54:24,858 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=29.60 vs. limit=7.8875
2024-03-15 19:54:30,515 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.nonlin_attention.whiten2, num_groups=1, num_channels=512, metric=14.05 vs. limit=5.533333333333333
2024-03-15 19:54:34,752 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=35.16 vs. limit=7.9
2024-03-15 19:54:34,960 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.self_attn2.whiten, num_groups=1, num_channels=512, metric=34.53 vs. limit=8.3
2024-03-15 19:54:40,135 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=1100.0, ans=0.289
2024-03-15 19:54:40,938 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=256, metric=9.63 vs. limit=8.325
2024-03-15 19:54:45,054 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.conv_module2.whiten, num_groups=1, num_channels=512, metric=45.77 vs. limit=7.9125
2024-03-15 19:54:52,186 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.conv_module2.whiten, num_groups=1, num_channels=512, metric=50.75 vs. limit=7.9125
2024-03-15 19:55:05,453 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=6.80 vs. limit=5.283333333333333
2024-03-15 19:55:08,119 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.self_attn2.whiten, num_groups=1, num_channels=512, metric=32.58 vs. limit=8.35
2024-03-15 19:55:09,819 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=68.61 vs. limit=7.9375
2024-03-15 19:55:10,369 INFO [train_char.py:689] (0/2) Epoch 1, batch 350, loss[loss=0.4761, simple_loss=0.3912, pruned_loss=0.4622, over 24223.00 frames. ], tot_loss[loss=0.7195, simple_loss=0.6252, pruned_loss=0.667, over 3994068.43 frames. ], batch size: 122, lr: 3.83e-02, grad_scale: 2.0
2024-03-15 19:55:25,616 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module1.balancer2.min_abs, batch_count=1166.6666666666667, ans=0.21750000000000003
2024-03-15 19:55:30,047 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.out_combiner.scale_min, batch_count=1200.0, ans=0.858
2024-03-15 19:55:30,563 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.nonlin_attention.whiten2, num_groups=1, num_channels=512, metric=13.60 vs. limit=5.6
2024-03-15 19:55:31,872 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=384, metric=30.89 vs. limit=7.95
2024-03-15 19:55:34,987 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=512, metric=62.08 vs. limit=7.95
2024-03-15 19:55:44,651 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.whiten, num_groups=1, num_channels=384, metric=4.68 vs. limit=4.493333333333333
2024-03-15 19:55:53,751 INFO [scaling.py:1023] (0/2) Whitening: name=encoder_embed.out_whiten, num_groups=1, num_channels=192, metric=5.13 vs. limit=4.246666666666667
2024-03-15 19:55:55,604 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.0.self_attn_weights, loss-sum=1.007e+01
2024-03-15 19:56:02,009 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.whiten, num_groups=1, num_channels=512, metric=6.29 vs. limit=4.506666666666667
2024-03-15 19:56:02,127 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=384, metric=15.17 vs. limit=8.45
2024-03-15 19:56:08,320 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=192, metric=168.43 vs. limit=7.975
2024-03-15 19:56:12,171 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=384, metric=14.08 vs. limit=7.9875
2024-03-15 19:56:19,600 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=352.22 vs. limit=7.9875
2024-03-15 19:56:28,192 INFO [train_char.py:689] (0/2) Epoch 1, batch 400, loss[loss=0.5118, simple_loss=0.4253, pruned_loss=0.4604, over 24106.00 frames. ], tot_loss[loss=0.6632, simple_loss=0.5716, pruned_loss=0.6128, over 4182411.62 frames. ], batch size: 279, lr: 4.05e-02, grad_scale: 4.0
2024-03-15 19:56:32,516 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.902e+01 9.628e+01 1.167e+02 1.405e+02 3.100e+02, threshold=2.334e+02, percent-clipped=3.0
2024-03-15 19:56:36,354 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=23.89 vs. limit=8.0
2024-03-15 19:56:40,377 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=384, metric=17.84 vs. limit=8.5
2024-03-15 19:56:43,311 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.nonlin_attention.whiten2, num_groups=1, num_channels=512, metric=18.23 vs. limit=5.683333333333334
2024-03-15 19:56:45,653 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_skip_rate, batch_count=1366.6666666666667, ans=0.14875
2024-03-15 19:56:50,598 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=118.55 vs. limit=8.0125
2024-03-15 19:57:05,918 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=141.53 vs. limit=8.025
2024-03-15 19:57:07,708 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.self_attn1.whiten, num_groups=1, num_channels=192, metric=11.90 vs. limit=8.55
2024-03-15 19:57:12,603 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.2.self_attn_weights, loss-sum=0.000e+00
2024-03-15 19:57:12,918 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=16.14 vs. limit=8.0375
2024-03-15 19:57:20,723 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=256, metric=10.70 vs. limit=8.0375
2024-03-15 19:57:29,334 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=384, metric=17.07 vs. limit=8.6
2024-03-15 19:57:39,163 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=45.60 vs. limit=8.05
2024-03-15 19:57:42,604 INFO [train_char.py:689] (0/2) Epoch 1, batch 450, loss[loss=0.5598, simple_loss=0.4623, pruned_loss=0.4939, over 24122.00 frames. ], tot_loss[loss=0.6228, simple_loss=0.5328, pruned_loss=0.5709, over 4327303.48 frames. ], batch size: 236, lr: 4.28e-02, grad_scale: 4.0
2024-03-15 19:57:49,975 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=1500.0, ans=0.3125
2024-03-15 19:57:50,638 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=7.25 vs. limit=5.375
2024-03-15 19:58:00,053 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.out_combiner.scale_min, batch_count=1533.3333333333333, ans=0.8463333333333334
2024-03-15 19:58:11,491 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer2.min_abs, batch_count=1566.6666666666667, ans=0.2235
2024-03-15 19:58:22,627 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.balancer2.prob, batch_count=1566.6666666666667, ans=0.4265625
2024-03-15 19:58:28,928 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=6.26 vs. limit=5.4
2024-03-15 19:58:40,569 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=256, metric=12.04 vs. limit=8.725
2024-03-15 19:58:41,584 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=24.71 vs. limit=8.1125
2024-03-15 19:58:54,216 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer1.prob, batch_count=1666.6666666666667, ans=0.421875
2024-03-15 19:58:55,435 INFO [train_char.py:689] (0/2) Epoch 1, batch 500, loss[loss=0.5524, simple_loss=0.4561, pruned_loss=0.4713, over 23996.00 frames. ], tot_loss[loss=0.5944, simple_loss=0.5051, pruned_loss=0.5382, over 4439852.39 frames. ], batch size: 250, lr: 4.49e-02, grad_scale: 8.0
2024-03-15 19:58:59,595 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=230.84 vs. limit=8.125
2024-03-15 19:59:00,226 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.666e+01 9.891e+01 1.166e+02 1.626e+02 2.774e+02, threshold=2.332e+02, percent-clipped=3.0
2024-03-15 19:59:02,177 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.nonlin_attention.balancer.prob, batch_count=1666.6666666666667, ans=0.421875
2024-03-15 19:59:03,927 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=256, metric=12.19 vs. limit=8.125
2024-03-15 19:59:06,543 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-1.pt
2024-03-15 19:59:52,132 INFO [train_char.py:689] (0/2) Epoch 2, batch 0, loss[loss=0.4253, simple_loss=0.3536, pruned_loss=0.3549, over 24135.00 frames. ], tot_loss[loss=0.4253, simple_loss=0.3536, pruned_loss=0.3549, over 24135.00 frames. ], batch size: 362, lr: 4.41e-02, grad_scale: 16.0
2024-03-15 19:59:52,133 INFO [train_char.py:712] (0/2) Computing validation loss
2024-03-15 20:00:04,651 INFO [train_char.py:721] (0/2) Epoch 2, validation: loss=0.4496, simple_loss=0.3723, pruned_loss=0.379, over 657665.00 frames.
2024-03-15 20:00:04,652 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 20:00:08,758 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=15.66 vs. limit=8.13375 2024-03-15 20:00:20,119 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=384, metric=12.65 vs. limit=8.7925 2024-03-15 20:00:20,177 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=384, metric=19.44 vs. limit=8.7925 2024-03-15 20:00:21,690 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=192, metric=89.06 vs. limit=8.14625 2024-03-15 20:00:26,250 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=15.40 vs. limit=8.14625 2024-03-15 20:00:27,610 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=45.76 vs. limit=8.14625 2024-03-15 20:00:33,552 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.self_attn1.whiten.whitening_limit, batch_count=1756.6666666666667, ans=8.817499999999999 2024-03-15 20:00:34,979 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=18.87 vs. limit=8.15875 2024-03-15 20:00:47,519 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=512, metric=30.69 vs. limit=8.817499999999999 2024-03-15 20:00:47,625 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=384, metric=14.52 vs. limit=8.817499999999999 2024-03-15 20:00:52,036 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.whiten.whitening_limit, batch_count=1790.0, ans=4.716 2024-03-15 20:01:01,222 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.self_attn1.whiten, num_groups=1, num_channels=512, metric=27.00 vs. limit=8.8425 2024-03-15 20:01:08,493 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=12.65 vs. limit=8.18375 2024-03-15 20:01:09,998 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=9.93 vs. limit=8.18375 2024-03-15 20:01:11,623 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=384, metric=18.44 vs. limit=8.18375 2024-03-15 20:01:14,845 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=512, metric=25.46 vs. limit=8.8675 2024-03-15 20:01:17,818 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.conv_module2.whiten, num_groups=1, num_channels=512, metric=30.59 vs. 
limit=8.18375 2024-03-15 20:01:19,410 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.self_attn1.whiten.whitening_limit, batch_count=1856.6666666666667, ans=8.8925 2024-03-15 20:01:19,894 INFO [train_char.py:689] (0/2) Epoch 2, batch 50, loss[loss=0.4311, simple_loss=0.3544, pruned_loss=0.3593, over 24370.00 frames. ], tot_loss[loss=0.4563, simple_loss=0.3766, pruned_loss=0.3818, over 1091499.65 frames. ], batch size: 152, lr: 4.41e-02, grad_scale: 16.0 2024-03-15 20:01:34,062 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=7.44 vs. limit=5.945 2024-03-15 20:01:36,772 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module1.balancer1.max_abs, batch_count=1890.0, ans=6.18125 2024-03-15 20:01:43,440 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.min_positive, batch_count=1890.0, ans=0.2311 2024-03-15 20:01:59,287 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=14.35 vs. limit=5.961666666666667 2024-03-15 20:02:01,649 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=1923.3333333333333, ans=0.40984375 2024-03-15 20:02:15,664 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=31.26 vs. limit=5.9783333333333335 2024-03-15 20:02:16,424 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.balancer.min_positive, batch_count=1956.6666666666667, ans=0.23043333333333332 2024-03-15 20:02:18,549 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=109.09 vs. limit=8.23375 2024-03-15 20:02:20,220 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=19.16 vs. limit=8.23375 2024-03-15 20:02:37,220 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.545e+01 1.311e+02 1.573e+02 1.999e+02 7.233e+02, threshold=3.146e+02, percent-clipped=17.0 2024-03-15 20:02:39,145 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=1990.0, ans=0.40671875 2024-03-15 20:02:42,131 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.nonlin_attention.balancer.max_positive, batch_count=2023.3333333333333, ans=0.7702333333333333 2024-03-15 20:02:42,145 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer1.min_positive, batch_count=2023.3333333333333, ans=0.04367708333333334 2024-03-15 20:02:43,180 INFO [train_char.py:689] (0/2) Epoch 2, batch 100, loss[loss=0.5631, simple_loss=0.4657, pruned_loss=0.4501, over 24237.00 frames. ], tot_loss[loss=0.4533, simple_loss=0.3743, pruned_loss=0.3731, over 1910981.22 frames. 
], batch size: 212, lr: 4.41e-02, grad_scale: 16.0 2024-03-15 20:02:44,943 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=2023.3333333333333, ans=0.27976666666666666 2024-03-15 20:03:03,191 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=18.53 vs. limit=8.27125 2024-03-15 20:03:04,378 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=17.35 vs. limit=8.27125 2024-03-15 20:03:06,891 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module2.balancer1.prob, batch_count=2056.6666666666665, ans=0.40359375 2024-03-15 20:03:10,858 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=2090.0, ans=0.40203125 2024-03-15 20:03:14,414 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=16.11 vs. limit=8.28375 2024-03-15 20:03:16,966 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=18.77 vs. limit=8.28375 2024-03-15 20:03:32,773 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=384, metric=13.59 vs. limit=9.092500000000001 2024-03-15 20:03:38,505 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=384, metric=20.26 vs. limit=8.29625 2024-03-15 20:03:40,071 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=7.42 vs. limit=5.539166666666667 2024-03-15 20:03:40,167 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=384, metric=9.78 vs. limit=8.30875 2024-03-15 20:03:43,655 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward3.hidden_balancer.prob, batch_count=2156.6666666666665, ans=0.39890625 2024-03-15 20:03:54,525 INFO [train_char.py:689] (0/2) Epoch 2, batch 150, loss[loss=0.3897, simple_loss=0.3249, pruned_loss=0.2984, over 24306.00 frames. ], tot_loss[loss=0.4512, simple_loss=0.3734, pruned_loss=0.3638, over 2553806.13 frames. ], batch size: 116, lr: 4.40e-02, grad_scale: 16.0 2024-03-15 20:04:13,282 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=2223.3333333333335, ans=0.39578125 2024-03-15 20:04:19,520 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.whiten, num_groups=1, num_channels=512, metric=7.22 vs. limit=4.889333333333333 2024-03-15 20:04:20,472 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.0.self_attn_weights, loss-sum=6.790e+01 2024-03-15 20:04:32,686 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=384, metric=13.79 vs. limit=8.34625 2024-03-15 20:04:36,694 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=13.21 vs. 
limit=8.35875 2024-03-15 20:04:38,174 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=384, metric=22.28 vs. limit=9.2175 2024-03-15 20:04:40,606 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module1.balancer2.prob, batch_count=2290.0, ans=0.39265625 2024-03-15 20:04:40,934 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=7.37 vs. limit=5.5725 2024-03-15 20:05:00,127 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.330e+01 1.356e+02 1.673e+02 2.299e+02 9.080e+02, threshold=3.347e+02, percent-clipped=8.0 2024-03-15 20:05:00,373 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer2.min_positive, batch_count=2323.3333333333335, ans=0.08547916666666668 2024-03-15 20:05:09,554 INFO [train_char.py:689] (0/2) Epoch 2, batch 200, loss[loss=0.5116, simple_loss=0.4284, pruned_loss=0.3801, over 24113.00 frames. ], tot_loss[loss=0.4454, simple_loss=0.37, pruned_loss=0.3505, over 3052463.95 frames. ], batch size: 236, lr: 4.40e-02, grad_scale: 16.0 2024-03-15 20:05:16,951 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=2356.6666666666665, ans=0.046975 2024-03-15 20:05:20,228 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=8.26 vs. limit=5.589166666666666 2024-03-15 20:05:26,164 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=512, metric=5.58 vs. limit=6.195 2024-03-15 20:05:26,309 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=5.77 vs. limit=6.195 2024-03-15 20:05:29,061 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=32.15 vs. limit=8.39625 2024-03-15 20:05:50,090 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=7.22 vs. limit=8.40875 2024-03-15 20:05:58,675 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=15.17 vs. limit=8.42125 2024-03-15 20:05:58,690 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=512, metric=16.74 vs. limit=9.3425 2024-03-15 20:06:15,978 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=16.90 vs. limit=8.43375 2024-03-15 20:06:23,636 INFO [train_char.py:689] (0/2) Epoch 2, batch 250, loss[loss=0.3166, simple_loss=0.2778, pruned_loss=0.2072, over 24185.00 frames. ], tot_loss[loss=0.4287, simple_loss=0.359, pruned_loss=0.3275, over 3442293.27 frames. 
], batch size: 122, lr: 4.40e-02, grad_scale: 16.0 2024-03-15 20:06:25,387 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=2523.3333333333335, ans=0.27476666666666666 2024-03-15 20:06:28,206 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer2.prob, batch_count=2523.3333333333335, ans=0.38171875 2024-03-15 20:06:32,522 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module1.balancer1.prob, batch_count=2523.3333333333335, ans=0.38171875 2024-03-15 20:06:35,838 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=384, metric=16.19 vs. limit=9.3925 2024-03-15 20:06:41,487 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=384, metric=11.38 vs. limit=9.4175 2024-03-15 20:06:45,290 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=2556.6666666666665, ans=0.2744333333333333 2024-03-15 20:06:45,937 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=15.92 vs. limit=8.45875 2024-03-15 20:06:48,298 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=7.52 vs. limit=6.278333333333333 2024-03-15 20:07:00,719 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=2590.0, ans=0.041725 2024-03-15 20:07:09,120 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.attention_skip_rate, batch_count=2623.3333333333335, ans=0.101625 2024-03-15 20:07:16,976 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=9.12 vs. limit=8.48375 2024-03-15 20:07:18,307 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=9.26 vs. limit=8.48375 2024-03-15 20:07:23,585 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.balancer2.prob, batch_count=2656.6666666666665, ans=0.37546875 2024-03-15 20:07:25,657 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=192, metric=12.59 vs. limit=8.49625 2024-03-15 20:07:28,752 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.293e+01 1.310e+02 1.524e+02 2.043e+02 4.110e+02, threshold=3.049e+02, percent-clipped=1.0 2024-03-15 20:07:29,740 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=192, metric=8.95 vs. limit=8.49625 2024-03-15 20:07:34,195 INFO [train_char.py:689] (0/2) Epoch 2, batch 300, loss[loss=0.4135, simple_loss=0.3614, pruned_loss=0.2698, over 24225.00 frames. ], tot_loss[loss=0.4127, simple_loss=0.3491, pruned_loss=0.3045, over 3746438.40 frames. 
], batch size: 212, lr: 4.40e-02, grad_scale: 16.0 2024-03-15 20:07:38,769 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=2690.0, ans=0.039475 2024-03-15 20:07:44,990 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=384, metric=11.16 vs. limit=9.5175 2024-03-15 20:07:51,481 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.conv.5.prob, batch_count=2723.3333333333335, ans=0.37234375 2024-03-15 20:07:58,410 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=12.02 vs. limit=8.52125 2024-03-15 20:08:06,175 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=384, metric=11.13 vs. limit=9.567499999999999 2024-03-15 20:08:12,951 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=11.09 vs. limit=8.53375 2024-03-15 20:08:13,633 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.balancer_na.min_abs, batch_count=2756.6666666666665, ans=0.015026666666666666 2024-03-15 20:08:13,647 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=2756.6666666666665, ans=0.1554166666666667 2024-03-15 20:08:33,395 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.whiten, num_groups=1, num_channels=512, metric=5.25 vs. limit=5.129333333333333 2024-03-15 20:08:42,992 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=11.29 vs. limit=8.55875 2024-03-15 20:08:43,898 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=2823.3333333333335, ans=0.27176666666666666 2024-03-15 20:08:47,730 INFO [train_char.py:689] (0/2) Epoch 2, batch 350, loss[loss=0.3783, simple_loss=0.3359, pruned_loss=0.2355, over 24231.00 frames. ], tot_loss[loss=0.3916, simple_loss=0.3349, pruned_loss=0.279, over 3987319.97 frames. ], batch size: 224, lr: 4.40e-02, grad_scale: 16.0 2024-03-15 20:08:56,488 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.self_attn2.whiten, num_groups=1, num_channels=512, metric=13.13 vs. limit=9.6425 2024-03-15 20:08:59,124 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=9.22 vs. limit=8.57125 2024-03-15 20:09:00,176 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer2.min_abs, batch_count=2890.0, ans=0.24335 2024-03-15 20:09:13,829 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=7.21 vs. limit=5.730833333333333 2024-03-15 20:09:17,516 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=2923.3333333333335, ans=0.36296875 2024-03-15 20:09:19,458 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=9.11 vs. 
limit=8.59625 2024-03-15 20:09:20,172 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.attention_skip_rate, batch_count=2923.3333333333335, ans=0.09037499999999998 2024-03-15 20:09:23,441 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=384, metric=9.77 vs. limit=9.692499999999999 2024-03-15 20:09:28,983 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=12.24 vs. limit=9.7175 2024-03-15 20:09:45,888 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer1.prob, batch_count=2990.0, ans=0.35984375 2024-03-15 20:09:52,603 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 9.804e+01 1.410e+02 1.649e+02 2.083e+02 3.278e+02, threshold=3.297e+02, percent-clipped=3.0 2024-03-15 20:09:56,407 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=11.92 vs. limit=8.62125 2024-03-15 20:09:58,109 INFO [train_char.py:689] (0/2) Epoch 2, batch 400, loss[loss=0.2688, simple_loss=0.2432, pruned_loss=0.1588, over 23970.00 frames. ], tot_loss[loss=0.3712, simple_loss=0.3216, pruned_loss=0.2546, over 4176740.21 frames. ], batch size: 381, lr: 4.40e-02, grad_scale: 32.0 2024-03-15 20:10:13,272 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.1.self_attn_weights, loss-sum=1.543e+02 2024-03-15 20:10:15,797 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=3056.6666666666665, ans=0.031225000000000003 2024-03-15 20:10:28,040 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=10.31 vs. limit=8.65875 2024-03-15 20:10:29,067 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=8.51 vs. limit=8.65875 2024-03-15 20:10:30,010 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward3.hidden_balancer.prob, batch_count=3090.0, ans=0.35515625 2024-03-15 20:10:40,667 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.attention_skip_rate, batch_count=3123.3333333333335, ans=0.08287499999999998 2024-03-15 20:10:45,827 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-15 20:10:49,747 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module1.balancer1.prob, batch_count=3156.6666666666665, ans=0.35203125 2024-03-15 20:10:49,779 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=3156.6666666666665, ans=0.35203125 2024-03-15 20:11:02,388 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.out_combiner.scale_min, batch_count=3156.6666666666665, ans=0.7895166666666666 2024-03-15 20:11:06,219 INFO [train_char.py:689] (0/2) Epoch 2, batch 450, loss[loss=0.2941, simple_loss=0.2765, pruned_loss=0.1577, over 24122.00 frames. ], tot_loss[loss=0.3537, simple_loss=0.3105, pruned_loss=0.2337, over 4323250.97 frames. 
], batch size: 223, lr: 4.39e-02, grad_scale: 16.0 2024-03-15 20:11:06,491 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer1.prob, batch_count=3190.0, ans=0.35046875 2024-03-15 20:11:11,052 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=9.93 vs. limit=8.69625 2024-03-15 20:11:13,731 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=512, metric=6.94 vs. limit=6.595 2024-03-15 20:11:15,787 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=3190.0, ans=0.2681 2024-03-15 20:11:20,971 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=3223.3333333333335, ans=0.027475 2024-03-15 20:11:22,712 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=20.06 vs. limit=6.611666666666666 2024-03-15 20:11:24,119 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=6.00 vs. limit=5.805833333333333 2024-03-15 20:11:50,450 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=3290.0, ans=0.076625 2024-03-15 20:11:58,558 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.self_attn2.whiten, num_groups=1, num_channels=384, metric=10.58 vs. limit=9.9675 2024-03-15 20:12:10,471 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 1.201e+02 1.533e+02 1.846e+02 2.254e+02 4.152e+02, threshold=3.692e+02, percent-clipped=4.0 2024-03-15 20:12:13,984 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=384, metric=10.58 vs. limit=10.0175 2024-03-15 20:12:14,463 INFO [train_char.py:689] (0/2) Epoch 2, batch 500, loss[loss=0.2737, simple_loss=0.2582, pruned_loss=0.1455, over 24091.00 frames. ], tot_loss[loss=0.3382, simple_loss=0.3013, pruned_loss=0.2146, over 4435005.57 frames. ], batch size: 279, lr: 4.39e-02, grad_scale: 16.0 2024-03-15 20:12:18,764 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer1.prob, batch_count=3356.6666666666665, ans=0.34265625 2024-03-15 20:12:23,631 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-2.pt 2024-03-15 20:13:09,344 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_skip_rate, batch_count=3380.0, ans=0.07325000000000001 2024-03-15 20:13:10,343 INFO [train_char.py:689] (0/2) Epoch 3, batch 0, loss[loss=0.2721, simple_loss=0.2599, pruned_loss=0.1402, over 24296.00 frames. ], tot_loss[loss=0.2721, simple_loss=0.2599, pruned_loss=0.1402, over 24296.00 frames. ], batch size: 116, lr: 4.17e-02, grad_scale: 32.0 2024-03-15 20:13:10,344 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 20:13:22,072 INFO [train_char.py:721] (0/2) Epoch 3, validation: loss=0.1923, simple_loss=0.1987, pruned_loss=0.07892, over 657665.00 frames. 
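The loss fields above follow the pruned-transducer objective: simple_loss comes from the cheap linear-lattice term and pruned_loss from the prune_range-limited lattice, and the reported loss is a weighted sum of the two. A minimal sketch of that combination, assuming the usual icefall warm-up convention with this run's simple_loss_scale=0.5 and warm_step=2000 (the function name and the exact ramp shape are illustrative, not the recipe's code):

    # Sketch: combine simple and pruned transducer losses.
    # Assumes simple_loss_scale=0.5 and warm_step=2000 from this run's config;
    # the warm-up ramp below is illustrative, not the actual train_char.py.
    def combine_losses(simple_loss: float, pruned_loss: float,
                       batch_idx_train: int, warm_step: int = 2000,
                       simple_loss_scale: float = 0.5) -> float:
        if batch_idx_train >= warm_step:
            s, p = simple_loss_scale, 1.0                   # steady state
        else:
            frac = batch_idx_train / warm_step
            s = 1.0 - frac * (1.0 - simple_loss_scale)      # fade simple loss down
            p = 0.1 + 0.9 * frac                            # fade pruned loss in
        return s * simple_loss + p * pruned_loss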
2024-03-15 20:13:22,077 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 20:13:35,082 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-15 20:13:35,556 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=10.92 vs. limit=8.78 2024-03-15 20:13:39,499 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=9.23 vs. limit=8.78 2024-03-15 20:13:44,884 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward3.hidden_balancer.prob, batch_count=3413.3333333333335, ans=0.33999999999999997 2024-03-15 20:13:44,886 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.bypass.skip_rate, batch_count=3413.3333333333335, ans=0.09899494936611666 2024-03-15 20:13:52,693 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.conv_module2.whiten, num_groups=1, num_channels=384, metric=9.30 vs. limit=8.7925 2024-03-15 20:14:00,874 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_skip_rate, batch_count=3446.6666666666665, ans=0.07075000000000001 2024-03-15 20:14:34,410 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=384, metric=13.21 vs. limit=10.135 2024-03-15 20:14:34,481 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.self_attn2.whiten, num_groups=1, num_channels=512, metric=13.66 vs. limit=10.135 2024-03-15 20:14:38,714 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=6.10 vs. limit=6.773333333333333 2024-03-15 20:14:39,178 INFO [train_char.py:689] (0/2) Epoch 3, batch 50, loss[loss=0.3063, simple_loss=0.2881, pruned_loss=0.1639, over 24140.00 frames. ], tot_loss[loss=0.2528, simple_loss=0.2402, pruned_loss=0.1322, over 1083179.57 frames. ], batch size: 251, lr: 4.17e-02, grad_scale: 16.0 2024-03-15 20:14:46,724 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=3546.6666666666665, ans=0.33375 2024-03-15 20:14:59,537 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass.scale_min, batch_count=3580.0, ans=0.7747 2024-03-15 20:15:03,810 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=3580.0, ans=0.2642 2024-03-15 20:15:17,100 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=192, metric=8.80 vs. limit=8.855 2024-03-15 20:15:30,815 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=10.90 vs. limit=8.8675 2024-03-15 20:15:37,031 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 9.786e+01 1.576e+02 1.794e+02 2.249e+02 5.227e+02, threshold=3.588e+02, percent-clipped=2.0 2024-03-15 20:15:39,262 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=384, metric=11.17 vs. 
limit=10.26 2024-03-15 20:15:49,478 INFO [train_char.py:689] (0/2) Epoch 3, batch 100, loss[loss=0.2836, simple_loss=0.2684, pruned_loss=0.1496, over 24214.00 frames. ], tot_loss[loss=0.239, simple_loss=0.2297, pruned_loss=0.1218, over 1908173.58 frames. ], batch size: 311, lr: 4.17e-02, grad_scale: 16.0 2024-03-15 20:15:55,368 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=8.40 vs. limit=8.8925 2024-03-15 20:16:00,128 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=8.79 vs. limit=8.8925 2024-03-15 20:16:06,794 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=3746.6666666666665, ans=0.32437499999999997 2024-03-15 20:16:11,109 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer2.prob, batch_count=3746.6666666666665, ans=0.32437499999999997 2024-03-15 20:16:20,656 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module2.balancer2.prob, batch_count=3780.0, ans=0.3228125 2024-03-15 20:16:22,478 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=10.14 vs. limit=8.9175 2024-03-15 20:16:23,256 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.ff3_skip_rate, batch_count=3780.0, ans=0.014949999999999991 2024-03-15 20:16:46,117 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=512, metric=10.49 vs. limit=10.36 2024-03-15 20:17:02,993 INFO [train_char.py:689] (0/2) Epoch 3, batch 150, loss[loss=0.1621, simple_loss=0.1711, pruned_loss=0.06546, over 24351.00 frames. ], tot_loss[loss=0.2384, simple_loss=0.2315, pruned_loss=0.1188, over 2550096.00 frames. ], batch size: 129, lr: 4.17e-02, grad_scale: 16.0 2024-03-15 20:17:11,457 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=3880.0, ans=0.2612 2024-03-15 20:17:30,073 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff3_skip_rate, batch_count=3913.3333333333335, ans=0.011949999999999988 2024-03-15 20:17:48,843 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass_mid.scale_min, batch_count=3980.0, ans=0.7607 2024-03-15 20:17:51,670 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.ff3_skip_rate, batch_count=3980.0, ans=0.010450000000000001 2024-03-15 20:17:56,275 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer1.prob, batch_count=3980.0, ans=0.31343750000000004 2024-03-15 20:18:02,891 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 1.004e+02 1.483e+02 1.805e+02 2.218e+02 4.449e+02, threshold=3.610e+02, percent-clipped=2.0 2024-03-15 20:18:04,606 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module2.balancer1.prob, batch_count=4013.3333333333335, ans=0.311875 2024-03-15 20:18:15,107 INFO [train_char.py:689] (0/2) Epoch 3, batch 200, loss[loss=0.2201, simple_loss=0.2171, pruned_loss=0.1066, over 24195.00 frames. 
], tot_loss[loss=0.2308, simple_loss=0.2267, pruned_loss=0.1124, over 3054633.64 frames. ], batch size: 344, lr: 4.17e-02, grad_scale: 16.0 2024-03-15 20:18:18,651 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=9.79 vs. limit=9.0175 2024-03-15 20:18:53,227 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.scale_min, batch_count=4113.333333333333, ans=0.7560333333333333 2024-03-15 20:19:04,255 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=4146.666666666667, ans=0.25853333333333334 2024-03-15 20:19:04,341 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass.skip_rate, batch_count=4146.666666666667, ans=0.07 2024-03-15 20:19:16,472 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.nonlin_attention.balancer.prob, batch_count=4180.0, ans=0.3040625 2024-03-15 20:19:23,025 INFO [train_char.py:689] (0/2) Epoch 3, batch 250, loss[loss=0.2587, simple_loss=0.2563, pruned_loss=0.1246, over 24222.00 frames. ], tot_loss[loss=0.2289, simple_loss=0.2262, pruned_loss=0.1101, over 3444629.39 frames. ], batch size: 266, lr: 4.16e-02, grad_scale: 16.0 2024-03-15 20:19:36,794 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.attention_skip_rate, batch_count=4246.666666666667, ans=0.04897222222222222 2024-03-15 20:19:52,753 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=512, metric=10.20 vs. limit=10.71 2024-03-15 20:20:04,683 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=9.61 vs. limit=9.1175 2024-03-15 20:20:17,388 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=9.34 vs. limit=9.1175 2024-03-15 20:20:23,327 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 9.012e+01 1.394e+02 1.554e+02 1.862e+02 4.696e+02, threshold=3.109e+02, percent-clipped=1.0 2024-03-15 20:20:35,121 INFO [train_char.py:689] (0/2) Epoch 3, batch 300, loss[loss=0.1707, simple_loss=0.1857, pruned_loss=0.0666, over 24401.00 frames. ], tot_loss[loss=0.2256, simple_loss=0.225, pruned_loss=0.1069, over 3752908.57 frames. ], batch size: 165, lr: 4.16e-02, grad_scale: 16.0 2024-03-15 20:20:36,822 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=4380.0, ans=0.2946875 2024-03-15 20:20:45,368 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=6.59 vs. limit=6.095 2024-03-15 20:20:47,454 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module2.balancer2.prob, batch_count=4413.333333333333, ans=0.29312499999999997 2024-03-15 20:20:47,986 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=10.96 vs. 
limit=9.155 2024-03-15 20:20:48,780 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass_mid.scale_min, batch_count=4413.333333333333, ans=0.7455333333333334 2024-03-15 20:21:02,182 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=6.37 vs. limit=6.111666666666666 2024-03-15 20:21:11,323 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=4446.666666666667, ans=0.25553333333333333 2024-03-15 20:21:14,737 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=4.35 vs. limit=5.792 2024-03-15 20:21:21,922 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer2.prob, batch_count=4480.0, ans=0.29000000000000004 2024-03-15 20:21:25,959 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.out_combiner.scale_min, batch_count=4480.0, ans=0.7432000000000001 2024-03-15 20:21:27,118 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=4513.333333333333, ans=0.2548666666666667 2024-03-15 20:21:41,904 INFO [train_char.py:689] (0/2) Epoch 3, batch 350, loss[loss=0.2409, simple_loss=0.245, pruned_loss=0.1113, over 24232.00 frames. ], tot_loss[loss=0.221, simple_loss=0.2223, pruned_loss=0.1032, over 3994865.79 frames. ], batch size: 212, lr: 4.16e-02, grad_scale: 16.0 2024-03-15 20:21:42,629 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=384, metric=11.08 vs. limit=10.91 2024-03-15 20:21:46,649 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.whiten, num_groups=1, num_channels=512, metric=6.32 vs. limit=5.818666666666667 2024-03-15 20:21:53,984 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=7.71 vs. limit=9.205 2024-03-15 20:21:57,192 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module2.balancer2.prob, batch_count=4580.0, ans=0.28531249999999997 2024-03-15 20:22:04,343 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.self_attn2.whiten, num_groups=1, num_channels=512, metric=12.14 vs. limit=10.935 2024-03-15 20:22:36,928 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer1.prob, batch_count=4646.666666666667, ans=0.28218750000000004 2024-03-15 20:22:40,655 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 1.064e+02 1.434e+02 1.778e+02 2.135e+02 4.568e+02, threshold=3.555e+02, percent-clipped=3.0 2024-03-15 20:22:44,046 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=8.59 vs. limit=9.254999999999999 2024-03-15 20:22:52,747 INFO [train_char.py:689] (0/2) Epoch 3, batch 400, loss[loss=0.1958, simple_loss=0.2018, pruned_loss=0.08886, over 24249.00 frames. ], tot_loss[loss=0.2177, simple_loss=0.2208, pruned_loss=0.1004, over 4181950.07 frames. 
], batch size: 328, lr: 4.16e-02, grad_scale: 32.0 2024-03-15 20:22:52,983 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward3.hidden_balancer.prob, batch_count=4713.333333333333, ans=0.2790625 2024-03-15 20:22:53,514 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=6.59 vs. limit=6.178333333333333 2024-03-15 20:23:20,040 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=12.07 vs. limit=11.085 2024-03-15 20:23:27,701 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_skip_rate, batch_count=4780.0, ans=0.04675 2024-03-15 20:23:34,152 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.skip_rate, batch_count=4813.333333333333, ans=0.035 2024-03-15 20:23:58,863 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module2.balancer2.prob, batch_count=4846.666666666667, ans=0.2728125 2024-03-15 20:24:01,515 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=4880.0, ans=0.04633333333333334 2024-03-15 20:24:02,679 INFO [train_char.py:689] (0/2) Epoch 3, batch 450, loss[loss=0.2199, simple_loss=0.2323, pruned_loss=0.09571, over 24309.00 frames. ], tot_loss[loss=0.2157, simple_loss=0.2205, pruned_loss=0.09844, over 4326858.06 frames. ], batch size: 267, lr: 4.15e-02, grad_scale: 32.0 2024-03-15 20:24:08,185 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.1.self_attn_weights, loss-sum=8.167e-01 2024-03-15 20:24:12,454 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=8.96 vs. limit=7.4399999999999995 2024-03-15 20:24:19,992 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=4913.333333333333, ans=0.26968749999999997 2024-03-15 20:24:41,385 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff2_skip_rate, batch_count=4946.666666666667, ans=0.009794202898550725 2024-03-15 20:24:58,414 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 1.102e+02 1.409e+02 1.576e+02 1.756e+02 2.496e+02, threshold=3.151e+02, percent-clipped=0.0 2024-03-15 20:25:10,095 INFO [train_char.py:689] (0/2) Epoch 3, batch 500, loss[loss=0.201, simple_loss=0.2201, pruned_loss=0.08224, over 24214.00 frames. ], tot_loss[loss=0.2132, simple_loss=0.2203, pruned_loss=0.09579, over 4439178.36 frames. ], batch size: 212, lr: 4.15e-02, grad_scale: 32.0 2024-03-15 20:25:19,619 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-3.pt 2024-03-15 20:26:07,955 INFO [train_char.py:689] (0/2) Epoch 4, batch 0, loss[loss=0.1941, simple_loss=0.2014, pruned_loss=0.08841, over 24195.00 frames. ], tot_loss[loss=0.1941, simple_loss=0.2014, pruned_loss=0.08841, over 24195.00 frames. ], batch size: 311, lr: 3.88e-02, grad_scale: 32.0 2024-03-15 20:26:07,955 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 20:26:20,001 INFO [train_char.py:721] (0/2) Epoch 4, validation: loss=0.1315, simple_loss=0.1639, pruned_loss=0.03821, over 657665.00 frames. 
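The stepwise drop in the logged lr at each epoch boundary (4.40e-02 in epoch 2, 4.17e-02 in epoch 3, 3.88e-02 in epoch 4) matches the shape of icefall's Eden schedule, driven here by base_lr=0.045, lr_batches=7500 and lr_epochs=3.5. A sketch of that formula, assuming the standard definition and ignoring any extra warm-up factor, so it will not reproduce the logged values to the last digit:

    # Eden schedule (sketch): lr decays in both batch count and epoch.
    def eden_lr(base_lr: float, batch: int, epoch: float,
                lr_batches: float = 7500.0, lr_epochs: float = 3.5) -> float:
        batch_factor = ((batch / lr_batches) ** 2 + 1.0) ** -0.25
        epoch_factor = ((epoch / lr_epochs) ** 2 + 1.0) ** -0.25
        return base_lr * batch_factor * epoch_factor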
2024-03-15 20:26:20,002 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 20:26:24,672 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=12.73 vs. limit=9.401250000000001 2024-03-15 20:26:28,517 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=12.04 vs. limit=7.535 2024-03-15 20:26:35,534 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=10.20 vs. limit=9.41375 2024-03-15 20:26:55,114 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.self_attn2.whiten, num_groups=1, num_channels=192, metric=10.53 vs. limit=11.3525 2024-03-15 20:27:08,402 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.attention_skip_rate, batch_count=5170.0, ans=0.045125000000000005 2024-03-15 20:27:11,195 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=5170.0, ans=0.0 2024-03-15 20:27:14,021 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=5170.0, ans=0.2483 2024-03-15 20:27:19,168 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=144, metric=6.00 vs. limit=6.300833333333333 2024-03-15 20:27:27,773 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_skip_rate, batch_count=5203.333333333333, ans=0.044986111111111116 2024-03-15 20:27:33,159 INFO [train_char.py:689] (0/2) Epoch 4, batch 50, loss[loss=0.167, simple_loss=0.1783, pruned_loss=0.07279, over 24058.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.1975, pruned_loss=0.08099, over 1082679.39 frames. ], batch size: 361, lr: 3.88e-02, grad_scale: 32.0 2024-03-15 20:27:36,708 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=6.30 vs. limit=6.309166666666667 2024-03-15 20:27:49,565 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.attention_skip_rate, batch_count=5270.0, ans=0.044708333333333336 2024-03-15 20:27:53,616 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.nonlin_attention.balancer.max_positive, batch_count=5270.0, ans=0.8027 2024-03-15 20:28:09,092 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=8.45 vs. limit=9.48875 2024-03-15 20:28:09,760 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.attention_skip_rate, batch_count=5303.333333333333, ans=0.044569444444444446 2024-03-15 20:28:12,302 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=5303.333333333333, ans=0.0 2024-03-15 20:28:18,215 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.skip_rate, batch_count=5336.666666666667, ans=0.04949747468305833 2024-03-15 20:28:18,830 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=6.75 vs. 
limit=6.3341666666666665 2024-03-15 20:28:22,009 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 1.159e+02 1.441e+02 1.598e+02 1.918e+02 4.201e+02, threshold=3.197e+02, percent-clipped=3.0 2024-03-15 20:28:25,328 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.whiten2.whitening_limit, batch_count=5336.666666666667, ans=7.668333333333333 2024-03-15 20:28:38,896 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=8.73 vs. limit=7.6850000000000005 2024-03-15 20:28:43,246 INFO [train_char.py:689] (0/2) Epoch 4, batch 100, loss[loss=0.182, simple_loss=0.1928, pruned_loss=0.08117, over 24183.00 frames. ], tot_loss[loss=0.183, simple_loss=0.1992, pruned_loss=0.07704, over 1912972.62 frames. ], batch size: 344, lr: 3.88e-02, grad_scale: 32.0 2024-03-15 20:28:51,250 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.balancer.max_positive, batch_count=5403.333333333333, ans=0.8040333333333333 2024-03-15 20:29:09,797 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.attention_skip_rate, batch_count=5436.666666666667, ans=0.04401388888888889 2024-03-15 20:29:17,723 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer1.min_positive, batch_count=5470.0, ans=0.03290625 2024-03-15 20:29:20,369 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.attention_skip_rate, batch_count=5470.0, ans=0.043875000000000004 2024-03-15 20:29:39,724 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=10.15 vs. limit=9.56375 2024-03-15 20:29:45,101 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=6.58 vs. limit=6.384166666666667 2024-03-15 20:29:53,577 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module2.balancer1.prob, batch_count=5570.0, ans=0.23890625 2024-03-15 20:29:54,674 INFO [train_char.py:689] (0/2) Epoch 4, batch 150, loss[loss=0.2166, simple_loss=0.2296, pruned_loss=0.09734, over 24139.00 frames. ], tot_loss[loss=0.18, simple_loss=0.1974, pruned_loss=0.07534, over 2555777.15 frames. ], batch size: 279, lr: 3.87e-02, grad_scale: 32.0 2024-03-15 20:29:58,081 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=9.52 vs. limit=9.588750000000001 2024-03-15 20:29:59,701 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.conv_module1.whiten, num_groups=1, num_channels=192, metric=5.25 vs. limit=9.588750000000001 2024-03-15 20:30:04,242 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=5570.0, ans=0.2443 2024-03-15 20:30:05,626 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=5570.0, ans=0.2443 2024-03-15 20:30:06,522 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.10 vs. 
limit=3.8355 2024-03-15 20:30:13,738 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer1.min_positive, batch_count=5603.333333333333, ans=0.032489583333333336 2024-03-15 20:30:18,005 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.ff3_skip_rate, batch_count=5603.333333333333, ans=0.00965144927536232 2024-03-15 20:30:40,118 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.375e+02 1.537e+02 1.686e+02 3.623e+02, threshold=3.073e+02, percent-clipped=1.0 2024-03-15 20:30:40,362 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_skip_rate, batch_count=5670.0, ans=0.04304166666666667 2024-03-15 20:30:44,519 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.prob, batch_count=5670.0, ans=0.23421874999999998 2024-03-15 20:30:45,710 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward3.hidden_balancer.prob, batch_count=5670.0, ans=0.23421874999999998 2024-03-15 20:31:04,443 INFO [train_char.py:689] (0/2) Epoch 4, batch 200, loss[loss=0.1316, simple_loss=0.1591, pruned_loss=0.04684, over 24338.00 frames. ], tot_loss[loss=0.1776, simple_loss=0.1965, pruned_loss=0.07358, over 3059218.04 frames. ], batch size: 116, lr: 3.87e-02, grad_scale: 32.0 2024-03-15 20:31:04,677 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff3_skip_rate, batch_count=5736.666666666667, ans=0.009622463768115942 2024-03-15 20:31:22,094 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=5770.0, ans=0.0 2024-03-15 20:31:24,683 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward3.hidden_balancer.prob, batch_count=5770.0, ans=0.22953125000000002 2024-03-15 20:31:32,982 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass.skip_rate, batch_count=5803.333333333333, ans=0.04949747468305833 2024-03-15 20:31:33,363 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=9.34 vs. limit=9.67625 2024-03-15 20:31:40,184 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=8.11 vs. limit=9.67625 2024-03-15 20:31:47,477 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer2.min_positive, batch_count=5836.666666666667, ans=0.06352083333333333 2024-03-15 20:32:14,789 INFO [train_char.py:689] (0/2) Epoch 4, batch 250, loss[loss=0.1803, simple_loss=0.1946, pruned_loss=0.08012, over 24162.00 frames. ], tot_loss[loss=0.176, simple_loss=0.1965, pruned_loss=0.07238, over 3440673.45 frames. ], batch size: 362, lr: 3.87e-02, grad_scale: 32.0 2024-03-15 20:32:26,984 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass_mid.scale_min, batch_count=5936.666666666667, ans=0.6922166666666667 2024-03-15 20:32:30,090 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=3.91 vs. 
limit=3.8905 2024-03-15 20:32:34,867 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward3.hidden_balancer.prob, batch_count=5936.666666666667, ans=0.22171875000000002 2024-03-15 20:32:45,221 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.bypass.scale_min, batch_count=5970.0, ans=0.69105 2024-03-15 20:32:59,463 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 1.052e+02 1.330e+02 1.542e+02 1.707e+02 5.716e+02, threshold=3.084e+02, percent-clipped=1.0 2024-03-15 20:33:06,609 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=10.25 vs. limit=9.76375 2024-03-15 20:33:06,784 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.whiten, num_groups=1, num_channels=512, metric=6.44 vs. limit=6.414666666666667 2024-03-15 20:33:14,173 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer1.prob, batch_count=6036.666666666667, ans=0.21703125 2024-03-15 20:33:20,518 INFO [train_char.py:689] (0/2) Epoch 4, batch 300, loss[loss=0.122, simple_loss=0.1564, pruned_loss=0.04009, over 24254.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.1949, pruned_loss=0.07048, over 3744085.89 frames. ], batch size: 134, lr: 3.87e-02, grad_scale: 32.0 2024-03-15 20:33:43,238 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=11.43 vs. limit=8.051666666666666 2024-03-15 20:33:46,581 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=6103.333333333333, ans=0.21390625000000002 2024-03-15 20:33:49,667 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=6.96 vs. limit=6.534166666666667 2024-03-15 20:33:50,524 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.attention_skip_rate, batch_count=6136.666666666667, ans=0.04109722222222222 2024-03-15 20:33:51,742 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass.scale_min, batch_count=6136.666666666667, ans=0.6852166666666667 2024-03-15 20:33:55,150 INFO [scaling.py:1023] (0/2) Whitening: name=encoder_embed.out_whiten, num_groups=1, num_channels=192, metric=5.34 vs. limit=5.227333333333333 2024-03-15 20:34:01,968 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=6170.0, ans=0.21078124999999998 2024-03-15 20:34:14,965 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=6203.333333333333, ans=0.04081944444444445 2024-03-15 20:34:16,304 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.balancer1.prob, batch_count=6203.333333333333, ans=0.20921875 2024-03-15 20:34:28,745 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer1.max_abs, batch_count=6236.666666666667, ans=8.897916666666667 2024-03-15 20:34:29,791 INFO [train_char.py:689] (0/2) Epoch 4, batch 350, loss[loss=0.1565, simple_loss=0.1716, pruned_loss=0.06919, over 24034.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.1944, pruned_loss=0.06976, over 3986294.20 frames. 
], batch size: 381, lr: 3.86e-02, grad_scale: 32.0 2024-03-15 20:34:40,819 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn2.whiten.whitening_limit, batch_count=6236.666666666667, ans=12.1775 2024-03-15 20:35:13,930 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 9.936e+01 1.387e+02 1.543e+02 1.746e+02 2.964e+02, threshold=3.085e+02, percent-clipped=0.0 2024-03-15 20:35:17,220 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.self_attn1.whiten, num_groups=1, num_channels=512, metric=14.18 vs. limit=12.252500000000001 2024-03-15 20:35:36,173 INFO [train_char.py:689] (0/2) Epoch 4, batch 400, loss[loss=0.1858, simple_loss=0.2163, pruned_loss=0.07611, over 24107.00 frames. ], tot_loss[loss=0.171, simple_loss=0.1957, pruned_loss=0.06949, over 4174665.05 frames. ], batch size: 236, lr: 3.86e-02, grad_scale: 32.0 2024-03-15 20:35:50,955 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.prob, batch_count=6436.666666666667, ans=0.19828125000000002 2024-03-15 20:35:57,773 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=10.59 vs. limit=9.91375 2024-03-15 20:35:59,766 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.nonlin_attention.balancer.prob, batch_count=6436.666666666667, ans=0.19828125000000002 2024-03-15 20:36:01,591 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=9.71 vs. limit=9.91375 2024-03-15 20:36:10,255 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward3.hidden_balancer.prob, batch_count=6470.0, ans=0.19671875 2024-03-15 20:36:16,226 INFO [scaling.py:1023] (0/2) Whitening: name=encoder_embed.convnext.out_whiten, num_groups=1, num_channels=128, metric=4.22 vs. limit=5.0 2024-03-15 20:36:34,069 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=384, metric=4.89 vs. limit=9.95125 2024-03-15 20:36:43,147 INFO [train_char.py:689] (0/2) Epoch 4, batch 450, loss[loss=0.1319, simple_loss=0.1691, pruned_loss=0.04671, over 24299.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.1972, pruned_loss=0.0696, over 4320092.35 frames. 
], batch size: 180, lr: 3.86e-02, grad_scale: 32.0 2024-03-15 20:36:50,951 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=6570.0, ans=0.19203125 2024-03-15 20:36:53,550 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer2.prob, batch_count=6570.0, ans=0.19203125 2024-03-15 20:36:53,601 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer1.prob, batch_count=6570.0, ans=0.19203125 2024-03-15 20:36:58,403 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.ff3_skip_rate, batch_count=6603.333333333333, ans=0.009434057971014494 2024-03-15 20:37:12,184 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_skip_rate, batch_count=6636.666666666667, ans=0.03901388888888889 2024-03-15 20:37:27,417 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 1.128e+02 1.382e+02 1.515e+02 1.771e+02 3.341e+02, threshold=3.031e+02, percent-clipped=1.0 2024-03-15 20:37:27,961 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=256, metric=12.64 vs. limit=12.502500000000001 2024-03-15 20:37:31,919 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer2.prob, batch_count=6670.0, ans=0.18734374999999998 2024-03-15 20:37:32,019 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=6670.0, ans=0.18734374999999998 2024-03-15 20:37:32,527 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=384, metric=13.66 vs. limit=12.502500000000001 2024-03-15 20:37:48,141 INFO [train_char.py:689] (0/2) Epoch 4, batch 500, loss[loss=0.1672, simple_loss=0.2065, pruned_loss=0.06395, over 24122.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.1979, pruned_loss=0.06857, over 4435830.04 frames. ], batch size: 236, lr: 3.85e-02, grad_scale: 32.0 2024-03-15 20:37:57,192 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-4.pt 2024-03-15 20:38:43,778 INFO [train_char.py:689] (0/2) Epoch 5, batch 0, loss[loss=0.1633, simple_loss=0.1878, pruned_loss=0.06943, over 24185.00 frames. ], tot_loss[loss=0.1633, simple_loss=0.1878, pruned_loss=0.06943, over 24185.00 frames. ], batch size: 311, lr: 3.59e-02, grad_scale: 32.0 2024-03-15 20:38:43,779 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 20:38:56,466 INFO [train_char.py:721] (0/2) Epoch 5, validation: loss=0.1042, simple_loss=0.1552, pruned_loss=0.02658, over 657665.00 frames. 
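Each optim.py warning lists the quantiles (min/25%/median/75%/max) of recent per-batch gradient norms, and the threshold is clipping_scale times the median: in the warning just above, 2.0 x 1.515e+02 = 3.031e+02, and percent-clipped reports how often a batch's norm exceeded that threshold. A small sketch of the bookkeeping implied by those log lines (helper name hypothetical, not the actual optim.py):

    import torch

    # Sketch: derive a clip threshold from the median of recent grad norms,
    # matching the "quartiles ... threshold ... percent-clipped" log format.
    def clip_stats(grad_norms: torch.Tensor, clipping_scale: float = 2.0):
        qs = torch.quantile(grad_norms.float(),
                            torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
        threshold = clipping_scale * qs[2]            # 2x the median, as logged
        percent_clipped = (grad_norms > threshold).float().mean() * 100.0
        return qs, threshold, percent_clipped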
2024-03-15 20:38:56,466 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 20:39:15,647 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.prob, batch_count=6793.333333333333, ans=0.18156250000000002 2024-03-15 20:39:21,756 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.self_attn1.whiten.whitening_limit, batch_count=6793.333333333333, ans=12.594999999999999 2024-03-15 20:39:33,207 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.min_positive, batch_count=6826.666666666667, ans=0.028666666666666667 2024-03-15 20:39:42,501 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.balancer1.prob, batch_count=6860.0, ans=0.17843750000000003 2024-03-15 20:40:00,424 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=12.11 vs. limit=10.085 2024-03-15 20:40:06,296 INFO [train_char.py:689] (0/2) Epoch 5, batch 50, loss[loss=0.1627, simple_loss=0.2005, pruned_loss=0.06249, over 24129.00 frames. ], tot_loss[loss=0.1497, simple_loss=0.1833, pruned_loss=0.0581, over 1089857.20 frames. ], batch size: 279, lr: 3.58e-02, grad_scale: 32.0 2024-03-15 20:40:37,824 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=6993.333333333333, ans=0.1721875 2024-03-15 20:40:39,324 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.conv_module1.whiten, num_groups=1, num_channels=384, metric=3.82 vs. limit=10.1225 2024-03-15 20:40:47,745 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 1.107e+02 1.334e+02 1.501e+02 1.668e+02 3.121e+02, threshold=3.001e+02, percent-clipped=1.0 2024-03-15 20:40:51,880 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.out_combiner.scale_min, batch_count=7026.666666666667, ans=0.6540666666666667 2024-03-15 20:40:56,413 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=9.42 vs. limit=10.135 2024-03-15 20:41:17,640 INFO [train_char.py:689] (0/2) Epoch 5, batch 100, loss[loss=0.1554, simple_loss=0.1844, pruned_loss=0.06323, over 24158.00 frames. ], tot_loss[loss=0.1468, simple_loss=0.1811, pruned_loss=0.05621, over 1916418.67 frames. ], batch size: 344, lr: 3.58e-02, grad_scale: 32.0 2024-03-15 20:41:17,943 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module2.balancer2.prob, batch_count=7093.333333333333, ans=0.16749999999999998 2024-03-15 20:41:31,980 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.balancer1.max_abs, batch_count=7126.666666666667, ans=9.454166666666666 2024-03-15 20:41:32,030 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_skip_rate, batch_count=7126.666666666667, ans=0.036972222222222226 2024-03-15 20:41:41,046 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.2.self_attn_weights, loss-sum=8.421e-03 2024-03-15 20:41:47,002 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=8.21 vs. 
limit=10.185 2024-03-15 20:42:00,570 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.scale_min, batch_count=7193.333333333333, ans=0.6482333333333334 2024-03-15 20:42:06,994 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=7193.333333333333, ans=0.22806666666666667 2024-03-15 20:42:26,269 INFO [train_char.py:689] (0/2) Epoch 5, batch 150, loss[loss=0.1245, simple_loss=0.1688, pruned_loss=0.04009, over 24287.00 frames. ], tot_loss[loss=0.1473, simple_loss=0.1822, pruned_loss=0.05615, over 2557726.75 frames. ], batch size: 146, lr: 3.58e-02, grad_scale: 32.0 2024-03-15 20:42:42,727 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=384, metric=4.98 vs. limit=10.235 2024-03-15 20:42:45,291 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=9.52 vs. limit=10.235 2024-03-15 20:42:47,262 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=7293.333333333333, ans=0.15812500000000002 2024-03-15 20:42:48,558 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer2.prob, batch_count=7293.333333333333, ans=0.15812500000000002 2024-03-15 20:43:01,621 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 9.239e+01 1.296e+02 1.474e+02 1.745e+02 2.581e+02, threshold=2.947e+02, percent-clipped=0.0 2024-03-15 20:43:32,185 INFO [train_char.py:689] (0/2) Epoch 5, batch 200, loss[loss=0.1333, simple_loss=0.1576, pruned_loss=0.05449, over 23821.00 frames. ], tot_loss[loss=0.1476, simple_loss=0.183, pruned_loss=0.05609, over 3056872.83 frames. ], batch size: 439, lr: 3.58e-02, grad_scale: 32.0 2024-03-15 20:43:33,714 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.nonlin_attention.balancer.prob, batch_count=7426.666666666667, ans=0.15187499999999998 2024-03-15 20:43:47,355 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.self_attn2.whiten, num_groups=1, num_channels=192, metric=11.44 vs. limit=13.07 2024-03-15 20:43:50,454 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.ff2_skip_rate, batch_count=7460.0, ans=0.009247826086956523 2024-03-15 20:43:57,073 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=7460.0, ans=0.035583333333333335 2024-03-15 20:44:00,993 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=7493.333333333333, ans=0.22506666666666666 2024-03-15 20:44:13,806 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-15 20:44:31,429 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer2.prob, batch_count=7560.0, ans=0.145625 2024-03-15 20:44:40,046 INFO [train_char.py:689] (0/2) Epoch 5, batch 250, loss[loss=0.1469, simple_loss=0.1811, pruned_loss=0.05639, over 24186.00 frames. ], tot_loss[loss=0.1473, simple_loss=0.1827, pruned_loss=0.05599, over 3441007.74 frames. 
], batch size: 328, lr: 3.57e-02, grad_scale: 32.0 2024-03-15 20:44:42,042 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=11.33 vs. limit=10.3475 2024-03-15 20:44:45,804 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=9.90 vs. limit=10.3475 2024-03-15 20:45:00,110 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=4.10 vs. limit=7.050666666666666 2024-03-15 20:45:17,201 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 1.039e+02 1.273e+02 1.431e+02 1.641e+02 2.139e+02, threshold=2.863e+02, percent-clipped=0.0 2024-03-15 20:45:21,387 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass_mid.scale_min, batch_count=7693.333333333333, ans=0.6307333333333334 2024-03-15 20:45:34,719 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=384, metric=14.43 vs. limit=13.295 2024-03-15 20:45:34,869 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.whiten1.whitening_limit, batch_count=7726.666666666667, ans=6.931666666666667 2024-03-15 20:45:43,968 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=12.25 vs. limit=13.295 2024-03-15 20:45:46,771 INFO [train_char.py:689] (0/2) Epoch 5, batch 300, loss[loss=0.125, simple_loss=0.1644, pruned_loss=0.04282, over 24352.00 frames. ], tot_loss[loss=0.147, simple_loss=0.183, pruned_loss=0.05547, over 3745907.11 frames. ], batch size: 158, lr: 3.57e-02, grad_scale: 32.0 2024-03-15 20:45:49,693 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-15 20:45:56,402 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.whiten1.whitening_limit, batch_count=7760.0, ans=6.9399999999999995 2024-03-15 20:46:51,033 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=7.63 vs. limit=10.46 2024-03-15 20:46:54,025 INFO [train_char.py:689] (0/2) Epoch 5, batch 350, loss[loss=0.1356, simple_loss=0.1739, pruned_loss=0.04863, over 23954.00 frames. ], tot_loss[loss=0.1479, simple_loss=0.1846, pruned_loss=0.05559, over 3981198.25 frames. ], batch size: 107, lr: 3.57e-02, grad_scale: 32.0 2024-03-15 20:46:56,758 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer1.min_positive, batch_count=7926.666666666667, ans=0.025229166666666667 2024-03-15 20:47:10,927 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=7960.0, ans=0.12687500000000002 2024-03-15 20:47:10,987 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer2.min_positive, batch_count=7960.0, ans=0.05025 2024-03-15 20:47:12,939 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=10.81 vs. 
limit=8.98 2024-03-15 20:47:16,067 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=7960.0, ans=0.0 2024-03-15 20:47:28,046 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff3_skip_rate, batch_count=7993.333333333333, ans=0.009131884057971015 2024-03-15 20:47:29,123 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 1.080e+02 1.281e+02 1.438e+02 1.669e+02 2.808e+02, threshold=2.877e+02, percent-clipped=0.0 2024-03-15 20:47:54,083 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=8.23 vs. limit=9.030000000000001 2024-03-15 20:47:59,564 INFO [train_char.py:689] (0/2) Epoch 5, batch 400, loss[loss=0.1631, simple_loss=0.2004, pruned_loss=0.06293, over 24112.00 frames. ], tot_loss[loss=0.1481, simple_loss=0.1849, pruned_loss=0.05563, over 4167885.93 frames. ], batch size: 279, lr: 3.56e-02, grad_scale: 32.0 2024-03-15 20:48:02,793 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=7.10 vs. limit=7.023333333333333 2024-03-15 20:48:12,960 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=256, metric=13.81 vs. limit=13.594999999999999 2024-03-15 20:48:14,637 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.hidden_balancer.prob, batch_count=8126.666666666667, ans=0.125 2024-03-15 20:48:31,161 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=8160.0, ans=0.21839999999999998 2024-03-15 20:49:00,642 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.hidden_balancer.prob, batch_count=8226.666666666666, ans=0.125 2024-03-15 20:49:03,119 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward2.hidden_balancer.prob, batch_count=8260.0, ans=0.125 2024-03-15 20:49:04,040 INFO [train_char.py:689] (0/2) Epoch 5, batch 450, loss[loss=0.1225, simple_loss=0.1668, pruned_loss=0.03907, over 24396.00 frames. ], tot_loss[loss=0.1473, simple_loss=0.1849, pruned_loss=0.05487, over 4317489.67 frames. ], batch size: 158, lr: 3.56e-02, grad_scale: 32.0 2024-03-15 20:49:18,927 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.ff3_skip_rate, batch_count=8293.333333333334, ans=0.009066666666666667 2024-03-15 20:49:32,618 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=7.23 vs. 
limit=7.081666666666667 2024-03-15 20:49:33,293 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=8326.666666666666, ans=0.21673333333333333 2024-03-15 20:49:37,072 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=8326.666666666666, ans=0.21673333333333333 2024-03-15 20:49:38,031 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 1.036e+02 1.311e+02 1.413e+02 1.606e+02 2.821e+02, threshold=2.826e+02, percent-clipped=0.0 2024-03-15 20:49:43,228 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.scale_min, batch_count=8360.0, ans=0.6073999999999999 2024-03-15 20:49:45,667 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.scale_min, batch_count=8360.0, ans=0.6073999999999999 2024-03-15 20:49:49,420 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff2_skip_rate, batch_count=8360.0, ans=0.009052173913043478 2024-03-15 20:50:07,394 INFO [train_char.py:689] (0/2) Epoch 5, batch 500, loss[loss=0.1426, simple_loss=0.1875, pruned_loss=0.04889, over 24130.00 frames. ], tot_loss[loss=0.1479, simple_loss=0.1864, pruned_loss=0.05471, over 4433164.48 frames. ], batch size: 188, lr: 3.55e-02, grad_scale: 32.0 2024-03-15 20:50:10,635 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=384, metric=15.24 vs. limit=13.82 2024-03-15 20:50:12,534 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=8426.666666666666, ans=0.125 2024-03-15 20:50:16,375 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-5.pt 2024-03-15 20:51:05,269 INFO [train_char.py:689] (0/2) Epoch 6, batch 0, loss[loss=0.1102, simple_loss=0.1548, pruned_loss=0.03279, over 24453.00 frames. ], tot_loss[loss=0.1102, simple_loss=0.1548, pruned_loss=0.03279, over 24453.00 frames. ], batch size: 165, lr: 3.32e-02, grad_scale: 32.0 2024-03-15 20:51:05,270 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 20:51:16,938 INFO [train_char.py:721] (0/2) Epoch 6, validation: loss=0.09386, simple_loss=0.1462, pruned_loss=0.02078, over 657665.00 frames. 
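The scaling.py lines report ScheduledFloat parameters, which in icefall are piecewise-linear functions of the global batch count; that is why values such as the skip rates and balancer probabilities shrink steadily as batch_count grows. An illustrative re-implementation of the interpolation (function name hypothetical):

    # Sketch: piecewise-linear schedule over batch count, ScheduledFloat-style.
    def scheduled_float(batch_count: float, *points) -> float:
        # points: (batch, value) pairs sorted by batch count
        if batch_count <= points[0][0]:
            return points[0][1]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if batch_count <= x1:
                t = (batch_count - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
        return points[-1][1]

    # e.g. a skip rate decaying from 0.1 at batch 0 to 0.0 by batch 4000:
    rate = scheduled_float(2756.0, (0.0, 0.1), (4000.0, 0.0))  # -> ~0.031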
2024-03-15 20:51:16,938 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 20:51:31,742 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.ff3_skip_rate, batch_count=8483.333333333334, ans=0.00902536231884058 2024-03-15 20:51:51,397 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer1.prob, batch_count=8516.666666666666, ans=0.125 2024-03-15 20:51:51,425 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=8516.666666666666, ans=0.0 2024-03-15 20:51:59,235 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.ff2_skip_rate, batch_count=8550.0, ans=0.009010869565217391 2024-03-15 20:52:11,347 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.dropout.p, batch_count=8583.333333333334, ans=0.21416666666666667 2024-03-15 20:52:12,594 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.conv.8.prob, batch_count=8583.333333333334, ans=0.125 2024-03-15 20:52:14,138 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=8583.333333333334, ans=0.21416666666666667 2024-03-15 20:52:16,684 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=8583.333333333334, ans=0.125 2024-03-15 20:52:22,882 INFO [train_char.py:689] (0/2) Epoch 6, batch 50, loss[loss=0.1584, simple_loss=0.2033, pruned_loss=0.05672, over 24270.00 frames. ], tot_loss[loss=0.1394, simple_loss=0.1788, pruned_loss=0.04993, over 1082371.03 frames. ], batch size: 267, lr: 3.31e-02, grad_scale: 32.0 2024-03-15 20:52:57,567 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.674e+01 1.220e+02 1.395e+02 1.528e+02 2.413e+02, threshold=2.789e+02, percent-clipped=0.0 2024-03-15 20:52:57,858 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.bypass_mid.scale_min, batch_count=8683.333333333334, ans=0.5960833333333333 2024-03-15 20:52:57,870 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=8683.333333333334, ans=0.0 2024-03-15 20:53:03,956 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.scale_min, batch_count=8683.333333333334, ans=0.5960833333333333 2024-03-15 20:53:07,636 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.attention_skip_rate, batch_count=8683.333333333334, ans=0.03048611111111111 2024-03-15 20:53:12,907 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer1.prob, batch_count=8716.666666666666, ans=0.125 2024-03-15 20:53:21,736 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer2.prob, batch_count=8750.0, ans=0.125 2024-03-15 20:53:29,138 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.2.self_attn_weights, loss-sum=2.710e-03 2024-03-15 20:53:35,214 INFO [train_char.py:689] (0/2) Epoch 6, batch 100, loss[loss=0.1532, simple_loss=0.1892, pruned_loss=0.05861, over 24264.00 frames. ], tot_loss[loss=0.1373, simple_loss=0.1778, pruned_loss=0.04841, over 1914675.65 frames. 
], batch size: 296, lr: 3.31e-02, grad_scale: 32.0 2024-03-15 20:53:38,150 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=8783.333333333334, ans=0.21216666666666667 2024-03-15 20:53:55,046 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.self_attn2.whiten, num_groups=1, num_channels=384, metric=14.06 vs. limit=14.1125 2024-03-15 20:53:57,173 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.3.self_attn_weights, loss-sum=0.000e+00 2024-03-15 20:53:57,189 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff3_skip_rate, batch_count=8816.666666666666, ans=0.008952898550724639 2024-03-15 20:54:19,427 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=512, metric=15.48 vs. limit=14.1625 2024-03-15 20:54:39,060 INFO [train_char.py:689] (0/2) Epoch 6, batch 150, loss[loss=0.09996, simple_loss=0.1217, pruned_loss=0.03909, over 22794.00 frames. ], tot_loss[loss=0.1356, simple_loss=0.1758, pruned_loss=0.0477, over 2559366.46 frames. ], batch size: 483, lr: 3.31e-02, grad_scale: 32.0 2024-03-15 20:54:46,968 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer2.prob, batch_count=8950.0, ans=0.125 2024-03-15 20:55:04,793 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 9.003e+01 1.266e+02 1.394e+02 1.550e+02 2.577e+02, threshold=2.787e+02, percent-clipped=0.0 2024-03-15 20:55:21,228 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=10.83 vs. limit=10.89375 2024-03-15 20:55:28,102 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer1.prob, batch_count=9050.0, ans=0.125 2024-03-15 20:55:50,073 INFO [train_char.py:689] (0/2) Epoch 6, batch 200, loss[loss=0.126, simple_loss=0.1644, pruned_loss=0.04376, over 24253.00 frames. ], tot_loss[loss=0.1339, simple_loss=0.1743, pruned_loss=0.04678, over 3058677.01 frames. ], batch size: 328, lr: 3.30e-02, grad_scale: 32.0 2024-03-15 20:55:51,680 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass.skip_rate, batch_count=9116.666666666666, ans=0.04949747468305833 2024-03-15 20:55:58,470 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=7.40 vs. limit=7.279166666666667 2024-03-15 20:56:02,016 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass_mid.scale_min, batch_count=9150.0, ans=0.57975 2024-03-15 20:56:08,325 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-15 20:56:10,299 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=192, metric=8.80 vs. 
limit=10.93125 2024-03-15 20:56:24,718 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff2_skip_rate, batch_count=9183.333333333334, ans=0.008873188405797101 2024-03-15 20:56:51,360 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer2.prob, batch_count=9250.0, ans=0.125 2024-03-15 20:56:51,444 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.out_combiner.scale_min, batch_count=9250.0, ans=0.57625 2024-03-15 20:56:53,706 INFO [train_char.py:689] (0/2) Epoch 6, batch 250, loss[loss=0.141, simple_loss=0.1791, pruned_loss=0.05148, over 24337.00 frames. ], tot_loss[loss=0.1329, simple_loss=0.1737, pruned_loss=0.04602, over 3448325.91 frames. ], batch size: 297, lr: 3.30e-02, grad_scale: 32.0 2024-03-15 20:56:53,952 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=9283.333333333334, ans=0.20716666666666667 2024-03-15 20:57:05,225 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.nonlin_attention.balancer.min_positive, batch_count=9316.666666666666, ans=0.15683333333333332 2024-03-15 20:57:07,732 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=9316.666666666666, ans=0.125 2024-03-15 20:57:09,430 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.whiten, num_groups=1, num_channels=384, metric=7.01 vs. limit=7.726666666666667 2024-03-15 20:57:19,277 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.505e+01 1.206e+02 1.322e+02 1.448e+02 1.971e+02, threshold=2.645e+02, percent-clipped=0.0 2024-03-15 20:57:39,469 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=9383.333333333334, ans=0.125 2024-03-15 20:57:47,002 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer1.prob, batch_count=9416.666666666666, ans=0.125 2024-03-15 20:57:47,011 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff2_skip_rate, batch_count=9416.666666666666, ans=0.008822463768115942 2024-03-15 20:57:59,163 INFO [train_char.py:689] (0/2) Epoch 6, batch 300, loss[loss=0.1362, simple_loss=0.1866, pruned_loss=0.04283, over 24110.00 frames. ], tot_loss[loss=0.1315, simple_loss=0.1724, pruned_loss=0.04528, over 3755069.03 frames. ], batch size: 188, lr: 3.30e-02, grad_scale: 32.0 2024-03-15 20:57:59,480 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=9450.0, ans=0.20550000000000002 2024-03-15 20:58:09,956 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=9450.0, ans=0.125 2024-03-15 20:58:57,608 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=11.81 vs. limit=11.09375 2024-03-15 20:59:03,063 INFO [train_char.py:689] (0/2) Epoch 6, batch 350, loss[loss=0.1207, simple_loss=0.1508, pruned_loss=0.04534, over 24017.00 frames. ], tot_loss[loss=0.1329, simple_loss=0.1739, pruned_loss=0.04597, over 3989474.75 frames. 
], batch size: 381, lr: 3.29e-02, grad_scale: 32.0 2024-03-15 20:59:22,099 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=11.12 vs. limit=11.11875 2024-03-15 20:59:23,027 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=9650.0, ans=0.125 2024-03-15 20:59:27,800 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 9.602e+01 1.250e+02 1.405e+02 1.526e+02 4.340e+02, threshold=2.811e+02, percent-clipped=2.0 2024-03-15 20:59:51,084 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=9716.666666666666, ans=0.20283333333333334 2024-03-15 21:00:05,262 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=15.36 vs. limit=11.15625 2024-03-15 21:00:08,422 INFO [train_char.py:689] (0/2) Epoch 6, batch 400, loss[loss=0.1162, simple_loss=0.1555, pruned_loss=0.03847, over 24132.00 frames. ], tot_loss[loss=0.1332, simple_loss=0.1746, pruned_loss=0.04584, over 4179055.17 frames. ], batch size: 362, lr: 3.29e-02, grad_scale: 32.0 2024-03-15 21:00:16,100 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=9783.333333333334, ans=0.20216666666666666 2024-03-15 21:00:31,202 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=9816.666666666666, ans=0.125 2024-03-15 21:00:43,210 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=6.59 vs. limit=7.4625 2024-03-15 21:00:53,372 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.skip_rate, batch_count=9883.333333333334, ans=0.07 2024-03-15 21:00:55,965 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=9.64 vs. limit=11.20625 2024-03-15 21:00:56,157 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=384, metric=16.01 vs. limit=14.9125 2024-03-15 21:01:12,950 INFO [train_char.py:689] (0/2) Epoch 6, batch 450, loss[loss=0.1263, simple_loss=0.1735, pruned_loss=0.03958, over 24346.00 frames. ], tot_loss[loss=0.1331, simple_loss=0.1752, pruned_loss=0.04548, over 4325859.29 frames. 
], batch size: 180, lr: 3.28e-02, grad_scale: 32.0 2024-03-15 21:01:36,200 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.attention_skip_rate, batch_count=9983.333333333334, ans=0.025069444444444446 2024-03-15 21:01:38,549 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.947e+01 1.180e+02 1.287e+02 1.488e+02 2.013e+02, threshold=2.574e+02, percent-clipped=0.0 2024-03-15 21:01:51,307 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-15 21:01:54,307 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer1.min_positive, batch_count=10050.0, ans=0.025 2024-03-15 21:02:01,982 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module2.balancer1.prob, batch_count=10050.0, ans=0.125 2024-03-15 21:02:16,582 INFO [train_char.py:689] (0/2) Epoch 6, batch 500, loss[loss=0.1226, simple_loss=0.1698, pruned_loss=0.03771, over 24185.00 frames. ], tot_loss[loss=0.1334, simple_loss=0.1763, pruned_loss=0.04524, over 4441291.12 frames. ], batch size: 188, lr: 3.28e-02, grad_scale: 32.0 2024-03-15 21:02:21,718 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward3.hidden_balancer.prob, batch_count=10116.666666666666, ans=0.125 2024-03-15 21:02:25,339 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-6.pt 2024-03-15 21:03:13,382 INFO [train_char.py:689] (0/2) Epoch 7, batch 0, loss[loss=0.1165, simple_loss=0.1625, pruned_loss=0.03525, over 24325.00 frames. ], tot_loss[loss=0.1165, simple_loss=0.1625, pruned_loss=0.03525, over 24325.00 frames. ], batch size: 129, lr: 3.07e-02, grad_scale: 64.0 2024-03-15 21:03:13,383 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 21:03:25,714 INFO [train_char.py:721] (0/2) Epoch 7, validation: loss=0.08738, simple_loss=0.1405, pruned_loss=0.01713, over 657665.00 frames. 2024-03-15 21:03:25,714 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 21:03:43,173 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=10173.333333333334, ans=0.125 2024-03-15 21:04:07,865 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer1.prob, batch_count=10206.666666666666, ans=0.125 2024-03-15 21:04:16,564 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=384, metric=15.73 vs. limit=15.18 2024-03-15 21:04:17,319 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer2.prob, batch_count=10240.0, ans=0.125 2024-03-15 21:04:20,433 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=512, metric=15.25 vs. limit=15.18 2024-03-15 21:04:39,259 INFO [train_char.py:689] (0/2) Epoch 7, batch 50, loss[loss=0.09328, simple_loss=0.1374, pruned_loss=0.02458, over 24244.00 frames. ], tot_loss[loss=0.1203, simple_loss=0.1642, pruned_loss=0.03826, over 1081064.58 frames. 
], batch size: 116, lr: 3.07e-02, grad_scale: 64.0 2024-03-15 21:04:44,709 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=10306.666666666666, ans=0.125 2024-03-15 21:04:51,670 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.self_attn_weights.pos_emb_skip_rate, batch_count=10340.0, ans=0.0 2024-03-15 21:04:56,563 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.529e+01 1.149e+02 1.269e+02 1.425e+02 2.118e+02, threshold=2.537e+02, percent-clipped=0.0 2024-03-15 21:05:02,966 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=384, metric=4.86 vs. limit=11.3775 2024-03-15 21:05:19,391 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=384, metric=14.45 vs. limit=15.305 2024-03-15 21:05:25,388 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=10406.666666666666, ans=0.0 2024-03-15 21:05:30,740 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_skip_rate, batch_count=10440.0, ans=0.02316666666666667 2024-03-15 21:05:44,152 INFO [train_char.py:689] (0/2) Epoch 7, batch 100, loss[loss=0.1429, simple_loss=0.1961, pruned_loss=0.04483, over 24085.00 frames. ], tot_loss[loss=0.1235, simple_loss=0.1672, pruned_loss=0.03989, over 1909702.55 frames. ], batch size: 236, lr: 3.07e-02, grad_scale: 64.0 2024-03-15 21:06:47,112 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.self_attn1.whiten, num_groups=1, num_channels=192, metric=13.52 vs. limit=15.455 2024-03-15 21:06:52,541 INFO [train_char.py:689] (0/2) Epoch 7, batch 150, loss[loss=0.1459, simple_loss=0.1865, pruned_loss=0.05262, over 24297.00 frames. ], tot_loss[loss=0.1228, simple_loss=0.1659, pruned_loss=0.0399, over 2554210.24 frames. ], batch size: 297, lr: 3.06e-02, grad_scale: 64.0 2024-03-15 21:06:52,742 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.hidden_balancer.prob, batch_count=10640.0, ans=0.125 2024-03-15 21:07:13,148 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.543e+01 1.182e+02 1.309e+02 1.499e+02 2.178e+02, threshold=2.618e+02, percent-clipped=0.0 2024-03-15 21:07:17,342 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=10673.333333333334, ans=0.0 2024-03-15 21:07:27,615 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=10706.666666666666, ans=0.125 2024-03-15 21:07:45,621 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=10740.0, ans=0.125 2024-03-15 21:08:00,626 INFO [train_char.py:689] (0/2) Epoch 7, batch 200, loss[loss=0.1496, simple_loss=0.2017, pruned_loss=0.04873, over 24318.00 frames. ], tot_loss[loss=0.1224, simple_loss=0.1657, pruned_loss=0.03954, over 3060434.22 frames. 
], batch size: 267, lr: 3.06e-02, grad_scale: 64.0 2024-03-15 21:08:18,687 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=10840.0, ans=0.1916 2024-03-15 21:08:49,997 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer1.prob, batch_count=10940.0, ans=0.125 2024-03-15 21:08:58,908 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.ff2_skip_rate, batch_count=10940.0, ans=0.008491304347826087 2024-03-15 21:09:02,028 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=10.11 vs. limit=11.6025 2024-03-15 21:09:03,871 INFO [train_char.py:689] (0/2) Epoch 7, batch 250, loss[loss=0.1227, simple_loss=0.168, pruned_loss=0.03875, over 24259.00 frames. ], tot_loss[loss=0.1216, simple_loss=0.1649, pruned_loss=0.03914, over 3450352.26 frames. ], batch size: 328, lr: 3.06e-02, grad_scale: 64.0 2024-03-15 21:09:05,417 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=10973.333333333334, ans=0.19026666666666667 2024-03-15 21:09:23,133 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 9.070e+01 1.140e+02 1.283e+02 1.415e+02 2.671e+02, threshold=2.567e+02, percent-clipped=1.0 2024-03-15 21:09:56,341 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=6.55 vs. limit=7.7683333333333335 2024-03-15 21:10:08,313 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=11106.666666666666, ans=0.18893333333333334 2024-03-15 21:10:11,079 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer1.prob, batch_count=11106.666666666666, ans=0.125 2024-03-15 21:10:13,180 INFO [train_char.py:689] (0/2) Epoch 7, batch 300, loss[loss=0.09227, simple_loss=0.1394, pruned_loss=0.02259, over 24300.00 frames. ], tot_loss[loss=0.1218, simple_loss=0.1658, pruned_loss=0.03893, over 3752575.22 frames. ], batch size: 134, lr: 3.05e-02, grad_scale: 64.0 2024-03-15 21:10:43,516 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=11206.666666666666, ans=0.0 2024-03-15 21:11:10,506 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=12.28 vs. limit=11.7275 2024-03-15 21:11:16,070 INFO [train_char.py:689] (0/2) Epoch 7, batch 350, loss[loss=0.1298, simple_loss=0.1858, pruned_loss=0.03684, over 24124.00 frames. ], tot_loss[loss=0.1215, simple_loss=0.1653, pruned_loss=0.03883, over 3994711.97 frames. 
], batch size: 223, lr: 3.05e-02, grad_scale: 64.0 2024-03-15 21:11:31,228 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module1.balancer1.prob, batch_count=11340.0, ans=0.125 2024-03-15 21:11:34,731 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.866e+01 1.129e+02 1.284e+02 1.477e+02 2.549e+02, threshold=2.568e+02, percent-clipped=0.0 2024-03-15 21:11:47,270 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.balancer1.prob, batch_count=11373.333333333334, ans=0.125 2024-03-15 21:12:23,039 INFO [train_char.py:689] (0/2) Epoch 7, batch 400, loss[loss=0.1108, simple_loss=0.1557, pruned_loss=0.03289, over 24383.00 frames. ], tot_loss[loss=0.1213, simple_loss=0.1657, pruned_loss=0.03843, over 4183034.91 frames. ], batch size: 158, lr: 3.05e-02, grad_scale: 64.0 2024-03-15 21:12:32,058 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff3_skip_rate, batch_count=11473.333333333334, ans=0.00837536231884058 2024-03-15 21:12:41,375 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.75 vs. limit=4.726 2024-03-15 21:12:42,699 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=192, metric=9.68 vs. limit=10.753333333333334 2024-03-15 21:12:43,122 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer2.prob, batch_count=11506.666666666666, ans=0.125 2024-03-15 21:12:43,219 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=11506.666666666666, ans=0.18493333333333334 2024-03-15 21:13:06,941 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.balancer1.prob, batch_count=11573.333333333334, ans=0.125 2024-03-15 21:13:08,187 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.3.self_attn_weights, loss-sum=0.000e+00 2024-03-15 21:13:09,404 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer2.prob, batch_count=11573.333333333334, ans=0.125 2024-03-15 21:13:13,781 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.self_attn2.whiten, num_groups=1, num_channels=192, metric=15.13 vs. limit=16.18 2024-03-15 21:13:15,517 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=11606.666666666666, ans=0.125 2024-03-15 21:13:20,740 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.00 vs. limit=4.741 2024-03-15 21:13:25,549 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=11606.666666666666, ans=0.18393333333333334 2024-03-15 21:13:27,880 INFO [train_char.py:689] (0/2) Epoch 7, batch 450, loss[loss=0.1192, simple_loss=0.1744, pruned_loss=0.03203, over 24109.00 frames. ], tot_loss[loss=0.1229, simple_loss=0.1681, pruned_loss=0.03885, over 4329060.88 frames. 
], batch size: 199, lr: 3.04e-02, grad_scale: 64.0 2024-03-15 21:13:32,947 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=11640.0, ans=0.125 2024-03-15 21:13:34,221 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer2.prob, batch_count=11640.0, ans=0.125 2024-03-15 21:13:41,767 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.attention_skip_rate, batch_count=11673.333333333334, ans=0.018027777777777775 2024-03-15 21:13:44,131 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.510e+01 1.138e+02 1.275e+02 1.402e+02 1.953e+02, threshold=2.551e+02, percent-clipped=0.0 2024-03-15 21:13:52,481 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer1.max_abs, batch_count=11706.666666666666, ans=10.0 2024-03-15 21:13:54,960 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.bypass_mid.scale_min, batch_count=11706.666666666666, ans=0.49026666666666674 2024-03-15 21:13:59,237 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=384, metric=4.04 vs. limit=11.89 2024-03-15 21:14:04,156 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=11706.666666666666, ans=0.18293333333333334 2024-03-15 21:14:09,190 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff3_skip_rate, batch_count=11740.0, ans=0.008317391304347827 2024-03-15 21:14:18,890 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module2.balancer2.min_positive, batch_count=11773.333333333334, ans=0.05 2024-03-15 21:14:20,259 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=11773.333333333334, ans=0.18226666666666666 2024-03-15 21:14:29,304 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=3.17 vs. limit=11.915 2024-03-15 21:14:31,187 INFO [train_char.py:689] (0/2) Epoch 7, batch 500, loss[loss=0.1383, simple_loss=0.1871, pruned_loss=0.04482, over 24127.00 frames. ], tot_loss[loss=0.1234, simple_loss=0.1693, pruned_loss=0.03878, over 4441818.31 frames. ], batch size: 279, lr: 3.04e-02, grad_scale: 64.0 2024-03-15 21:14:36,398 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module1.balancer2.prob, batch_count=11806.666666666666, ans=0.125 2024-03-15 21:14:40,331 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-7.pt 2024-03-15 21:15:28,130 INFO [train_char.py:689] (0/2) Epoch 8, batch 0, loss[loss=0.1321, simple_loss=0.1766, pruned_loss=0.04377, over 24232.00 frames. ], tot_loss[loss=0.1321, simple_loss=0.1766, pruned_loss=0.04377, over 24232.00 frames. ], batch size: 266, lr: 2.86e-02, grad_scale: 64.0 2024-03-15 21:15:28,131 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 21:15:41,601 INFO [train_char.py:721] (0/2) Epoch 8, validation: loss=0.08204, simple_loss=0.1358, pruned_loss=0.01415, over 657665.00 frames. 
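The recurring optim.py WARNING lines report the optimizer's gradient-norm statistics: the five numbers are the min/25%/median/75%/max quartiles of recent per-step gradient norms, the clipping threshold is Clipping_scale times the median (2.0 * 1.2755e+02 = 2.551e+02 in the entry just above), and percent-clipped is the fraction of recent steps whose norm exceeded the threshold. A minimal sketch under those assumptions, with an assumed fixed-size history buffer and a plain rescale rule (icefall's ScaledAdam differs in detail):

```python
# Quartile-based gradient clipping sketch (buffer size and clip rule assumed).
from collections import deque

import torch

class QuartileClipper:
    def __init__(self, clipping_scale=2.0, history=128):
        self.clipping_scale = clipping_scale
        self.norms = deque(maxlen=history)  # recent per-step gradient norms

    def clip_(self, params):
        grads = [p.grad for p in params if p.grad is not None]
        norm = torch.norm(torch.stack([g.norm() for g in grads])).item()
        self.norms.append(norm)
        q = torch.quantile(torch.tensor(list(self.norms)),
                           torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
        threshold = self.clipping_scale * q[2].item()  # scale * median
        if norm > threshold:  # this step counts toward percent-clipped
            for g in grads:
                g.mul_(threshold / norm)
        return norm, threshold
```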
2024-03-15 21:15:41,602 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 21:16:04,282 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=11863.333333333334, ans=0.18136666666666668 2024-03-15 21:16:08,975 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=9.07 vs. limit=11.96125 2024-03-15 21:16:16,970 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=384, metric=5.65 vs. limit=11.96125 2024-03-15 21:16:39,625 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module2.balancer2.prob, batch_count=11963.333333333334, ans=0.125 2024-03-15 21:16:42,203 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.ff2_skip_rate, batch_count=11963.333333333334, ans=0.008268840579710146 2024-03-15 21:16:43,850 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=12.74 vs. limit=11.98625 2024-03-15 21:16:50,342 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_skip_rate, batch_count=11963.333333333334, ans=0.016819444444444442 2024-03-15 21:16:53,058 INFO [train_char.py:689] (0/2) Epoch 8, batch 50, loss[loss=0.08064, simple_loss=0.1259, pruned_loss=0.01771, over 24293.00 frames. ], tot_loss[loss=0.1107, simple_loss=0.1548, pruned_loss=0.03324, over 1079869.97 frames. ], batch size: 140, lr: 2.86e-02, grad_scale: 64.0 2024-03-15 21:17:01,123 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.530e+01 1.124e+02 1.230e+02 1.388e+02 1.986e+02, threshold=2.460e+02, percent-clipped=0.0 2024-03-15 21:17:01,492 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module1.balancer1.prob, batch_count=11996.666666666666, ans=0.125 2024-03-15 21:17:48,762 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=256, metric=3.50 vs. limit=12.036249999999999 2024-03-15 21:17:52,962 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=6.45 vs. limit=8.0325 2024-03-15 21:18:00,989 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.self_attn2.whiten, num_groups=1, num_channels=192, metric=14.46 vs. limit=16.5975 2024-03-15 21:18:05,215 INFO [train_char.py:689] (0/2) Epoch 8, batch 100, loss[loss=0.09351, simple_loss=0.1421, pruned_loss=0.02248, over 24306.00 frames. ], tot_loss[loss=0.1129, simple_loss=0.158, pruned_loss=0.03393, over 1906906.95 frames. ], batch size: 140, lr: 2.85e-02, grad_scale: 64.0 2024-03-15 21:18:09,397 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.bypass_mid.scale_min, batch_count=12163.333333333334, ans=0.47428333333333333 2024-03-15 21:18:32,843 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=6.85 vs. 
limit=8.057500000000001 2024-03-15 21:18:43,547 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=12230.0, ans=0.17769999999999997 2024-03-15 21:18:52,901 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=384, metric=17.33 vs. limit=16.697499999999998 2024-03-15 21:19:07,843 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass.skip_rate, batch_count=12296.666666666666, ans=0.04949747468305833 2024-03-15 21:19:09,134 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer2.min_positive, batch_count=12296.666666666666, ans=0.05 2024-03-15 21:19:10,523 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer1.prob, batch_count=12296.666666666666, ans=0.125 2024-03-15 21:19:14,093 INFO [train_char.py:689] (0/2) Epoch 8, batch 150, loss[loss=0.1261, simple_loss=0.1689, pruned_loss=0.04163, over 24212.00 frames. ], tot_loss[loss=0.1135, simple_loss=0.1587, pruned_loss=0.03417, over 2550211.09 frames. ], batch size: 311, lr: 2.85e-02, grad_scale: 64.0 2024-03-15 21:19:21,561 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.886e+01 1.136e+02 1.338e+02 1.602e+02 2.558e+02, threshold=2.676e+02, percent-clipped=2.0 2024-03-15 21:19:30,881 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.scale_min, batch_count=12363.333333333334, ans=0.46728333333333333 2024-03-15 21:19:40,043 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.out_combiner.scale_min, batch_count=12396.666666666666, ans=0.46611666666666673 2024-03-15 21:19:46,669 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.self_attn2.whiten, num_groups=1, num_channels=384, metric=15.55 vs. limit=16.7975 2024-03-15 21:19:50,221 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.ff2_skip_rate, batch_count=12396.666666666666, ans=0.00817463768115942 2024-03-15 21:19:51,443 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass.skip_rate, batch_count=12430.0, ans=0.035 2024-03-15 21:20:05,439 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.convnext.hidden_balancer.prob, batch_count=12463.333333333334, ans=0.125 2024-03-15 21:20:14,692 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer1.prob, batch_count=12463.333333333334, ans=0.125 2024-03-15 21:20:18,053 INFO [train_char.py:689] (0/2) Epoch 8, batch 200, loss[loss=0.1162, simple_loss=0.1568, pruned_loss=0.03782, over 24184.00 frames. ], tot_loss[loss=0.1134, simple_loss=0.1588, pruned_loss=0.03401, over 3050056.92 frames. ], batch size: 344, lr: 2.85e-02, grad_scale: 64.0 2024-03-15 21:20:19,613 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=12496.666666666666, ans=0.125 2024-03-15 21:20:44,481 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=384, metric=5.78 vs. 
limit=12.19875 2024-03-15 21:21:05,122 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=12596.666666666666, ans=0.014180555555555557 2024-03-15 21:21:10,220 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=12596.666666666666, ans=0.125 2024-03-15 21:21:25,416 INFO [train_char.py:689] (0/2) Epoch 8, batch 250, loss[loss=0.1244, simple_loss=0.1724, pruned_loss=0.03822, over 24131.00 frames. ], tot_loss[loss=0.1137, simple_loss=0.1596, pruned_loss=0.03387, over 3445362.55 frames. ], batch size: 279, lr: 2.84e-02, grad_scale: 64.0 2024-03-15 21:21:33,127 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.876e+01 1.081e+02 1.224e+02 1.402e+02 2.503e+02, threshold=2.448e+02, percent-clipped=0.0 2024-03-15 21:22:04,059 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=4.58 vs. limit=9.091999999999999 2024-03-15 21:22:32,680 INFO [train_char.py:689] (0/2) Epoch 8, batch 300, loss[loss=0.1022, simple_loss=0.1389, pruned_loss=0.03277, over 23991.00 frames. ], tot_loss[loss=0.1128, simple_loss=0.1589, pruned_loss=0.03341, over 3754967.41 frames. ], batch size: 381, lr: 2.84e-02, grad_scale: 64.0 2024-03-15 21:22:35,406 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff3_skip_rate, batch_count=12830.0, ans=0.008080434782608696 2024-03-15 21:23:16,970 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=12930.0, ans=0.1707 2024-03-15 21:23:32,221 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=12963.333333333334, ans=0.17036666666666667 2024-03-15 21:23:38,235 INFO [train_char.py:689] (0/2) Epoch 8, batch 350, loss[loss=0.1129, simple_loss=0.165, pruned_loss=0.03038, over 24156.00 frames. ], tot_loss[loss=0.1126, simple_loss=0.159, pruned_loss=0.03315, over 3995585.93 frames. ], batch size: 188, lr: 2.83e-02, grad_scale: 64.0 2024-03-15 21:23:45,776 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.133e+01 1.142e+02 1.443e+02 1.715e+02 2.690e+02, threshold=2.886e+02, percent-clipped=3.0 2024-03-15 21:24:17,616 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=192, metric=8.98 vs. limit=11.548333333333332 2024-03-15 21:24:21,960 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=13096.666666666666, ans=0.125 2024-03-15 21:24:31,187 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=12.14 vs. limit=12.42375 2024-03-15 21:24:41,464 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=13130.0, ans=0.1687 2024-03-15 21:24:44,913 INFO [train_char.py:689] (0/2) Epoch 8, batch 400, loss[loss=0.1151, simple_loss=0.1555, pruned_loss=0.03732, over 24266.00 frames. ], tot_loss[loss=0.1134, simple_loss=0.16, pruned_loss=0.0334, over 4182603.63 frames. 
], batch size: 328, lr: 2.83e-02, grad_scale: 64.0 2024-03-15 21:24:52,018 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=12.42 vs. limit=12.436250000000001 2024-03-15 21:24:59,123 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=13196.666666666666, ans=0.16803333333333334 2024-03-15 21:25:07,807 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=13196.666666666666, ans=0.16803333333333334 2024-03-15 21:25:14,105 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-15 21:25:17,686 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.balancer2.prob, batch_count=13230.0, ans=0.125 2024-03-15 21:25:21,492 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=13230.0, ans=0.125 2024-03-15 21:25:21,537 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.prob, batch_count=13230.0, ans=0.125 2024-03-15 21:25:37,504 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer1.prob, batch_count=13296.666666666666, ans=0.125 2024-03-15 21:25:46,308 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.scale_min, batch_count=13296.666666666666, ans=0.4346166666666667 2024-03-15 21:25:50,061 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/checkpoint-4000.pt 2024-03-15 21:25:52,708 INFO [train_char.py:689] (0/2) Epoch 8, batch 450, loss[loss=0.1307, simple_loss=0.1851, pruned_loss=0.03811, over 24100.00 frames. ], tot_loss[loss=0.1142, simple_loss=0.161, pruned_loss=0.03372, over 4328801.93 frames. ], batch size: 236, lr: 2.83e-02, grad_scale: 64.0 2024-03-15 21:26:00,097 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.686e+01 1.193e+02 1.318e+02 1.623e+02 2.694e+02, threshold=2.637e+02, percent-clipped=0.0 2024-03-15 21:26:17,707 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module2.balancer1.prob, batch_count=13396.666666666666, ans=0.125 2024-03-15 21:26:26,321 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.attention_skip_rate, batch_count=13396.666666666666, ans=0.010847222222222223 2024-03-15 21:26:26,402 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=13396.666666666666, ans=0.0 2024-03-15 21:26:27,692 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer2.prob, batch_count=13396.666666666666, ans=0.125 2024-03-15 21:26:35,158 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module2.balancer2.prob, batch_count=13430.0, ans=0.125 2024-03-15 21:26:56,755 INFO [train_char.py:689] (0/2) Epoch 8, batch 500, loss[loss=0.1191, simple_loss=0.1615, pruned_loss=0.03834, over 24287.00 frames. ], tot_loss[loss=0.1159, simple_loss=0.1633, pruned_loss=0.03427, over 4439820.68 frames. 
], batch size: 328, lr: 2.82e-02, grad_scale: 64.0 2024-03-15 21:26:58,157 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer1.prob, batch_count=13496.666666666666, ans=0.125 2024-03-15 21:27:03,434 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass_mid.scale_min, batch_count=13496.666666666666, ans=0.4276166666666667 2024-03-15 21:27:05,750 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-8.pt 2024-03-15 21:27:51,731 INFO [train_char.py:689] (0/2) Epoch 9, batch 0, loss[loss=0.107, simple_loss=0.1499, pruned_loss=0.03202, over 24208.00 frames. ], tot_loss[loss=0.107, simple_loss=0.1499, pruned_loss=0.03202, over 24208.00 frames. ], batch size: 311, lr: 2.67e-02, grad_scale: 64.0 2024-03-15 21:27:51,731 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 21:28:04,360 INFO [train_char.py:721] (0/2) Epoch 9, validation: loss=0.07904, simple_loss=0.134, pruned_loss=0.01202, over 657665.00 frames. 2024-03-15 21:28:04,361 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 21:28:24,452 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=13553.333333333334, ans=0.010194444444444443 2024-03-15 21:28:41,755 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.balancer2.prob, batch_count=13586.666666666666, ans=0.125 2024-03-15 21:28:41,781 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=13586.666666666666, ans=0.125 2024-03-15 21:28:46,985 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-15 21:28:59,725 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=13653.333333333334, ans=0.125 2024-03-15 21:29:11,360 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 9.371e+01 1.148e+02 1.267e+02 1.499e+02 2.749e+02, threshold=2.533e+02, percent-clipped=1.0 2024-03-15 21:29:12,754 INFO [train_char.py:689] (0/2) Epoch 9, batch 50, loss[loss=0.1049, simple_loss=0.1546, pruned_loss=0.02761, over 24227.00 frames. ], tot_loss[loss=0.104, simple_loss=0.1485, pruned_loss=0.02974, over 1082214.54 frames. ], batch size: 116, lr: 2.67e-02, grad_scale: 64.0 2024-03-15 21:29:17,703 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=512, metric=19.00 vs. 
limit=17.765 2024-03-15 21:29:22,345 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=13686.666666666666, ans=0.0 2024-03-15 21:29:24,942 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer1.min_positive, batch_count=13720.0, ans=0.025 2024-03-15 21:29:27,557 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-15 21:29:48,817 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=13753.333333333334, ans=0.125 2024-03-15 21:30:01,803 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer1.prob, batch_count=13786.666666666666, ans=0.125 2024-03-15 21:30:12,254 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.ff3_skip_rate, batch_count=13820.0, ans=0.007865217391304347 2024-03-15 21:30:17,341 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.skip_rate, batch_count=13820.0, ans=0.07 2024-03-15 21:30:22,102 INFO [train_char.py:689] (0/2) Epoch 9, batch 100, loss[loss=0.1028, simple_loss=0.1426, pruned_loss=0.03146, over 24139.00 frames. ], tot_loss[loss=0.1061, simple_loss=0.1517, pruned_loss=0.03027, over 1911179.57 frames. ], batch size: 362, lr: 2.66e-02, grad_scale: 64.0 2024-03-15 21:30:32,593 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer2.min_positive, batch_count=13853.333333333334, ans=0.05 2024-03-15 21:30:59,635 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=13953.333333333334, ans=0.16046666666666667 2024-03-15 21:31:00,940 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=13953.333333333334, ans=0.16046666666666667 2024-03-15 21:31:03,850 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=12.58 vs. limit=12.7325 2024-03-15 21:31:23,769 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer1.max_abs, batch_count=13986.666666666666, ans=10.0 2024-03-15 21:31:25,102 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff2_skip_rate, batch_count=13986.666666666666, ans=0.007828985507246377 2024-03-15 21:31:28,546 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.163e+01 1.095e+02 1.330e+02 1.584e+02 2.657e+02, threshold=2.661e+02, percent-clipped=2.0 2024-03-15 21:31:29,875 INFO [train_char.py:689] (0/2) Epoch 9, batch 150, loss[loss=0.0921, simple_loss=0.1342, pruned_loss=0.02503, over 24127.00 frames. ], tot_loss[loss=0.1054, simple_loss=0.1514, pruned_loss=0.02972, over 2555190.02 frames. ], batch size: 362, lr: 2.66e-02, grad_scale: 64.0 2024-03-15 21:31:46,060 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer2.min_abs, batch_count=14053.333333333334, ans=0.4108 2024-03-15 21:32:06,082 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=11.64 vs. 
limit=12.782499999999999 2024-03-15 21:32:33,646 INFO [train_char.py:689] (0/2) Epoch 9, batch 200, loss[loss=0.1132, simple_loss=0.1655, pruned_loss=0.03043, over 24315.00 frames. ], tot_loss[loss=0.1055, simple_loss=0.152, pruned_loss=0.02947, over 3057908.66 frames. ], batch size: 180, lr: 2.66e-02, grad_scale: 64.0 2024-03-15 21:32:47,239 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer1.prob, batch_count=14186.666666666666, ans=0.125 2024-03-15 21:32:47,333 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module2.balancer2.prob, batch_count=14186.666666666666, ans=0.125 2024-03-15 21:32:51,591 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=384, metric=17.52 vs. limit=18.165 2024-03-15 21:32:52,257 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=14220.0, ans=0.125 2024-03-15 21:32:57,412 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.balancer1.prob, batch_count=14220.0, ans=0.125 2024-03-15 21:33:00,693 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=9.61 vs. limit=12.8325 2024-03-15 21:33:06,274 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff3_skip_rate, batch_count=14253.333333333334, ans=0.007771014492753623 2024-03-15 21:33:10,063 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer1.prob, batch_count=14253.333333333334, ans=0.125 2024-03-15 21:33:21,599 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=14286.666666666666, ans=0.15713333333333335 2024-03-15 21:33:25,464 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=14286.666666666666, ans=0.007138888888888889 2024-03-15 21:33:26,837 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=14320.0, ans=0.15680000000000002 2024-03-15 21:33:39,560 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.662e+01 1.181e+02 1.353e+02 1.596e+02 2.674e+02, threshold=2.706e+02, percent-clipped=1.0 2024-03-15 21:33:40,903 INFO [train_char.py:689] (0/2) Epoch 9, batch 250, loss[loss=0.1013, simple_loss=0.1486, pruned_loss=0.02699, over 24364.00 frames. ], tot_loss[loss=0.1056, simple_loss=0.1525, pruned_loss=0.02937, over 3440221.52 frames. 
], batch size: 152, lr: 2.65e-02, grad_scale: 64.0 2024-03-15 21:33:49,875 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer2.prob, batch_count=14353.333333333334, ans=0.125 2024-03-15 21:33:56,314 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.bypass_mid.scale_min, batch_count=14386.666666666666, ans=0.39646666666666674 2024-03-15 21:33:58,998 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=14386.666666666666, ans=0.15613333333333335 2024-03-15 21:34:02,652 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=14386.666666666666, ans=0.00672222222222222 2024-03-15 21:34:25,811 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-15 21:34:46,657 INFO [train_char.py:689] (0/2) Epoch 9, batch 300, loss[loss=0.0954, simple_loss=0.141, pruned_loss=0.02492, over 24372.00 frames. ], tot_loss[loss=0.1064, simple_loss=0.1539, pruned_loss=0.02946, over 3750383.97 frames. ], batch size: 158, lr: 2.65e-02, grad_scale: 64.0 2024-03-15 21:34:49,936 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=9.39 vs. limit=12.945 2024-03-15 21:34:50,555 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.ff2_skip_rate, batch_count=14520.0, ans=0.007713043478260869 2024-03-15 21:35:27,545 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.skip_rate, batch_count=14620.0, ans=0.07 2024-03-15 21:35:30,102 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer2.prob, batch_count=14620.0, ans=0.125 2024-03-15 21:35:40,232 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.balancer.min_positive, batch_count=14653.333333333334, ans=0.10346666666666665 2024-03-15 21:35:48,150 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.balancer2.prob, batch_count=14653.333333333334, ans=0.125 2024-03-15 21:35:49,308 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.ff2_skip_rate, batch_count=14653.333333333334, ans=0.007684057971014492 2024-03-15 21:35:51,565 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.453e+01 1.119e+02 1.274e+02 1.597e+02 2.203e+02, threshold=2.548e+02, percent-clipped=0.0 2024-03-15 21:35:52,845 INFO [train_char.py:689] (0/2) Epoch 9, batch 350, loss[loss=0.09118, simple_loss=0.1388, pruned_loss=0.02177, over 24414.00 frames. ], tot_loss[loss=0.1066, simple_loss=0.1543, pruned_loss=0.02942, over 3992831.92 frames. 
], batch size: 158, lr: 2.65e-02, grad_scale: 64.0 2024-03-15 21:35:55,643 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=14686.666666666666, ans=0.15313333333333334 2024-03-15 21:36:18,894 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=14753.333333333334, ans=0.15246666666666667 2024-03-15 21:36:26,447 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer1.prob, batch_count=14753.333333333334, ans=0.125 2024-03-15 21:36:34,211 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=14786.666666666666, ans=0.125 2024-03-15 21:36:44,146 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=14820.0, ans=0.15180000000000002 2024-03-15 21:36:57,659 INFO [train_char.py:689] (0/2) Epoch 9, batch 400, loss[loss=0.1066, simple_loss=0.1588, pruned_loss=0.02719, over 24263.00 frames. ], tot_loss[loss=0.1065, simple_loss=0.1547, pruned_loss=0.02919, over 4181146.71 frames. ], batch size: 296, lr: 2.64e-02, grad_scale: 64.0 2024-03-15 21:37:00,993 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer2.prob, batch_count=14853.333333333334, ans=0.125 2024-03-15 21:37:10,491 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.conv_module2.whiten, num_groups=1, num_channels=192, metric=4.53 vs. limit=13.07 2024-03-15 21:37:11,052 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass.scale_min, batch_count=14886.666666666666, ans=0.3789666666666667 2024-03-15 21:37:14,014 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.39 vs. limit=5.2330000000000005 2024-03-15 21:37:23,513 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_skip_rate, batch_count=14920.0, ans=0.004500000000000004 2024-03-15 21:37:24,850 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=14920.0, ans=0.15080000000000002 2024-03-15 21:37:29,894 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.balancer1.prob, batch_count=14920.0, ans=0.125 2024-03-15 21:37:45,803 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=256, metric=4.22 vs. limit=13.1075 2024-03-15 21:37:57,478 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass_mid.scale_min, batch_count=14986.666666666666, ans=0.3754666666666667 2024-03-15 21:38:02,044 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.080e+01 1.138e+02 1.304e+02 1.631e+02 3.170e+02, threshold=2.609e+02, percent-clipped=2.0 2024-03-15 21:38:03,332 INFO [train_char.py:689] (0/2) Epoch 9, batch 450, loss[loss=0.1126, simple_loss=0.1612, pruned_loss=0.032, over 24348.00 frames. ], tot_loss[loss=0.1077, simple_loss=0.1564, pruned_loss=0.02944, over 4328861.38 frames. 
], batch size: 297, lr: 2.64e-02, grad_scale: 64.0 2024-03-15 21:38:06,042 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.attention_skip_rate, batch_count=15020.0, ans=0.004083333333333335 2024-03-15 21:38:12,081 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-15 21:38:16,845 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.attention_skip_rate, batch_count=15053.333333333334, ans=0.003944444444444438 2024-03-15 21:38:28,223 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=15086.666666666666, ans=0.14913333333333334 2024-03-15 21:38:28,320 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.2.self_attn_weights, loss-sum=0.000e+00 2024-03-15 21:38:29,475 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=15086.666666666666, ans=0.14913333333333334 2024-03-15 21:38:34,361 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass_mid.scale_min, batch_count=15086.666666666666, ans=0.37196666666666667 2024-03-15 21:39:00,795 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=384, metric=16.36 vs. limit=18.865000000000002 2024-03-15 21:39:08,078 INFO [train_char.py:689] (0/2) Epoch 9, batch 500, loss[loss=0.1201, simple_loss=0.1731, pruned_loss=0.03353, over 24037.00 frames. ], tot_loss[loss=0.1099, simple_loss=0.1594, pruned_loss=0.03018, over 4439981.73 frames. ], batch size: 236, lr: 2.63e-02, grad_scale: 64.0 2024-03-15 21:39:10,827 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.balancer2.prob, batch_count=15186.666666666666, ans=0.125 2024-03-15 21:39:16,675 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-9.pt 2024-03-15 21:40:06,715 INFO [train_char.py:689] (0/2) Epoch 10, batch 0, loss[loss=0.1092, simple_loss=0.1613, pruned_loss=0.02852, over 24209.00 frames. ], tot_loss[loss=0.1092, simple_loss=0.1613, pruned_loss=0.02852, over 24209.00 frames. ], batch size: 212, lr: 2.50e-02, grad_scale: 64.0 2024-03-15 21:40:06,716 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 21:40:19,993 INFO [train_char.py:721] (0/2) Epoch 10, validation: loss=0.07731, simple_loss=0.1327, pruned_loss=0.01097, over 657665.00 frames. 2024-03-15 21:40:19,994 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 21:40:56,022 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=15276.666666666666, ans=0.0 2024-03-15 21:41:01,247 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module1.balancer1.prob, batch_count=15310.0, ans=0.125 2024-03-15 21:41:18,775 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 1.020e+02 1.181e+02 1.394e+02 1.702e+02 3.976e+02, threshold=2.787e+02, percent-clipped=2.0 2024-03-15 21:41:19,595 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=12.14 vs. limit=13.25375 2024-03-15 21:41:29,307 INFO [train_char.py:689] (0/2) Epoch 10, batch 50, loss[loss=0.07896, simple_loss=0.1271, pruned_loss=0.01543, over 24334.00 frames. 
], tot_loss[loss=0.1013, simple_loss=0.1498, pruned_loss=0.02639, over 1082486.34 frames. ], batch size: 129, lr: 2.50e-02, grad_scale: 64.0 2024-03-15 21:42:18,416 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=6.50 vs. limit=5.3215 2024-03-15 21:42:23,071 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=15476.666666666666, ans=0.14523333333333333 2024-03-15 21:42:30,146 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.28 vs. limit=5.326499999999999 2024-03-15 21:42:42,177 INFO [train_char.py:689] (0/2) Epoch 10, batch 100, loss[loss=0.08628, simple_loss=0.136, pruned_loss=0.01827, over 24394.00 frames. ], tot_loss[loss=0.1009, simple_loss=0.1499, pruned_loss=0.02598, over 1911611.39 frames. ], batch size: 152, lr: 2.50e-02, grad_scale: 64.0 2024-03-15 21:42:51,397 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=15543.333333333334, ans=0.125 2024-03-15 21:43:35,745 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.673e+01 1.133e+02 1.390e+02 1.768e+02 3.155e+02, threshold=2.779e+02, percent-clipped=3.0 2024-03-15 21:43:46,327 INFO [train_char.py:689] (0/2) Epoch 10, batch 150, loss[loss=0.08672, simple_loss=0.1368, pruned_loss=0.01833, over 24369.00 frames. ], tot_loss[loss=0.1013, simple_loss=0.15, pruned_loss=0.02633, over 2557478.38 frames. ], batch size: 135, lr: 2.49e-02, grad_scale: 64.0 2024-03-15 21:43:53,095 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_skip_rate, batch_count=15710.0, ans=0.001208333333333339 2024-03-15 21:44:05,687 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.balancer2.prob, batch_count=15743.333333333334, ans=0.125 2024-03-15 21:44:36,004 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.ff2_skip_rate, batch_count=15843.333333333334, ans=0.00742536231884058 2024-03-15 21:44:36,089 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.2.self_attn_weights, loss-sum=0.000e+00 2024-03-15 21:44:52,410 INFO [train_char.py:689] (0/2) Epoch 10, batch 200, loss[loss=0.09204, simple_loss=0.1449, pruned_loss=0.01958, over 24404.00 frames. ], tot_loss[loss=0.1027, simple_loss=0.1518, pruned_loss=0.02678, over 3062133.87 frames. 
], batch size: 158, lr: 2.49e-02, grad_scale: 64.0 2024-03-15 21:45:12,959 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.balancer1.prob, batch_count=15910.0, ans=0.125 2024-03-15 21:45:18,110 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module2.balancer2.prob, batch_count=15943.333333333334, ans=0.125 2024-03-15 21:45:18,139 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module1.balancer1.prob, batch_count=15943.333333333334, ans=0.125 2024-03-15 21:45:35,336 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.attention_skip_rate, batch_count=15976.666666666666, ans=9.722222222222077e-05 2024-03-15 21:45:37,890 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_skip_rate, batch_count=15976.666666666666, ans=9.722222222222077e-05 2024-03-15 21:45:45,993 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=16010.0, ans=0.13990000000000002 2024-03-15 21:45:50,705 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.242e+01 1.189e+02 1.412e+02 1.879e+02 4.191e+02, threshold=2.824e+02, percent-clipped=6.0 2024-03-15 21:45:56,058 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=16010.0, ans=0.125 2024-03-15 21:45:59,534 INFO [train_char.py:689] (0/2) Epoch 10, batch 250, loss[loss=0.1087, simple_loss=0.1584, pruned_loss=0.02954, over 24209.00 frames. ], tot_loss[loss=0.1032, simple_loss=0.1522, pruned_loss=0.02712, over 3453061.64 frames. ], batch size: 311, lr: 2.49e-02, grad_scale: 32.0 2024-03-15 21:45:59,833 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.balancer1.prob, batch_count=16043.333333333334, ans=0.125 2024-03-15 21:46:09,644 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.conv.8.prob, batch_count=16043.333333333334, ans=0.125 2024-03-15 21:46:10,956 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.scale_min, batch_count=16076.666666666666, ans=0.3373166666666667 2024-03-15 21:46:13,795 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=384, metric=16.33 vs. limit=19.557499999999997 2024-03-15 21:46:31,760 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_skip_rate, batch_count=16110.0, ans=0.0 2024-03-15 21:46:39,285 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=16143.333333333334, ans=0.125 2024-03-15 21:47:01,802 INFO [train_char.py:689] (0/2) Epoch 10, batch 300, loss[loss=0.09973, simple_loss=0.1526, pruned_loss=0.02343, over 24370.00 frames. ], tot_loss[loss=0.1041, simple_loss=0.1536, pruned_loss=0.02731, over 3752345.34 frames. ], batch size: 172, lr: 2.48e-02, grad_scale: 32.0 2024-03-15 21:47:02,000 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer1.prob, batch_count=16210.0, ans=0.125 2024-03-15 21:47:02,504 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.49 vs. 
limit=5.4315 2024-03-15 21:47:11,494 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.whiten, num_groups=1, num_channels=512, metric=7.64 vs. limit=10.484 2024-03-15 21:47:25,637 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=14.41 vs. limit=19.682499999999997 2024-03-15 21:47:35,416 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_skip_rate, batch_count=16276.666666666666, ans=0.0 2024-03-15 21:47:43,184 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.ff3_skip_rate, batch_count=16310.0, ans=0.0073239130434782605 2024-03-15 21:47:50,081 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=12.51 vs. limit=13.61625 2024-03-15 21:47:59,221 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=16343.333333333334, ans=0.13656666666666667 2024-03-15 21:48:02,751 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.345e+01 1.095e+02 1.275e+02 1.585e+02 3.583e+02, threshold=2.551e+02, percent-clipped=3.0 2024-03-15 21:48:06,864 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=16343.333333333334, ans=0.13656666666666667 2024-03-15 21:48:08,630 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=8.44 vs. limit=9.085833333333333 2024-03-15 21:48:11,643 INFO [train_char.py:689] (0/2) Epoch 10, batch 350, loss[loss=0.1277, simple_loss=0.1821, pruned_loss=0.03669, over 24213.00 frames. ], tot_loss[loss=0.104, simple_loss=0.1538, pruned_loss=0.02707, over 3993721.94 frames. ], batch size: 224, lr: 2.48e-02, grad_scale: 32.0 2024-03-15 21:48:35,376 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module2.balancer2.min_abs, batch_count=16443.333333333332, ans=0.44665 2024-03-15 21:48:49,179 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.nonlin_attention.balancer.prob, batch_count=16476.666666666668, ans=0.125 2024-03-15 21:49:02,415 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.60 vs. limit=5.4765 2024-03-15 21:49:16,149 INFO [train_char.py:689] (0/2) Epoch 10, batch 400, loss[loss=0.08107, simple_loss=0.1295, pruned_loss=0.01632, over 24305.00 frames. ], tot_loss[loss=0.1036, simple_loss=0.1535, pruned_loss=0.02684, over 4181669.16 frames. 
], batch size: 146, lr: 2.47e-02, grad_scale: 32.0 2024-03-15 21:49:37,156 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.nonlin_attention.balancer.prob, batch_count=16576.666666666668, ans=0.125 2024-03-15 21:49:49,771 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.nonlin_attention.balancer.prob, batch_count=16610.0, ans=0.125 2024-03-15 21:49:51,008 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer1.prob, batch_count=16610.0, ans=0.125 2024-03-15 21:50:12,483 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 9.049e+01 1.160e+02 1.425e+02 1.732e+02 3.861e+02, threshold=2.849e+02, percent-clipped=2.0 2024-03-15 21:50:15,371 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=16676.666666666668, ans=0.13323333333333331 2024-03-15 21:50:22,492 INFO [train_char.py:689] (0/2) Epoch 10, batch 450, loss[loss=0.09189, simple_loss=0.1457, pruned_loss=0.01906, over 24362.00 frames. ], tot_loss[loss=0.1039, simple_loss=0.1539, pruned_loss=0.027, over 4327843.81 frames. ], batch size: 172, lr: 2.47e-02, grad_scale: 32.0 2024-03-15 21:50:33,862 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=16743.333333333332, ans=0.125 2024-03-15 21:50:38,776 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=16743.333333333332, ans=0.125 2024-03-15 21:50:43,025 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.10 vs. limit=5.5115 2024-03-15 21:50:46,940 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=6.73 vs. limit=13.778749999999999 2024-03-15 21:51:06,722 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward3.hidden_balancer.prob, batch_count=16810.0, ans=0.125 2024-03-15 21:51:13,019 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=16843.333333333332, ans=0.125 2024-03-15 21:51:25,005 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass_mid.scale_min, batch_count=16843.333333333332, ans=0.31048333333333344 2024-03-15 21:51:27,131 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=17.03 vs. limit=13.82875 2024-03-15 21:51:27,817 INFO [train_char.py:689] (0/2) Epoch 10, batch 500, loss[loss=0.1115, simple_loss=0.1667, pruned_loss=0.02817, over 24121.00 frames. ], tot_loss[loss=0.1055, simple_loss=0.1562, pruned_loss=0.02737, over 4440152.51 frames. ], batch size: 251, lr: 2.47e-02, grad_scale: 32.0 2024-03-15 21:51:37,008 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-10.pt 2024-03-15 21:52:28,938 INFO [train_char.py:689] (0/2) Epoch 11, batch 0, loss[loss=0.09789, simple_loss=0.1469, pruned_loss=0.02444, over 24293.00 frames. ], tot_loss[loss=0.09789, simple_loss=0.1469, pruned_loss=0.02444, over 24293.00 frames. 
], batch size: 172, lr: 2.35e-02, grad_scale: 32.0 2024-03-15 21:52:28,939 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 21:52:42,715 INFO [train_char.py:721] (0/2) Epoch 11, validation: loss=0.07449, simple_loss=0.1293, pruned_loss=0.009864, over 657665.00 frames. 2024-03-15 21:52:42,716 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 21:52:49,976 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.conv.5.prob, batch_count=16900.0, ans=0.125 2024-03-15 21:53:10,041 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer2.prob, batch_count=16966.666666666668, ans=0.125 2024-03-15 21:53:13,923 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=16966.666666666668, ans=0.1303333333333333 2024-03-15 21:53:22,081 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=17000.0, ans=0.13 2024-03-15 21:53:23,532 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass.scale_min, batch_count=17000.0, ans=0.30500000000000005 2024-03-15 21:53:24,979 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module2.balancer2.min_abs, batch_count=17000.0, ans=0.455 2024-03-15 21:53:31,279 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.751e+01 1.149e+02 1.354e+02 1.675e+02 2.945e+02, threshold=2.709e+02, percent-clipped=2.0 2024-03-15 21:53:31,778 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer1.prob, batch_count=17000.0, ans=0.125 2024-03-15 21:53:34,469 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer2.prob, batch_count=17000.0, ans=0.125 2024-03-15 21:53:35,043 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=512, metric=19.20 vs. limit=20.25 2024-03-15 21:53:40,309 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=512, metric=6.26 vs. limit=13.8875 2024-03-15 21:53:50,569 INFO [train_char.py:689] (0/2) Epoch 11, batch 50, loss[loss=0.09284, simple_loss=0.1376, pruned_loss=0.02404, over 24168.00 frames. ], tot_loss[loss=0.09302, simple_loss=0.1405, pruned_loss=0.02275, over 1088263.58 frames. 
], batch size: 363, lr: 2.35e-02, grad_scale: 32.0 2024-03-15 21:54:12,061 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.2.self_attn_weights, loss-sum=0.000e+00 2024-03-15 21:54:17,448 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_skip_rate, batch_count=17100.0, ans=0.0 2024-03-15 21:54:26,683 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward2.hidden_balancer.prob, batch_count=17133.333333333332, ans=0.125 2024-03-15 21:54:56,231 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer2.prob, batch_count=17200.0, ans=0.125 2024-03-15 21:54:56,239 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=17200.0, ans=0.128 2024-03-15 21:55:05,046 INFO [train_char.py:689] (0/2) Epoch 11, batch 100, loss[loss=0.08061, simple_loss=0.1228, pruned_loss=0.01921, over 23985.00 frames. ], tot_loss[loss=0.09313, simple_loss=0.1414, pruned_loss=0.02244, over 1914331.93 frames. ], batch size: 381, lr: 2.35e-02, grad_scale: 32.0 2024-03-15 21:55:05,668 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=6.95 vs. limit=9.308333333333334 2024-03-15 21:55:13,074 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_skip_rate, batch_count=17233.333333333332, ans=0.0 2024-03-15 21:55:26,433 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.self_attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=5.63 vs. limit=5.59 2024-03-15 21:55:29,860 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward2.hidden_balancer.prob, batch_count=17300.0, ans=0.125 2024-03-15 21:55:31,123 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-15 21:55:32,329 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.skip_rate, batch_count=17300.0, ans=0.07 2024-03-15 21:55:37,364 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-15 21:55:51,848 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.443e+01 1.069e+02 1.293e+02 1.619e+02 2.642e+02, threshold=2.585e+02, percent-clipped=0.0 2024-03-15 21:55:53,546 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.ff2_skip_rate, batch_count=17333.333333333332, ans=0.007101449275362319 2024-03-15 21:55:56,242 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer2.prob, batch_count=17366.666666666668, ans=0.125 2024-03-15 21:55:57,998 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=512, metric=3.63 vs. limit=14.012500000000001 2024-03-15 21:56:10,417 INFO [train_char.py:689] (0/2) Epoch 11, batch 150, loss[loss=0.09407, simple_loss=0.1324, pruned_loss=0.02786, over 23984.00 frames. ], tot_loss[loss=0.09409, simple_loss=0.1431, pruned_loss=0.02252, over 2561828.63 frames. 
], batch size: 407, lr: 2.34e-02, grad_scale: 32.0 2024-03-15 21:56:21,508 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.self_attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=5.56 vs. limit=5.609999999999999 2024-03-15 21:56:24,883 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-15 21:56:34,136 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.ff3_skip_rate, batch_count=17433.333333333332, ans=0.0070797101449275365 2024-03-15 21:56:55,024 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.nonlin_attention.balancer.prob, batch_count=17500.0, ans=0.125 2024-03-15 21:57:02,741 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.max_abs, batch_count=17500.0, ans=10.0 2024-03-15 21:57:06,604 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=17533.333333333332, ans=0.007057971014492754 2024-03-15 21:57:22,224 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.hidden_balancer.prob, batch_count=17533.333333333332, ans=0.125 2024-03-15 21:57:24,577 INFO [train_char.py:689] (0/2) Epoch 11, batch 200, loss[loss=0.1018, simple_loss=0.1494, pruned_loss=0.02708, over 21684.00 frames. ], tot_loss[loss=0.09346, simple_loss=0.1415, pruned_loss=0.02269, over 3060206.61 frames. ], batch size: 86, lr: 2.34e-02, grad_scale: 32.0 2024-03-15 21:58:05,846 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer1.prob, batch_count=17666.666666666668, ans=0.125 2024-03-15 21:58:10,561 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.157e+01 1.128e+02 1.430e+02 1.869e+02 3.901e+02, threshold=2.860e+02, percent-clipped=2.0 2024-03-15 21:58:17,506 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=17700.0, ans=0.123 2024-03-15 21:58:28,763 INFO [train_char.py:689] (0/2) Epoch 11, batch 250, loss[loss=0.1166, simple_loss=0.1695, pruned_loss=0.03186, over 24091.00 frames. ], tot_loss[loss=0.09516, simple_loss=0.1437, pruned_loss=0.0233, over 3451271.45 frames. ], batch size: 279, lr: 2.34e-02, grad_scale: 32.0 2024-03-15 21:59:08,159 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module2.balancer2.prob, batch_count=17833.333333333332, ans=0.125 2024-03-15 21:59:10,666 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module1.balancer2.prob, batch_count=17833.333333333332, ans=0.125 2024-03-15 21:59:22,138 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.17 vs. limit=5.68 2024-03-15 21:59:33,011 INFO [train_char.py:689] (0/2) Epoch 11, batch 300, loss[loss=0.08775, simple_loss=0.1348, pruned_loss=0.02036, over 24371.00 frames. ], tot_loss[loss=0.09563, simple_loss=0.1444, pruned_loss=0.02342, over 3752177.55 frames. 
], batch size: 158, lr: 2.33e-02, grad_scale: 32.0 2024-03-15 21:59:41,350 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_skip_rate, batch_count=17900.0, ans=0.0 2024-03-15 21:59:42,500 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=17900.0, ans=0.12100000000000002 2024-03-15 21:59:43,902 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-15 21:59:48,729 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=384, metric=9.99 vs. limit=14.2125 2024-03-15 22:00:02,510 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.self_attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=5.52 vs. limit=5.6899999999999995 2024-03-15 22:00:07,173 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=17966.666666666668, ans=0.125 2024-03-15 22:00:12,312 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=17966.666666666668, ans=0.12033333333333332 2024-03-15 22:00:21,520 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=18000.0, ans=0.12000000000000002 2024-03-15 22:00:25,143 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.582e+01 1.169e+02 1.362e+02 1.542e+02 2.807e+02, threshold=2.725e+02, percent-clipped=0.0 2024-03-15 22:00:43,047 INFO [train_char.py:689] (0/2) Epoch 11, batch 350, loss[loss=0.08948, simple_loss=0.1313, pruned_loss=0.02383, over 24150.00 frames. ], tot_loss[loss=0.09812, simple_loss=0.1479, pruned_loss=0.02415, over 3981778.68 frames. ], batch size: 362, lr: 2.33e-02, grad_scale: 32.0 2024-03-15 22:01:19,002 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module2.balancer2.prob, batch_count=18133.333333333332, ans=0.125 2024-03-15 22:01:25,498 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=18166.666666666668, ans=0.125 2024-03-15 22:01:35,907 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=11.17 vs. limit=14.325 2024-03-15 22:01:36,751 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.balancer2.prob, batch_count=18200.0, ans=0.125 2024-03-15 22:01:44,951 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=18200.0, ans=0.11800000000000002 2024-03-15 22:01:49,363 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=14.60 vs. limit=14.325 2024-03-15 22:01:50,961 INFO [train_char.py:689] (0/2) Epoch 11, batch 400, loss[loss=0.09933, simple_loss=0.1459, pruned_loss=0.02638, over 24216.00 frames. ], tot_loss[loss=0.09889, simple_loss=0.1494, pruned_loss=0.02421, over 4173318.96 frames. 
], batch size: 328, lr: 2.32e-02, grad_scale: 32.0 2024-03-15 22:01:56,153 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=18233.333333333332, ans=0.125 2024-03-15 22:02:05,251 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-15 22:02:20,662 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.bypass.scale_min, batch_count=18300.0, ans=0.25950000000000006 2024-03-15 22:02:36,415 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.93 vs. limit=5.75 2024-03-15 22:02:36,910 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 9.007e+01 1.120e+02 1.360e+02 1.680e+02 3.491e+02, threshold=2.721e+02, percent-clipped=4.0 2024-03-15 22:02:57,080 INFO [train_char.py:689] (0/2) Epoch 11, batch 450, loss[loss=0.09757, simple_loss=0.1566, pruned_loss=0.01925, over 24370.00 frames. ], tot_loss[loss=0.09972, simple_loss=0.1507, pruned_loss=0.02439, over 4320764.14 frames. ], batch size: 172, lr: 2.32e-02, grad_scale: 32.0 2024-03-15 22:03:17,892 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.whiten, num_groups=1, num_channels=384, metric=3.63 vs. limit=11.373333333333333 2024-03-15 22:03:20,553 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=13.05 vs. limit=14.4125 2024-03-15 22:03:30,258 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward2.hidden_balancer.prob, batch_count=18466.666666666668, ans=0.125 2024-03-15 22:04:01,547 INFO [train_char.py:689] (0/2) Epoch 11, batch 500, loss[loss=0.1057, simple_loss=0.1578, pruned_loss=0.02677, over 24199.00 frames. ], tot_loss[loss=0.1011, simple_loss=0.1531, pruned_loss=0.02459, over 4433648.02 frames. ], batch size: 212, lr: 2.32e-02, grad_scale: 32.0 2024-03-15 22:04:10,776 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-11.pt 2024-03-15 22:05:03,093 INFO [train_char.py:689] (0/2) Epoch 12, batch 0, loss[loss=0.1034, simple_loss=0.1558, pruned_loss=0.02551, over 24222.00 frames. ], tot_loss[loss=0.1034, simple_loss=0.1558, pruned_loss=0.02551, over 24222.00 frames. ], batch size: 212, lr: 2.22e-02, grad_scale: 32.0 2024-03-15 22:05:03,094 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 22:05:11,881 INFO [zipformer.py:1858] (0/2) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.2977, 5.2920, 5.3164, 4.9902], device='cuda:0') 2024-03-15 22:05:16,703 INFO [train_char.py:721] (0/2) Epoch 12, validation: loss=0.07356, simple_loss=0.1286, pruned_loss=0.009258, over 657665.00 frames. 
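A note on the loss fields in the records above: in every training and validation entry of this log, the reported loss is reproduced by 0.5 * simple_loss + pruned_loss, e.g. for Epoch 12, batch 0: 0.5 * 0.1558 + 0.02551 = 0.10341, matching the logged 0.1034. A minimal sketch of that consistency check follows; it is not part of train_char.py, and the tuples are copied from the Epoch 12 records above.

# Consistency check: logged `loss` == 0.5 * simple_loss + pruned_loss,
# using (loss, simple_loss, pruned_loss) values copied from the records above.
records = [
    (0.1034, 0.1558, 0.02551),    # Epoch 12, batch 0 (tot_loss)
    (0.07356, 0.1286, 0.009258),  # Epoch 12, validation
]
for loss, simple_loss, pruned_loss in records:
    reconstructed = 0.5 * simple_loss + pruned_loss
    assert abs(reconstructed - loss) < 5e-4, (loss, reconstructed)
    print(f"logged={loss:.5f}  0.5*simple+pruned={reconstructed:.5f}")

The same decomposition holds for the training and validation entries of the other epochs in this section (e.g. Epoch 11, batch 100: 0.5 * 0.1414 + 0.02244 = 0.09314 against the logged 0.09313).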
2024-03-15 22:05:16,704 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 22:05:16,956 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_skip_rate, batch_count=18590.0, ans=0.0 2024-03-15 22:05:26,291 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.2.self_attn_weights, loss-sum=0.000e+00 2024-03-15 22:05:32,945 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass.skip_rate, batch_count=18623.333333333332, ans=0.07 2024-03-15 22:05:34,248 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff3_skip_rate, batch_count=18623.333333333332, ans=0.006821014492753624 2024-03-15 22:05:48,302 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module1.balancer2.prob, batch_count=18656.666666666668, ans=0.125 2024-03-15 22:06:01,478 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.627e+01 1.040e+02 1.285e+02 1.688e+02 2.682e+02, threshold=2.570e+02, percent-clipped=0.0 2024-03-15 22:06:19,373 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=18723.333333333332, ans=0.11276666666666668 2024-03-15 22:06:29,901 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.conv_module2.whiten, num_groups=1, num_channels=384, metric=13.85 vs. limit=14.52125 2024-03-15 22:06:34,475 INFO [train_char.py:689] (0/2) Epoch 12, batch 50, loss[loss=0.08466, simple_loss=0.1302, pruned_loss=0.01958, over 24385.00 frames. ], tot_loss[loss=0.09323, simple_loss=0.1429, pruned_loss=0.02176, over 1088799.35 frames. ], batch size: 152, lr: 2.22e-02, grad_scale: 16.0 2024-03-15 22:06:50,895 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=18790.0, ans=0.125 2024-03-15 22:06:56,478 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.nonlin_attention.balancer.prob, batch_count=18790.0, ans=0.125 2024-03-15 22:07:15,902 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.attention_skip_rate, batch_count=18856.666666666668, ans=0.0 2024-03-15 22:07:34,029 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.balancer_ff3.min_abs, batch_count=18890.0, ans=0.2 2024-03-15 22:07:40,384 INFO [train_char.py:689] (0/2) Epoch 12, batch 100, loss[loss=0.1105, simple_loss=0.1712, pruned_loss=0.02493, over 24123.00 frames. ], tot_loss[loss=0.09556, simple_loss=0.1458, pruned_loss=0.02265, over 1909203.66 frames. 
], batch size: 223, lr: 2.21e-02, grad_scale: 16.0 2024-03-15 22:07:44,231 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer2.min_abs, batch_count=18923.333333333332, ans=0.48384999999999995 2024-03-15 22:08:02,512 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff2_skip_rate, batch_count=18956.666666666668, ans=0.0067485507246376805 2024-03-15 22:08:19,134 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.176e+01 1.103e+02 1.316e+02 1.837e+02 3.115e+02, threshold=2.631e+02, percent-clipped=6.0 2024-03-15 22:08:37,914 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=19056.666666666668, ans=0.10943333333333333 2024-03-15 22:08:39,038 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass_mid.scale_min, batch_count=19056.666666666668, ans=0.23301666666666676 2024-03-15 22:08:39,109 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=19056.666666666668, ans=0.125 2024-03-15 22:08:49,986 INFO [train_char.py:689] (0/2) Epoch 12, batch 150, loss[loss=0.09687, simple_loss=0.1484, pruned_loss=0.02266, over 24245.00 frames. ], tot_loss[loss=0.09424, simple_loss=0.1442, pruned_loss=0.02216, over 2549387.84 frames. ], batch size: 328, lr: 2.21e-02, grad_scale: 16.0 2024-03-15 22:09:40,761 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=19190.0, ans=0.125 2024-03-15 22:09:52,399 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=19223.333333333332, ans=0.10776666666666668 2024-03-15 22:09:58,937 INFO [train_char.py:689] (0/2) Epoch 12, batch 200, loss[loss=0.07431, simple_loss=0.1019, pruned_loss=0.02334, over 22676.00 frames. ], tot_loss[loss=0.094, simple_loss=0.144, pruned_loss=0.02199, over 3052337.00 frames. ], batch size: 483, lr: 2.21e-02, grad_scale: 16.0 2024-03-15 22:10:05,846 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer2.prob, batch_count=19256.666666666668, ans=0.125 2024-03-15 22:10:11,975 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=19290.0, ans=0.1071 2024-03-15 22:10:37,613 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.260e+01 1.092e+02 1.355e+02 1.626e+02 2.369e+02, threshold=2.710e+02, percent-clipped=0.0 2024-03-15 22:11:03,384 INFO [train_char.py:689] (0/2) Epoch 12, batch 250, loss[loss=0.1026, simple_loss=0.1622, pruned_loss=0.02153, over 24330.00 frames. ], tot_loss[loss=0.09367, simple_loss=0.1438, pruned_loss=0.02175, over 3442343.18 frames. ], batch size: 180, lr: 2.20e-02, grad_scale: 16.0 2024-03-15 22:11:04,049 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.self_attn2.whiten, num_groups=1, num_channels=512, metric=15.25 vs. limit=22.067500000000003 2024-03-15 22:11:04,874 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_skip_rate, batch_count=19423.333333333332, ans=0.0 2024-03-15 22:11:31,092 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.self_attn2.whiten, num_groups=1, num_channels=512, metric=15.25 vs. 
limit=22.0925 2024-03-15 22:11:44,014 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=256, metric=8.07 vs. limit=14.80875 2024-03-15 22:11:56,427 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=19523.333333333332, ans=0.10476666666666667 2024-03-15 22:12:05,752 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.2.self_attn_weights, loss-sum=0.000e+00 2024-03-15 22:12:12,983 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=19590.0, ans=0.1041 2024-03-15 22:12:14,000 INFO [train_char.py:689] (0/2) Epoch 12, batch 300, loss[loss=0.07799, simple_loss=0.1265, pruned_loss=0.01476, over 24375.00 frames. ], tot_loss[loss=0.09386, simple_loss=0.1443, pruned_loss=0.0217, over 3750336.38 frames. ], batch size: 129, lr: 2.20e-02, grad_scale: 16.0 2024-03-15 22:12:45,080 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=384, metric=9.94 vs. limit=14.87125 2024-03-15 22:12:51,973 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.720e+01 1.174e+02 1.515e+02 1.923e+02 3.546e+02, threshold=3.029e+02, percent-clipped=9.0 2024-03-15 22:12:52,730 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=512, metric=17.85 vs. limit=22.2675 2024-03-15 22:12:55,223 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=7.54 vs. limit=9.9225 2024-03-15 22:12:58,499 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer2.prob, batch_count=19690.0, ans=0.125 2024-03-15 22:13:06,275 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer_na.min_abs, batch_count=19723.333333333332, ans=0.02 2024-03-15 22:13:17,451 INFO [train_char.py:689] (0/2) Epoch 12, batch 350, loss[loss=0.07302, simple_loss=0.114, pruned_loss=0.016, over 23760.00 frames. ], tot_loss[loss=0.09339, simple_loss=0.1437, pruned_loss=0.02153, over 3988522.53 frames. ], batch size: 439, lr: 2.19e-02, grad_scale: 16.0 2024-03-15 22:14:24,196 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.hidden_balancer.prob, batch_count=19923.333333333332, ans=0.125 2024-03-15 22:14:25,149 INFO [train_char.py:689] (0/2) Epoch 12, batch 400, loss[loss=0.09029, simple_loss=0.1463, pruned_loss=0.01716, over 24132.00 frames. ], tot_loss[loss=0.09386, simple_loss=0.1448, pruned_loss=0.02145, over 4178462.04 frames. ], batch size: 188, lr: 2.19e-02, grad_scale: 32.0 2024-03-15 22:14:25,379 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.balancer_ff2.min_abs, batch_count=19923.333333333332, ans=0.1 2024-03-15 22:15:00,975 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=8.87 vs. 
limit=14.99625 2024-03-15 22:15:06,268 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.057e+01 1.097e+02 1.316e+02 1.719e+02 2.765e+02, threshold=2.632e+02, percent-clipped=0.0 2024-03-15 22:15:21,520 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=20056.666666666668, ans=0.125 2024-03-15 22:15:30,397 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.bypass_mid.scale_min, batch_count=20090.0, ans=0.2 2024-03-15 22:15:31,375 INFO [train_char.py:689] (0/2) Epoch 12, batch 450, loss[loss=0.1018, simple_loss=0.1547, pruned_loss=0.02444, over 24136.00 frames. ], tot_loss[loss=0.09508, simple_loss=0.1467, pruned_loss=0.02172, over 4324088.29 frames. ], batch size: 188, lr: 2.19e-02, grad_scale: 16.0 2024-03-15 22:15:33,519 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=5.41 vs. limit=12.0 2024-03-15 22:15:35,894 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=384, metric=18.53 vs. limit=22.5 2024-03-15 22:15:41,558 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module1.balancer1.prob, batch_count=20090.0, ans=0.125 2024-03-15 22:16:00,259 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.balancer2.prob, batch_count=20156.666666666668, ans=0.125 2024-03-15 22:16:16,579 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.whiten, num_groups=1, num_channels=192, metric=5.00 vs. limit=12.0 2024-03-15 22:16:34,944 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=512, metric=4.10 vs. limit=15.0 2024-03-15 22:16:35,034 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.12 vs. limit=6.0 2024-03-15 22:16:35,448 INFO [train_char.py:689] (0/2) Epoch 12, batch 500, loss[loss=0.1095, simple_loss=0.166, pruned_loss=0.02653, over 24107.00 frames. ], tot_loss[loss=0.09701, simple_loss=0.1496, pruned_loss=0.02221, over 4436965.90 frames. ], batch size: 251, lr: 2.18e-02, grad_scale: 16.0 2024-03-15 22:16:44,193 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-12.pt 2024-03-15 22:17:36,115 INFO [train_char.py:689] (0/2) Epoch 13, batch 0, loss[loss=0.1104, simple_loss=0.165, pruned_loss=0.02788, over 24025.00 frames. ], tot_loss[loss=0.1104, simple_loss=0.165, pruned_loss=0.02788, over 24025.00 frames. ], batch size: 199, lr: 2.10e-02, grad_scale: 32.0 2024-03-15 22:17:36,116 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 22:17:45,843 INFO [zipformer.py:1858] (0/2) name=encoder.encoders.2.encoder.layers.2.self_attn_weights, attn_weights_entropy = tensor([2.9684, 3.0176, 3.6072, 3.4185], device='cuda:0') 2024-03-15 22:17:54,658 INFO [train_char.py:721] (0/2) Epoch 13, validation: loss=0.0715, simple_loss=0.1264, pruned_loss=0.008321, over 657665.00 frames. 
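Reading the WARNING [optim.py:487] lines: the five numbers are the min/25%/50%/75%/max of recent per-batch gradient norms, and in each warning in this log the threshold equals Clipping_scale (2.0) times the logged median, e.g. 2.0 * 1.316e+02 = 2.632e+02 in the warning just above. Below is an illustrative sketch of that bookkeeping under assumed window mechanics; it is not the actual optim.py code, only a reconstruction of what the logged fields imply.

import torch

def grad_norm_report(recent_norms: torch.Tensor, clipping_scale: float = 2.0) -> None:
    """recent_norms: 1-D float tensor of per-batch grad norms (window size assumed)."""
    probs = torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0], device=recent_norms.device)
    qs = torch.quantile(recent_norms, probs)
    threshold = clipping_scale * qs[2]  # scale * median, matching the logged thresholds
    pct_clipped = 100.0 * (recent_norms > threshold).float().mean()
    print("grad-norm quartiles "
          + " ".join(f"{q:.3e}" for q in qs.tolist())
          + f", threshold={threshold:.3e}, percent-clipped={pct_clipped:.1f}")

percent-clipped then reports the fraction of batches in the window whose norm exceeded that threshold, which is why it shows the small single-digit percentages seen throughout this section.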
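Similarly, the ScheduledFloat dropout entries are deterministic in batch_count: every feed_forward1.out_proj.dropout_p value in this section fits a linear ramp from 0.3 at batch 0 down to a floor of 0.1 at batch 20000, e.g. batch_count=19523.33 gives 0.3 - 1e-5 * 19523.33 = 0.10477, matching the 0.10476666666666667 logged above, while later entries sit at the 0.1 floor. The sketch below is a fit to these logged values, not the ScheduledFloat source, and the function name is illustrative.

def ff1_dropout_p(batch_count: float) -> float:
    # Linear warm-down fitted to the logged values: 0.3 at batch 0,
    # decaying by 1e-5 per batch, clamped at 0.1 from batch 20000 on.
    return max(0.1, 0.3 - 1e-5 * batch_count)

# Spot-checks against (batch_count, ans) pairs from this log
assert abs(ff1_dropout_p(14686.666666666666) - 0.15313333333333334) < 1e-6
assert abs(ff1_dropout_p(19523.333333333332) - 0.10476666666666667) < 1e-6
assert ff1_dropout_p(21146.666666666668) == 0.1  # at the floor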
2024-03-15 22:17:54,658 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 22:17:56,368 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward2.hidden_balancer.prob, batch_count=20280.0, ans=0.125 2024-03-15 22:17:56,378 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer1.min_positive, batch_count=20280.0, ans=0.025 2024-03-15 22:17:59,024 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=20280.0, ans=0.0 2024-03-15 22:18:00,460 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=20280.0, ans=0.125 2024-03-15 22:18:11,150 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.ff3_skip_rate, batch_count=20313.333333333332, ans=0.006453623188405797 2024-03-15 22:18:27,335 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.294e+01 1.085e+02 1.296e+02 1.642e+02 3.122e+02, threshold=2.592e+02, percent-clipped=1.0 2024-03-15 22:18:29,028 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=20346.666666666668, ans=0.125 2024-03-15 22:18:30,273 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=20346.666666666668, ans=0.125 2024-03-15 22:18:39,635 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=20380.0, ans=0.125 2024-03-15 22:18:45,728 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer2.min_positive, batch_count=20380.0, ans=0.05 2024-03-15 22:19:04,179 INFO [train_char.py:689] (0/2) Epoch 13, batch 50, loss[loss=0.1077, simple_loss=0.1671, pruned_loss=0.02417, over 24291.00 frames. ], tot_loss[loss=0.09078, simple_loss=0.1427, pruned_loss=0.01945, over 1086986.65 frames. ], batch size: 267, lr: 2.09e-02, grad_scale: 32.0 2024-03-15 22:19:05,074 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=512, metric=6.17 vs. limit=15.0 2024-03-15 22:19:33,388 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.conv_module2.whiten, num_groups=1, num_channels=384, metric=11.14 vs. limit=15.0 2024-03-15 22:19:34,418 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.out_combiner.scale_min, batch_count=20513.333333333332, ans=0.2 2024-03-15 22:19:42,114 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.skip_rate, batch_count=20513.333333333332, ans=0.07 2024-03-15 22:19:55,209 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.ff3_skip_rate, batch_count=20546.666666666668, ans=0.006402898550724638 2024-03-15 22:20:15,531 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=20613.333333333332, ans=0.125 2024-03-15 22:20:16,621 INFO [train_char.py:689] (0/2) Epoch 13, batch 100, loss[loss=0.08688, simple_loss=0.1358, pruned_loss=0.01898, over 24455.00 frames. ], tot_loss[loss=0.08895, simple_loss=0.14, pruned_loss=0.01897, over 1906620.79 frames. 
], batch size: 165, lr: 2.09e-02, grad_scale: 32.0 2024-03-15 22:20:27,203 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=20613.333333333332, ans=0.125 2024-03-15 22:20:29,544 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.attention_skip_rate, batch_count=20646.666666666668, ans=0.0 2024-03-15 22:20:37,262 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=192, metric=7.62 vs. limit=15.0 2024-03-15 22:20:41,557 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff2_skip_rate, batch_count=20680.0, ans=0.006373913043478261 2024-03-15 22:20:46,601 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass_mid.scale_min, batch_count=20680.0, ans=0.2 2024-03-15 22:20:49,099 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.262e+01 1.069e+02 1.330e+02 1.855e+02 3.421e+02, threshold=2.660e+02, percent-clipped=6.0 2024-03-15 22:21:02,600 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_skip_rate, batch_count=20713.333333333332, ans=0.0 2024-03-15 22:21:09,318 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer1.prob, batch_count=20746.666666666668, ans=0.125 2024-03-15 22:21:10,488 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer2.prob, batch_count=20746.666666666668, ans=0.125 2024-03-15 22:21:14,247 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.balancer2.prob, batch_count=20746.666666666668, ans=0.125 2024-03-15 22:21:19,245 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward2.hidden_balancer.prob, batch_count=20746.666666666668, ans=0.125 2024-03-15 22:21:20,555 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.balancer1.prob, batch_count=20780.0, ans=0.125 2024-03-15 22:21:21,407 INFO [train_char.py:689] (0/2) Epoch 13, batch 150, loss[loss=0.06738, simple_loss=0.1119, pruned_loss=0.01142, over 24373.00 frames. ], tot_loss[loss=0.0897, simple_loss=0.1413, pruned_loss=0.01904, over 2552851.34 frames. ], batch size: 129, lr: 2.09e-02, grad_scale: 16.0 2024-03-15 22:21:24,209 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer2.prob, batch_count=20780.0, ans=0.125 2024-03-15 22:21:32,486 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module1.balancer2.min_abs, batch_count=20780.0, ans=0.5 2024-03-15 22:22:30,659 INFO [train_char.py:689] (0/2) Epoch 13, batch 200, loss[loss=0.06862, simple_loss=0.119, pruned_loss=0.009109, over 24251.00 frames. ], tot_loss[loss=0.09053, simple_loss=0.1423, pruned_loss=0.01938, over 3056875.70 frames. 
], batch size: 134, lr: 2.08e-02, grad_scale: 16.0 2024-03-15 22:22:50,802 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.ff3_skip_rate, batch_count=20980.0, ans=0.006308695652173913 2024-03-15 22:22:56,057 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.prob, batch_count=21013.333333333332, ans=0.125 2024-03-15 22:23:06,059 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.679e+01 1.109e+02 1.340e+02 1.722e+02 4.051e+02, threshold=2.681e+02, percent-clipped=3.0 2024-03-15 22:23:10,581 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=12.43 vs. limit=15.0 2024-03-15 22:23:37,394 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=5.49 vs. limit=10.0 2024-03-15 22:23:38,089 INFO [train_char.py:689] (0/2) Epoch 13, batch 250, loss[loss=0.08998, simple_loss=0.1434, pruned_loss=0.01828, over 20685.00 frames. ], tot_loss[loss=0.09011, simple_loss=0.1416, pruned_loss=0.01933, over 3445003.39 frames. ], batch size: 82, lr: 2.08e-02, grad_scale: 16.0 2024-03-15 22:23:53,597 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward2.hidden_balancer.prob, batch_count=21146.666666666668, ans=0.125 2024-03-15 22:24:01,267 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=21146.666666666668, ans=0.1 2024-03-15 22:24:06,303 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=21180.0, ans=0.0 2024-03-15 22:24:10,561 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.24 vs. limit=6.0 2024-03-15 22:24:32,572 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.hidden_balancer.prob, batch_count=21246.666666666668, ans=0.125 2024-03-15 22:24:36,270 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward2.hidden_balancer.prob, batch_count=21246.666666666668, ans=0.125 2024-03-15 22:24:36,330 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.balancer.max_positive, batch_count=21246.666666666668, ans=0.95 2024-03-15 22:24:36,626 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=12.43 vs. limit=22.5 2024-03-15 22:24:43,848 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer2.prob, batch_count=21280.0, ans=0.125 2024-03-15 22:24:44,878 INFO [train_char.py:689] (0/2) Epoch 13, batch 300, loss[loss=0.1059, simple_loss=0.167, pruned_loss=0.02245, over 24157.00 frames. ], tot_loss[loss=0.09014, simple_loss=0.1414, pruned_loss=0.01943, over 3751289.52 frames. ], batch size: 251, lr: 2.08e-02, grad_scale: 16.0 2024-03-15 22:24:45,132 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module2.balancer1.prob, batch_count=21280.0, ans=0.125 2024-03-15 22:24:48,001 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=256, metric=3.77 vs. 
limit=15.0 2024-03-15 22:25:07,270 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=21313.333333333332, ans=0.0 2024-03-15 22:25:16,549 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.76 vs. limit=6.0 2024-03-15 22:25:19,874 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.169e+01 1.103e+02 1.320e+02 1.684e+02 3.519e+02, threshold=2.641e+02, percent-clipped=4.0 2024-03-15 22:25:24,988 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=21380.0, ans=0.1 2024-03-15 22:25:51,543 INFO [train_char.py:689] (0/2) Epoch 13, batch 350, loss[loss=0.1128, simple_loss=0.1681, pruned_loss=0.02873, over 24205.00 frames. ], tot_loss[loss=0.09164, simple_loss=0.1435, pruned_loss=0.01988, over 3985740.29 frames. ], batch size: 212, lr: 2.07e-02, grad_scale: 16.0 2024-03-15 22:25:59,520 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer2.prob, batch_count=21446.666666666668, ans=0.125 2024-03-15 22:26:06,046 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.balancer2.prob, batch_count=21480.0, ans=0.125 2024-03-15 22:26:28,320 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=21513.333333333332, ans=0.125 2024-03-15 22:26:58,594 INFO [train_char.py:689] (0/2) Epoch 13, batch 400, loss[loss=0.07907, simple_loss=0.1299, pruned_loss=0.01411, over 24248.00 frames. ], tot_loss[loss=0.0916, simple_loss=0.1436, pruned_loss=0.01981, over 4176469.92 frames. ], batch size: 140, lr: 2.07e-02, grad_scale: 32.0 2024-03-15 22:27:25,148 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff2_skip_rate, batch_count=21680.0, ans=0.006156521739130435 2024-03-15 22:27:29,148 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=21680.0, ans=0.125 2024-03-15 22:27:29,985 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.006e+01 1.047e+02 1.201e+02 1.533e+02 2.664e+02, threshold=2.402e+02, percent-clipped=3.0 2024-03-15 22:27:31,500 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.balancer1.prob, batch_count=21680.0, ans=0.125 2024-03-15 22:27:37,694 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=21713.333333333332, ans=0.006149275362318841 2024-03-15 22:28:01,304 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_skip_rate, batch_count=21780.0, ans=0.0 2024-03-15 22:28:02,322 INFO [train_char.py:689] (0/2) Epoch 13, batch 450, loss[loss=0.111, simple_loss=0.1664, pruned_loss=0.02778, over 24106.00 frames. ], tot_loss[loss=0.09242, simple_loss=0.1446, pruned_loss=0.02012, over 4323860.72 frames. 
], batch size: 223, lr: 2.07e-02, grad_scale: 32.0 2024-03-15 22:28:07,409 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=21780.0, ans=0.125 2024-03-15 22:28:08,673 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.balancer.max_positive, batch_count=21780.0, ans=0.95 2024-03-15 22:28:18,947 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=21813.333333333332, ans=0.1 2024-03-15 22:28:18,970 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.ff2_skip_rate, batch_count=21813.333333333332, ans=0.006127536231884058 2024-03-15 22:28:40,470 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=7.54 vs. limit=10.0 2024-03-15 22:28:45,033 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer2.prob, batch_count=21880.0, ans=0.125 2024-03-15 22:28:47,557 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.out_combiner.scale_min, batch_count=21880.0, ans=0.2 2024-03-15 22:28:58,111 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.skip_rate, batch_count=21913.333333333332, ans=0.04949747468305833 2024-03-15 22:29:08,185 INFO [train_char.py:689] (0/2) Epoch 13, batch 500, loss[loss=0.09375, simple_loss=0.1519, pruned_loss=0.0178, over 24109.00 frames. ], tot_loss[loss=0.09365, simple_loss=0.1468, pruned_loss=0.02023, over 4435796.07 frames. ], batch size: 199, lr: 2.06e-02, grad_scale: 32.0 2024-03-15 22:29:17,046 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-13.pt 2024-03-15 22:30:09,786 INFO [train_char.py:689] (0/2) Epoch 14, batch 0, loss[loss=0.09307, simple_loss=0.1423, pruned_loss=0.02192, over 24282.00 frames. ], tot_loss[loss=0.09307, simple_loss=0.1423, pruned_loss=0.02192, over 24282.00 frames. ], batch size: 296, lr: 1.99e-02, grad_scale: 32.0 2024-03-15 22:30:09,787 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 22:30:22,278 INFO [zipformer.py:1858] (0/2) name=encoder.encoders.3.encoder.layers.2.self_attn_weights, attn_weights_entropy = tensor([2.8947, 2.4515, 2.7743, 2.4000, 2.6389, 1.6878, 2.8035, 2.3088], device='cuda:0') 2024-03-15 22:30:23,551 INFO [train_char.py:721] (0/2) Epoch 14, validation: loss=0.07044, simple_loss=0.125, pruned_loss=0.007917, over 657665.00 frames. 2024-03-15 22:30:23,552 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 22:30:36,552 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer1.prob, batch_count=22003.333333333332, ans=0.125 2024-03-15 22:30:40,984 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=12.86 vs. 
limit=15.0 2024-03-15 22:30:48,074 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.362e+01 1.053e+02 1.320e+02 1.724e+02 3.044e+02, threshold=2.640e+02, percent-clipped=7.0 2024-03-15 22:31:15,139 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_skip_rate, batch_count=22070.0, ans=0.0 2024-03-15 22:31:31,841 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=16.03 vs. limit=15.0 2024-03-15 22:31:34,111 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.bypass_mid.scale_min, batch_count=22103.333333333332, ans=0.2 2024-03-15 22:31:39,352 INFO [train_char.py:689] (0/2) Epoch 14, batch 50, loss[loss=0.09322, simple_loss=0.1496, pruned_loss=0.0184, over 24344.00 frames. ], tot_loss[loss=0.09079, simple_loss=0.1435, pruned_loss=0.01906, over 1083015.52 frames. ], batch size: 180, lr: 1.99e-02, grad_scale: 32.0 2024-03-15 22:32:01,196 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff2_skip_rate, batch_count=22170.0, ans=0.00605 2024-03-15 22:32:02,679 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=22170.0, ans=0.125 2024-03-15 22:32:03,974 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=22170.0, ans=0.1 2024-03-15 22:32:09,067 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_skip_rate, batch_count=22203.333333333332, ans=0.0 2024-03-15 22:32:17,345 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=10.16 vs. limit=15.0 2024-03-15 22:32:27,625 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=7.60 vs. limit=15.0 2024-03-15 22:32:45,111 INFO [train_char.py:689] (0/2) Epoch 14, batch 100, loss[loss=0.08304, simple_loss=0.1298, pruned_loss=0.01816, over 24161.00 frames. ], tot_loss[loss=0.08909, simple_loss=0.141, pruned_loss=0.01861, over 1915348.57 frames. ], batch size: 362, lr: 1.98e-02, grad_scale: 32.0 2024-03-15 22:33:06,070 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.balancer2.prob, batch_count=22336.666666666668, ans=0.125 2024-03-15 22:33:08,438 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.509e+01 1.179e+02 1.448e+02 1.825e+02 3.837e+02, threshold=2.896e+02, percent-clipped=4.0 2024-03-15 22:33:47,017 INFO [scaling.py:1023] (0/2) Whitening: name=encoder_embed.convnext.out_whiten, num_groups=1, num_channels=128, metric=4.28 vs. limit=5.0 2024-03-15 22:33:48,652 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_skip_rate, batch_count=22436.666666666668, ans=0.0 2024-03-15 22:33:58,166 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_skip_rate, batch_count=22470.0, ans=0.0 2024-03-15 22:33:59,115 INFO [train_char.py:689] (0/2) Epoch 14, batch 150, loss[loss=0.1022, simple_loss=0.1576, pruned_loss=0.02338, over 24137.00 frames. ], tot_loss[loss=0.08818, simple_loss=0.1392, pruned_loss=0.0186, over 2551879.86 frames. 
], batch size: 188, lr: 1.98e-02, grad_scale: 16.0 2024-03-15 22:34:03,388 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=22470.0, ans=0.0 2024-03-15 22:34:04,607 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer2.prob, batch_count=22470.0, ans=0.125 2024-03-15 22:34:24,055 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module1.balancer2.min_positive, batch_count=22536.666666666668, ans=0.05 2024-03-15 22:34:42,576 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=17.01 vs. limit=15.0 2024-03-15 22:34:49,964 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.attention_skip_rate, batch_count=22603.333333333332, ans=0.0 2024-03-15 22:34:51,362 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.balancer1.prob, batch_count=22603.333333333332, ans=0.125 2024-03-15 22:34:56,199 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff2_skip_rate, batch_count=22603.333333333332, ans=0.005955797101449276 2024-03-15 22:35:01,599 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.ff2_skip_rate, batch_count=22603.333333333332, ans=0.005955797101449276 2024-03-15 22:35:03,878 INFO [train_char.py:689] (0/2) Epoch 14, batch 200, loss[loss=0.08846, simple_loss=0.136, pruned_loss=0.02048, over 24289.00 frames. ], tot_loss[loss=0.08807, simple_loss=0.1395, pruned_loss=0.01833, over 3059219.74 frames. ], batch size: 328, lr: 1.98e-02, grad_scale: 16.0 2024-03-15 22:35:10,402 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer1.prob, batch_count=22636.666666666668, ans=0.125 2024-03-15 22:35:22,625 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=14.78 vs. limit=15.0 2024-03-15 22:35:23,509 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.ff3_skip_rate, batch_count=22670.0, ans=0.005941304347826087 2024-03-15 22:35:28,458 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.371e+01 1.071e+02 1.309e+02 1.701e+02 3.687e+02, threshold=2.619e+02, percent-clipped=6.0 2024-03-15 22:35:40,368 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-15 22:35:51,720 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer2.min_abs, batch_count=22736.666666666668, ans=0.5 2024-03-15 22:36:06,119 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=22770.0, ans=0.125 2024-03-15 22:36:08,284 INFO [train_char.py:689] (0/2) Epoch 14, batch 250, loss[loss=0.08814, simple_loss=0.1394, pruned_loss=0.01846, over 24258.00 frames. ], tot_loss[loss=0.08779, simple_loss=0.1392, pruned_loss=0.01816, over 3448820.12 frames. ], batch size: 116, lr: 1.97e-02, grad_scale: 16.0 2024-03-15 22:36:21,571 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=12.63 vs. 
limit=15.0 2024-03-15 22:36:24,058 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=5.62 vs. limit=10.0 2024-03-15 22:36:51,593 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=256, metric=3.30 vs. limit=15.0 2024-03-15 22:36:56,103 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.dropout.p, batch_count=22903.333333333332, ans=0.1 2024-03-15 22:37:06,706 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=22936.666666666668, ans=0.1 2024-03-15 22:37:19,302 INFO [train_char.py:689] (0/2) Epoch 14, batch 300, loss[loss=0.1099, simple_loss=0.1725, pruned_loss=0.02366, over 24151.00 frames. ], tot_loss[loss=0.08799, simple_loss=0.1396, pruned_loss=0.0182, over 3755880.49 frames. ], batch size: 223, lr: 1.97e-02, grad_scale: 16.0 2024-03-15 22:37:43,067 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.042e+01 1.084e+02 1.249e+02 1.783e+02 3.247e+02, threshold=2.497e+02, percent-clipped=4.0 2024-03-15 22:37:58,813 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.balancer1.prob, batch_count=23070.0, ans=0.125 2024-03-15 22:38:10,526 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.bypass.scale_min, batch_count=23103.333333333332, ans=0.2 2024-03-15 22:38:15,443 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module2.balancer2.prob, batch_count=23103.333333333332, ans=0.125 2024-03-15 22:38:17,337 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=192, metric=9.04 vs. limit=15.0 2024-03-15 22:38:22,485 INFO [train_char.py:689] (0/2) Epoch 14, batch 350, loss[loss=0.08172, simple_loss=0.1355, pruned_loss=0.01398, over 24373.00 frames. ], tot_loss[loss=0.08827, simple_loss=0.14, pruned_loss=0.01827, over 3990861.61 frames. ], batch size: 158, lr: 1.97e-02, grad_scale: 16.0 2024-03-15 22:38:23,883 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.conv.5.prob, batch_count=23136.666666666668, ans=0.125 2024-03-15 22:39:03,590 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=23236.666666666668, ans=0.125 2024-03-15 22:39:18,018 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.nonlin_attention.whiten2, num_groups=1, num_channels=512, metric=6.05 vs. limit=15.0 2024-03-15 22:39:26,540 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=23270.0, ans=0.0 2024-03-15 22:39:29,947 INFO [train_char.py:689] (0/2) Epoch 14, batch 400, loss[loss=0.08353, simple_loss=0.1331, pruned_loss=0.01699, over 24205.00 frames. ], tot_loss[loss=0.08876, simple_loss=0.141, pruned_loss=0.01827, over 4174686.71 frames. 
], batch size: 328, lr: 1.96e-02, grad_scale: 32.0 2024-03-15 22:39:44,340 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-15 22:39:48,097 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer2.prob, batch_count=23336.666666666668, ans=0.125 2024-03-15 22:39:54,046 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.676e+01 1.024e+02 1.200e+02 1.525e+02 3.223e+02, threshold=2.400e+02, percent-clipped=2.0 2024-03-15 22:39:56,997 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass.skip_rate, batch_count=23370.0, ans=0.04949747468305833 2024-03-15 22:40:12,370 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=23403.333333333332, ans=0.125 2024-03-15 22:40:13,602 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=23403.333333333332, ans=0.125 2024-03-15 22:40:31,784 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=23436.666666666668, ans=0.1 2024-03-15 22:40:33,095 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=23436.666666666668, ans=0.1 2024-03-15 22:40:35,461 INFO [train_char.py:689] (0/2) Epoch 14, batch 450, loss[loss=0.0918, simple_loss=0.1465, pruned_loss=0.01853, over 24346.00 frames. ], tot_loss[loss=0.08965, simple_loss=0.1422, pruned_loss=0.01853, over 4321284.20 frames. ], batch size: 172, lr: 1.96e-02, grad_scale: 32.0 2024-03-15 22:40:36,137 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=384, metric=8.81 vs. limit=15.0 2024-03-15 22:40:42,942 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.skip_rate, batch_count=23470.0, ans=0.035 2024-03-15 22:40:58,163 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.whiten, num_groups=1, num_channels=384, metric=7.04 vs. limit=12.0 2024-03-15 22:41:06,109 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.balancer2.prob, batch_count=23536.666666666668, ans=0.125 2024-03-15 22:41:37,191 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=23603.333333333332, ans=0.125 2024-03-15 22:41:39,555 INFO [train_char.py:689] (0/2) Epoch 14, batch 500, loss[loss=0.1021, simple_loss=0.1569, pruned_loss=0.02364, over 24048.00 frames. ], tot_loss[loss=0.09098, simple_loss=0.1444, pruned_loss=0.01878, over 4434538.55 frames. ], batch size: 199, lr: 1.96e-02, grad_scale: 16.0 2024-03-15 22:41:44,353 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=7.53 vs. limit=10.0 2024-03-15 22:41:48,960 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-14.pt 2024-03-15 22:42:40,779 INFO [train_char.py:689] (0/2) Epoch 15, batch 0, loss[loss=0.07605, simple_loss=0.1223, pruned_loss=0.01491, over 24022.00 frames. ], tot_loss[loss=0.07605, simple_loss=0.1223, pruned_loss=0.01491, over 24022.00 frames. 
], batch size: 381, lr: 1.89e-02, grad_scale: 32.0 2024-03-15 22:42:40,780 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 22:42:47,202 INFO [zipformer.py:1858] (0/2) name=encoder.encoders.3.encoder.layers.2.self_attn_weights, attn_weights_entropy = tensor([3.1940, 2.6538, 2.8227, 2.5884, 2.7742, 2.0239, 2.9138, 2.6246], device='cuda:0') 2024-03-15 22:42:54,296 INFO [train_char.py:721] (0/2) Epoch 15, validation: loss=0.06886, simple_loss=0.1225, pruned_loss=0.007616, over 657665.00 frames. 2024-03-15 22:42:54,296 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 22:42:55,935 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=23660.0, ans=0.1 2024-03-15 22:43:06,638 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=23693.333333333332, ans=0.125 2024-03-15 22:43:11,608 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.091e+01 9.908e+01 1.153e+02 1.389e+02 3.573e+02, threshold=2.307e+02, percent-clipped=1.0 2024-03-15 22:43:27,100 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=23726.666666666668, ans=0.1 2024-03-15 22:43:35,175 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.conv.8.prob, batch_count=23726.666666666668, ans=0.125 2024-03-15 22:43:48,636 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=23760.0, ans=0.0 2024-03-15 22:43:55,374 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=23793.333333333332, ans=0.125 2024-03-15 22:44:01,956 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer2.prob, batch_count=23793.333333333332, ans=0.125 2024-03-15 22:44:03,345 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward2.hidden_balancer.prob, batch_count=23793.333333333332, ans=0.125 2024-03-15 22:44:05,856 INFO [train_char.py:689] (0/2) Epoch 15, batch 50, loss[loss=0.06929, simple_loss=0.1183, pruned_loss=0.01016, over 24198.00 frames. ], tot_loss[loss=0.08627, simple_loss=0.1378, pruned_loss=0.01735, over 1082353.72 frames. ], batch size: 122, lr: 1.89e-02, grad_scale: 32.0 2024-03-15 22:44:07,627 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=23826.666666666668, ans=0.1 2024-03-15 22:45:05,043 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.balancer1.prob, batch_count=23960.0, ans=0.125 2024-03-15 22:45:17,462 INFO [train_char.py:689] (0/2) Epoch 15, batch 100, loss[loss=0.0962, simple_loss=0.1478, pruned_loss=0.02231, over 24364.00 frames. ], tot_loss[loss=0.08515, simple_loss=0.1362, pruned_loss=0.01703, over 1908663.33 frames. 
], batch size: 297, lr: 1.88e-02, grad_scale: 32.0 2024-03-15 22:45:19,554 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.scale_min, batch_count=23993.333333333332, ans=0.2 2024-03-15 22:45:28,696 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff2_skip_rate, batch_count=23993.333333333332, ans=0.005653623188405797 2024-03-15 22:45:35,171 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_skip_rate, batch_count=24026.666666666668, ans=0.0 2024-03-15 22:45:39,810 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.932e+01 1.121e+02 1.256e+02 1.613e+02 3.118e+02, threshold=2.512e+02, percent-clipped=5.0 2024-03-15 22:46:27,880 INFO [train_char.py:689] (0/2) Epoch 15, batch 150, loss[loss=0.0901, simple_loss=0.1458, pruned_loss=0.01718, over 24098.00 frames. ], tot_loss[loss=0.0851, simple_loss=0.1367, pruned_loss=0.01674, over 2548351.29 frames. ], batch size: 188, lr: 1.88e-02, grad_scale: 32.0 2024-03-15 22:46:41,115 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=24193.333333333332, ans=0.1 2024-03-15 22:47:06,867 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer1.prob, batch_count=24260.0, ans=0.125 2024-03-15 22:47:09,459 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=24260.0, ans=0.0 2024-03-15 22:47:14,741 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer1.prob, batch_count=24260.0, ans=0.125 2024-03-15 22:47:32,161 INFO [train_char.py:689] (0/2) Epoch 15, batch 200, loss[loss=0.08654, simple_loss=0.1405, pruned_loss=0.01629, over 24379.00 frames. ], tot_loss[loss=0.08522, simple_loss=0.1364, pruned_loss=0.017, over 3043397.56 frames. ], batch size: 172, lr: 1.88e-02, grad_scale: 32.0 2024-03-15 22:47:33,015 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=16.31 vs. limit=15.0 2024-03-15 22:47:36,161 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=24326.666666666668, ans=0.125 2024-03-15 22:47:52,615 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.835e+01 1.004e+02 1.271e+02 1.513e+02 3.157e+02, threshold=2.541e+02, percent-clipped=4.0 2024-03-15 22:47:57,990 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.dropout.p, batch_count=24360.0, ans=0.1 2024-03-15 22:48:09,711 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.bypass.scale_min, batch_count=24393.333333333332, ans=0.2 2024-03-15 22:48:09,725 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.balancer.max_positive, batch_count=24393.333333333332, ans=0.95 2024-03-15 22:48:43,204 INFO [train_char.py:689] (0/2) Epoch 15, batch 250, loss[loss=0.1063, simple_loss=0.1712, pruned_loss=0.02073, over 24124.00 frames. ], tot_loss[loss=0.08619, simple_loss=0.1379, pruned_loss=0.01723, over 3437759.78 frames. 
], batch size: 223, lr: 1.87e-02, grad_scale: 16.0 2024-03-15 22:48:52,201 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer1.min_positive, batch_count=24493.333333333332, ans=0.025 2024-03-15 22:48:52,271 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-15 22:49:10,200 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_skip_rate, batch_count=24560.0, ans=0.0 2024-03-15 22:49:22,569 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff2_skip_rate, batch_count=24593.333333333332, ans=0.005523188405797102 2024-03-15 22:49:38,005 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=24626.666666666668, ans=0.1 2024-03-15 22:49:44,277 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=24626.666666666668, ans=0.125 2024-03-15 22:49:46,527 INFO [train_char.py:689] (0/2) Epoch 15, batch 300, loss[loss=0.08456, simple_loss=0.1358, pruned_loss=0.01666, over 24227.00 frames. ], tot_loss[loss=0.08539, simple_loss=0.1367, pruned_loss=0.01705, over 3742455.19 frames. ], batch size: 328, lr: 1.87e-02, grad_scale: 16.0 2024-03-15 22:49:53,282 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=24660.0, ans=0.0 2024-03-15 22:50:02,651 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=14.68 vs. limit=15.0 2024-03-15 22:50:04,342 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.889e+01 1.067e+02 1.348e+02 1.795e+02 4.377e+02, threshold=2.695e+02, percent-clipped=7.0 2024-03-15 22:50:04,615 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer2.prob, batch_count=24693.333333333332, ans=0.125 2024-03-15 22:50:45,075 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=13.74 vs. limit=15.0 2024-03-15 22:50:46,444 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=8.38 vs. limit=10.0 2024-03-15 22:50:53,642 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.skip_rate, batch_count=24826.666666666668, ans=0.035 2024-03-15 22:50:54,779 INFO [train_char.py:689] (0/2) Epoch 15, batch 350, loss[loss=0.1044, simple_loss=0.1686, pruned_loss=0.02014, over 24161.00 frames. ], tot_loss[loss=0.08676, simple_loss=0.1391, pruned_loss=0.01723, over 3986568.16 frames. 
], batch size: 251, lr: 1.87e-02, grad_scale: 16.0 2024-03-15 22:51:07,304 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-15 22:51:13,506 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.attention_skip_rate, batch_count=24860.0, ans=0.0 2024-03-15 22:51:26,349 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass.skip_rate, batch_count=24893.333333333332, ans=0.07 2024-03-15 22:51:38,987 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.balancer1.prob, batch_count=24926.666666666668, ans=0.125 2024-03-15 22:51:46,480 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer2.prob, batch_count=24960.0, ans=0.125 2024-03-15 22:51:59,426 INFO [train_char.py:689] (0/2) Epoch 15, batch 400, loss[loss=0.09832, simple_loss=0.157, pruned_loss=0.0198, over 24217.00 frames. ], tot_loss[loss=0.08756, simple_loss=0.1402, pruned_loss=0.01744, over 4176609.37 frames. ], batch size: 212, lr: 1.86e-02, grad_scale: 32.0 2024-03-15 22:52:14,658 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.out_combiner.scale_min, batch_count=25026.666666666668, ans=0.2 2024-03-15 22:52:16,898 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer1.prob, batch_count=25026.666666666668, ans=0.125 2024-03-15 22:52:17,768 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.899e+01 1.133e+02 1.427e+02 1.789e+02 2.973e+02, threshold=2.854e+02, percent-clipped=2.0 2024-03-15 22:52:29,086 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=25060.0, ans=0.1 2024-03-15 22:52:50,013 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=25126.666666666668, ans=0.125 2024-03-15 22:53:02,342 INFO [train_char.py:689] (0/2) Epoch 15, batch 450, loss[loss=0.09809, simple_loss=0.1579, pruned_loss=0.01915, over 24123.00 frames. ], tot_loss[loss=0.08864, simple_loss=0.1421, pruned_loss=0.01757, over 4323966.10 frames. ], batch size: 223, lr: 1.86e-02, grad_scale: 32.0 2024-03-15 22:53:20,278 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=25193.333333333332, ans=0.1 2024-03-15 22:53:36,458 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_skip_rate, batch_count=25226.666666666668, ans=0.0 2024-03-15 22:53:56,286 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_skip_rate, batch_count=25293.333333333332, ans=0.0 2024-03-15 22:54:07,941 INFO [train_char.py:689] (0/2) Epoch 15, batch 500, loss[loss=0.09105, simple_loss=0.1477, pruned_loss=0.01721, over 24151.00 frames. ], tot_loss[loss=0.08939, simple_loss=0.1436, pruned_loss=0.01759, over 4437182.93 frames. ], batch size: 188, lr: 1.86e-02, grad_scale: 32.0 2024-03-15 22:54:17,681 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-15.pt 2024-03-15 22:55:10,865 INFO [train_char.py:689] (0/2) Epoch 16, batch 0, loss[loss=0.08158, simple_loss=0.1352, pruned_loss=0.01396, over 24228.00 frames. ], tot_loss[loss=0.08158, simple_loss=0.1352, pruned_loss=0.01396, over 24228.00 frames. ], batch size: 328, lr: 1.80e-02, grad_scale: 32.0
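A note on how the loss fields above fit together: well past the 2000-step warmup, the logged losses in this stretch agree with simple_loss_scale * simple_loss + pruned_loss, where simple_loss_scale = 0.5 comes from the training configuration logged at startup (for the Epoch 16, batch 0 entry just above: 0.5 * 0.1352 + 0.01396 = 0.08156, matching loss=0.08158 up to rounding). The tot_loss frame counts also grow sublinearly (about 1.08e6 at batch 50, flattening near 4.4e6 by batch 500) rather than cumulatively, which is consistent with an exponentially decayed running sum using decay 1 - 1/reset_interval (reset_interval = 200 in the configuration), restarted at each epoch boundary; that is also why loss and tot_loss coincide at batch 0. A minimal sketch of that bookkeeping, with illustrative helper names rather than icefall's own:

def combined_loss(simple_loss: float, pruned_loss: float,
                  simple_loss_scale: float = 0.5) -> float:
    # Post-warmup weighting satisfied by the logged numbers.
    return simple_loss_scale * simple_loss + pruned_loss

def decayed_sum(values, reset_interval: int = 200) -> float:
    # Exponentially decayed running sum implied by the tot_loss frame counts.
    tot = 0.0
    for v in values:
        tot = tot * (1.0 - 1.0 / reset_interval) + v
    return tot

# Epoch 16, batch 0 above: 0.5 * 0.1352 + 0.01396 ~= 0.08158 (rounding).
assert abs(combined_loss(0.1352, 0.01396) - 0.08158) < 5e-5

# ~24.2k frames/batch over 500 batches flattens near 4.4e6 as logged,
# where a plain cumulative sum would reach ~1.2e7.
print(decayed_sum([24200.0] * 500))  # ~4.45e6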
2024-03-15 22:55:10,866 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 22:55:25,228 INFO [train_char.py:721] (0/2) Epoch 16, validation: loss=0.06804, simple_loss=0.1217, pruned_loss=0.007178, over 657665.00 frames. 2024-03-15 22:55:25,228 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 22:55:25,398 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.dropout.p, batch_count=25350.0, ans=0.1 2024-03-15 22:55:34,736 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.602e+01 9.561e+01 1.102e+02 1.272e+02 3.359e+02, threshold=2.205e+02, percent-clipped=1.0 2024-03-15 22:55:53,895 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=25416.666666666668, ans=0.1 2024-03-15 22:56:16,632 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass.scale_min, batch_count=25450.0, ans=0.2 2024-03-15 22:56:35,624 INFO [train_char.py:689] (0/2) Epoch 16, batch 50, loss[loss=0.08545, simple_loss=0.1332, pruned_loss=0.01887, over 24175.00 frames. ], tot_loss[loss=0.08344, simple_loss=0.1342, pruned_loss=0.01632, over 1082724.41 frames. ], batch size: 344, lr: 1.79e-02, grad_scale: 32.0 2024-03-15 22:56:40,197 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.out_combiner.scale_min, batch_count=25516.666666666668, ans=0.2 2024-03-15 22:56:42,829 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.balancer2.prob, batch_count=25516.666666666668, ans=0.125 2024-03-15 22:57:19,752 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module2.balancer1.prob, batch_count=25616.666666666668, ans=0.125 2024-03-15 22:57:46,066 INFO [train_char.py:689] (0/2) Epoch 16, batch 100, loss[loss=0.07021, simple_loss=0.1196, pruned_loss=0.01042, over 24258.00 frames. ], tot_loss[loss=0.08394, simple_loss=0.1354, pruned_loss=0.01626, over 1912494.48 frames. ], batch size: 140, lr: 1.79e-02, grad_scale: 32.0 2024-03-15 22:57:55,177 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.570e+01 1.059e+02 1.253e+02 1.673e+02 2.974e+02, threshold=2.506e+02, percent-clipped=12.0 2024-03-15 22:58:12,099 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-15 22:58:12,182 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=25750.0, ans=0.125 2024-03-15 22:58:14,909 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.attention_skip_rate, batch_count=25750.0, ans=0.0 2024-03-15 22:58:27,844 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer1.prob, batch_count=25783.333333333332, ans=0.125 2024-03-15 22:58:50,643 INFO [train_char.py:689] (0/2) Epoch 16, batch 150, loss[loss=0.06753, simple_loss=0.1141, pruned_loss=0.01047, over 24314.00 frames. ], tot_loss[loss=0.08329, simple_loss=0.1347, pruned_loss=0.01592, over 2556322.29 frames.
], batch size: 140, lr: 1.79e-02, grad_scale: 32.0 2024-03-15 22:59:07,301 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer2.prob, batch_count=25883.333333333332, ans=0.125 2024-03-15 22:59:13,734 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=25883.333333333332, ans=0.005242753623188406 2024-03-15 22:59:24,249 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=25916.666666666668, ans=0.1 2024-03-15 22:59:41,754 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=8.57 vs. limit=10.0 2024-03-15 22:59:49,278 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=11.40 vs. limit=15.0 2024-03-15 22:59:49,871 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer1.prob, batch_count=25983.333333333332, ans=0.125 2024-03-15 22:59:59,102 INFO [train_char.py:689] (0/2) Epoch 16, batch 200, loss[loss=0.07613, simple_loss=0.1227, pruned_loss=0.01477, over 24151.00 frames. ], tot_loss[loss=0.08273, simple_loss=0.1341, pruned_loss=0.01566, over 3059693.28 frames. ], batch size: 362, lr: 1.79e-02, grad_scale: 32.0 2024-03-15 22:59:59,817 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.self_attn_weights.whiten_keys.whitening_limit, batch_count=26016.666666666668, ans=6.0 2024-03-15 23:00:07,868 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=13.05 vs. limit=15.0 2024-03-15 23:00:08,159 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.187e+01 1.070e+02 1.349e+02 1.623e+02 3.033e+02, threshold=2.699e+02, percent-clipped=5.0 2024-03-15 23:00:43,894 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass_mid.scale_min, batch_count=26116.666666666668, ans=0.2 2024-03-15 23:00:52,854 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer1.prob, batch_count=26150.0, ans=0.125 2024-03-15 23:01:06,625 INFO [train_char.py:689] (0/2) Epoch 16, batch 250, loss[loss=0.09757, simple_loss=0.155, pruned_loss=0.02006, over 24099.00 frames. ], tot_loss[loss=0.08376, simple_loss=0.1358, pruned_loss=0.01585, over 3450139.57 frames. 
], batch size: 199, lr: 1.78e-02, grad_scale: 16.0 2024-03-15 23:01:28,541 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_skip_rate, batch_count=26216.666666666668, ans=0.0 2024-03-15 23:01:36,260 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer1.prob, batch_count=26250.0, ans=0.125 2024-03-15 23:01:46,019 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer1.prob, batch_count=26283.333333333332, ans=0.125 2024-03-15 23:01:55,202 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer2.prob, batch_count=26283.333333333332, ans=0.125 2024-03-15 23:02:06,774 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.ff3_skip_rate, batch_count=26316.666666666668, ans=0.005148550724637681 2024-03-15 23:02:10,431 INFO [train_char.py:689] (0/2) Epoch 16, batch 300, loss[loss=0.08701, simple_loss=0.1417, pruned_loss=0.01619, over 24344.00 frames. ], tot_loss[loss=0.08482, simple_loss=0.137, pruned_loss=0.0163, over 3749663.15 frames. ], batch size: 180, lr: 1.78e-02, grad_scale: 8.0 2024-03-15 23:02:11,893 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff2_skip_rate, batch_count=26350.0, ans=0.0051413043478260876 2024-03-15 23:02:16,452 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.attention_skip_rate, batch_count=26350.0, ans=0.0 2024-03-15 23:02:20,316 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=26350.0, ans=0.1 2024-03-15 23:02:20,817 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=384, metric=19.68 vs. limit=22.5 2024-03-15 23:02:24,985 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.614e+01 9.908e+01 1.237e+02 1.661e+02 2.872e+02, threshold=2.474e+02, percent-clipped=1.0 2024-03-15 23:03:17,403 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=26516.666666666668, ans=0.0 2024-03-15 23:03:18,472 INFO [train_char.py:689] (0/2) Epoch 16, batch 350, loss[loss=0.08615, simple_loss=0.1377, pruned_loss=0.01728, over 24222.00 frames. ], tot_loss[loss=0.08505, simple_loss=0.1377, pruned_loss=0.01621, over 3991783.73 frames. ], batch size: 311, lr: 1.78e-02, grad_scale: 4.0 2024-03-15 23:03:23,694 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=26516.666666666668, ans=0.125 2024-03-15 23:04:15,811 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/checkpoint-8000.pt 2024-03-15 23:04:28,232 INFO [train_char.py:689] (0/2) Epoch 16, batch 400, loss[loss=0.07887, simple_loss=0.1291, pruned_loss=0.01432, over 24296.00 frames. ], tot_loss[loss=0.08536, simple_loss=0.1384, pruned_loss=0.01614, over 4180093.97 frames. 
], batch size: 129, lr: 1.77e-02, grad_scale: 8.0 2024-03-15 23:04:40,594 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.718e+01 9.599e+01 1.139e+02 1.487e+02 2.327e+02, threshold=2.279e+02, percent-clipped=0.0 2024-03-15 23:04:43,357 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module2.balancer2.min_abs, batch_count=26716.666666666668, ans=0.5 2024-03-15 23:05:14,284 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer1.max_abs, batch_count=26783.333333333332, ans=10.0 2024-03-15 23:05:17,837 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass.skip_rate, batch_count=26816.666666666668, ans=0.04949747468305833 2024-03-15 23:05:31,365 INFO [train_char.py:689] (0/2) Epoch 16, batch 450, loss[loss=0.08175, simple_loss=0.1294, pruned_loss=0.01705, over 24246.00 frames. ], tot_loss[loss=0.0862, simple_loss=0.1398, pruned_loss=0.01629, over 4326940.56 frames. ], batch size: 328, lr: 1.77e-02, grad_scale: 8.0 2024-03-15 23:05:39,035 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.3.self_attn_weights, loss-sum=0.000e+00 2024-03-15 23:05:40,221 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward2.hidden_balancer.prob, batch_count=26850.0, ans=0.125 2024-03-15 23:05:45,164 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.bypass_mid.scale_min, batch_count=26883.333333333332, ans=0.2 2024-03-15 23:05:51,354 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward2.hidden_balancer.prob, batch_count=26883.333333333332, ans=0.125 2024-03-15 23:06:26,228 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=26983.333333333332, ans=0.125 2024-03-15 23:06:36,223 INFO [train_char.py:689] (0/2) Epoch 16, batch 500, loss[loss=0.09501, simple_loss=0.15, pruned_loss=0.01998, over 24107.00 frames. ], tot_loss[loss=0.08719, simple_loss=0.1414, pruned_loss=0.01647, over 4438979.35 frames. ], batch size: 199, lr: 1.77e-02, grad_scale: 8.0 2024-03-15 23:06:40,661 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.whiten.whitening_limit, batch_count=27016.666666666668, ans=15.0 2024-03-15 23:06:45,126 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-16.pt 2024-03-15 23:07:37,100 INFO [train_char.py:689] (0/2) Epoch 17, batch 0, loss[loss=0.09617, simple_loss=0.1514, pruned_loss=0.02046, over 24044.00 frames. ], tot_loss[loss=0.09617, simple_loss=0.1514, pruned_loss=0.02046, over 24044.00 frames. ], batch size: 250, lr: 1.71e-02, grad_scale: 16.0 2024-03-15 23:07:37,101 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 23:07:50,915 INFO [train_char.py:721] (0/2) Epoch 17, validation: loss=0.06728, simple_loss=0.1211, pruned_loss=0.006709, over 657665.00 frames. 
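The recurring WARNING lines from optim.py:487 summarize the optimizer's gradient-norm clipping. Each prints five quantiles of recently observed gradient norms (by appearance: min, 25%, median, 75%, max), and throughout this section the reported threshold is Clipping_scale times the median, e.g. 2.0 * 1.142e+02 = 2.284e+02 for the threshold=2.283e+02 entry just below; percent-clipped is the share of recent batches whose gradient norm exceeded that ceiling. A small check of this relationship against the logged numbers (the relationship is inferred from the log itself, not copied from icefall's optim.py):

# Quantiles from the WARNING entry just below this note.
quartiles = [7.930e+01, 9.696e+01, 1.142e+02, 1.527e+02, 2.129e+02]
clipping_scale = 2.0
threshold = clipping_scale * quartiles[2]   # clip at 2x the median grad norm
assert abs(threshold - 2.283e+02) < 0.5     # matches threshold=2.283e+02

Relatedly, the grad_scale field tracks mixed-precision loss scaling (use_fp16 is enabled in the configuration): it halves when scaled gradients overflow and is doubled back periodically, visible above as the 16 -> 8 -> 4 steps during epoch 16 and the recovery to 32 by the start of epoch 18. This matches torch.cuda.amp.GradScaler-style dynamic scaling, though the exact growth policy used here is an assumption.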
2024-03-15 23:07:50,916 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 23:07:56,238 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.930e+01 9.696e+01 1.142e+02 1.527e+02 2.129e+02, threshold=2.283e+02, percent-clipped=0.0 2024-03-15 23:08:03,331 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-15 23:08:19,400 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=27106.666666666668, ans=0.125 2024-03-15 23:08:20,747 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.hidden_balancer.prob, batch_count=27106.666666666668, ans=0.125 2024-03-15 23:08:46,772 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer2.prob, batch_count=27173.333333333332, ans=0.125 2024-03-15 23:09:06,000 INFO [train_char.py:689] (0/2) Epoch 17, batch 50, loss[loss=0.08269, simple_loss=0.1362, pruned_loss=0.01462, over 24204.00 frames. ], tot_loss[loss=0.08157, simple_loss=0.1347, pruned_loss=0.01423, over 1087774.35 frames. ], batch size: 311, lr: 1.71e-02, grad_scale: 8.0 2024-03-15 23:09:23,949 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.ff3_skip_rate, batch_count=27240.0, ans=0.004947826086956522 2024-03-15 23:09:44,753 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.attention_skip_rate, batch_count=27306.666666666668, ans=0.0 2024-03-15 23:09:57,096 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.balancer1.prob, batch_count=27306.666666666668, ans=0.125 2024-03-15 23:09:57,344 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.self_attn2.whiten, num_groups=1, num_channels=384, metric=13.52 vs. limit=22.5 2024-03-15 23:10:08,415 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=27340.0, ans=0.125 2024-03-15 23:10:12,028 INFO [train_char.py:689] (0/2) Epoch 17, batch 100, loss[loss=0.09499, simple_loss=0.153, pruned_loss=0.0185, over 24140.00 frames. ], tot_loss[loss=0.08263, simple_loss=0.1356, pruned_loss=0.01485, over 1913840.09 frames. ], batch size: 279, lr: 1.71e-02, grad_scale: 8.0 2024-03-15 23:10:16,895 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.353e+01 1.077e+02 1.241e+02 1.620e+02 2.640e+02, threshold=2.483e+02, percent-clipped=3.0 2024-03-15 23:10:30,155 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.balancer1.prob, batch_count=27406.666666666668, ans=0.125 2024-03-15 23:10:31,409 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass.scale_min, batch_count=27406.666666666668, ans=0.2 2024-03-15 23:10:43,666 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=14.66 vs. limit=15.0 2024-03-15 23:11:20,601 INFO [train_char.py:689] (0/2) Epoch 17, batch 150, loss[loss=0.06767, simple_loss=0.1151, pruned_loss=0.01014, over 24208.00 frames. ], tot_loss[loss=0.08277, simple_loss=0.1353, pruned_loss=0.0151, over 2554202.91 frames. 
], batch size: 122, lr: 1.71e-02, grad_scale: 8.0 2024-03-15 23:11:33,550 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.ff2_skip_rate, batch_count=27573.333333333332, ans=0.00487536231884058 2024-03-15 23:11:39,595 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=192, metric=11.17 vs. limit=15.0 2024-03-15 23:11:58,626 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=27606.666666666668, ans=0.1 2024-03-15 23:12:15,593 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=27673.333333333332, ans=0.125 2024-03-15 23:12:19,509 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=27673.333333333332, ans=0.0 2024-03-15 23:12:28,476 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.balancer.max_positive, batch_count=27706.666666666668, ans=0.95 2024-03-15 23:12:29,527 INFO [train_char.py:689] (0/2) Epoch 17, batch 200, loss[loss=0.09027, simple_loss=0.1437, pruned_loss=0.01843, over 24217.00 frames. ], tot_loss[loss=0.0818, simple_loss=0.1339, pruned_loss=0.01487, over 3052930.94 frames. ], batch size: 311, lr: 1.70e-02, grad_scale: 8.0 2024-03-15 23:12:34,635 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass_mid.scale_min, batch_count=27706.666666666668, ans=0.2 2024-03-15 23:12:35,568 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.192e+01 1.028e+02 1.235e+02 1.712e+02 3.164e+02, threshold=2.470e+02, percent-clipped=4.0 2024-03-15 23:12:39,724 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer1.prob, batch_count=27706.666666666668, ans=0.125 2024-03-15 23:12:49,923 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer1.max_abs, batch_count=27740.0, ans=10.0 2024-03-15 23:12:52,576 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=27740.0, ans=0.125 2024-03-15 23:12:58,784 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module1.balancer1.prob, batch_count=27773.333333333332, ans=0.125 2024-03-15 23:13:33,589 INFO [train_char.py:689] (0/2) Epoch 17, batch 250, loss[loss=0.0816, simple_loss=0.1362, pruned_loss=0.01348, over 24328.00 frames. ], tot_loss[loss=0.08218, simple_loss=0.1349, pruned_loss=0.01471, over 3446561.20 frames. ], batch size: 297, lr: 1.70e-02, grad_scale: 8.0 2024-03-15 23:13:36,407 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.prob, batch_count=27873.333333333332, ans=0.125 2024-03-15 23:13:44,043 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module1.balancer2.prob, batch_count=27873.333333333332, ans=0.125 2024-03-15 23:14:15,996 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward3.hidden_balancer.prob, batch_count=27973.333333333332, ans=0.125 2024-03-15 23:14:43,989 INFO [train_char.py:689] (0/2) Epoch 17, batch 300, loss[loss=0.1069, simple_loss=0.1664, pruned_loss=0.02369, over 24079.00 frames. 
], tot_loss[loss=0.08253, simple_loss=0.1353, pruned_loss=0.01486, over 3748710.03 frames. ], batch size: 223, lr: 1.70e-02, grad_scale: 8.0 2024-03-15 23:14:50,412 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.479e+01 9.549e+01 1.144e+02 1.473e+02 3.101e+02, threshold=2.289e+02, percent-clipped=3.0 2024-03-15 23:14:58,443 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=28073.333333333332, ans=0.125 2024-03-15 23:15:05,866 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.whiten, num_groups=1, num_channels=192, metric=4.31 vs. limit=12.0 2024-03-15 23:15:25,151 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.27 vs. limit=6.0 2024-03-15 23:15:29,577 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer1.prob, batch_count=28140.0, ans=0.125 2024-03-15 23:15:48,567 INFO [train_char.py:689] (0/2) Epoch 17, batch 350, loss[loss=0.09812, simple_loss=0.1525, pruned_loss=0.02185, over 24136.00 frames. ], tot_loss[loss=0.08256, simple_loss=0.1355, pruned_loss=0.0148, over 3986978.80 frames. ], batch size: 279, lr: 1.69e-02, grad_scale: 8.0 2024-03-15 23:16:09,752 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer1.prob, batch_count=28240.0, ans=0.125 2024-03-15 23:16:19,365 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=16.15 vs. limit=15.0 2024-03-15 23:16:27,620 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=28306.666666666668, ans=0.0 2024-03-15 23:16:55,523 INFO [train_char.py:689] (0/2) Epoch 17, batch 400, loss[loss=0.08809, simple_loss=0.1498, pruned_loss=0.01317, over 24100.00 frames. ], tot_loss[loss=0.08304, simple_loss=0.1366, pruned_loss=0.01476, over 4176494.45 frames. 
], batch size: 199, lr: 1.69e-02, grad_scale: 16.0 2024-03-15 23:17:01,979 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.098e+01 9.793e+01 1.243e+02 1.636e+02 2.677e+02, threshold=2.485e+02, percent-clipped=8.0 2024-03-15 23:17:06,090 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.bypass_mid.scale_min, batch_count=28373.333333333332, ans=0.2 2024-03-15 23:17:06,113 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff3_skip_rate, batch_count=28373.333333333332, ans=0.00470144927536232 2024-03-15 23:17:16,280 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=28406.666666666668, ans=0.125 2024-03-15 23:17:17,420 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=28406.666666666668, ans=0.1 2024-03-15 23:17:22,565 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer2.prob, batch_count=28440.0, ans=0.125 2024-03-15 23:17:38,768 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.nonlin_attention.balancer.prob, batch_count=28473.333333333332, ans=0.125 2024-03-15 23:17:53,936 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.self_attn2.whiten, num_groups=1, num_channels=512, metric=13.43 vs. limit=22.5 2024-03-15 23:18:00,841 INFO [train_char.py:689] (0/2) Epoch 17, batch 450, loss[loss=0.09319, simple_loss=0.1501, pruned_loss=0.01816, over 24147.00 frames. ], tot_loss[loss=0.08379, simple_loss=0.1375, pruned_loss=0.01504, over 4322397.63 frames. ], batch size: 188, lr: 1.69e-02, grad_scale: 16.0 2024-03-15 23:18:02,412 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=28540.0, ans=0.125 2024-03-15 23:18:14,428 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=28573.333333333332, ans=0.0 2024-03-15 23:18:24,746 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=13.12 vs. limit=15.0 2024-03-15 23:18:47,100 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-15 23:19:04,812 INFO [train_char.py:689] (0/2) Epoch 17, batch 500, loss[loss=0.1, simple_loss=0.1567, pruned_loss=0.02169, over 24183.00 frames. ], tot_loss[loss=0.08518, simple_loss=0.1398, pruned_loss=0.01527, over 4435774.22 frames. ], batch size: 224, lr: 1.69e-02, grad_scale: 16.0 2024-03-15 23:19:11,111 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.875e+01 9.948e+01 1.161e+02 1.379e+02 4.396e+02, threshold=2.322e+02, percent-clipped=2.0 2024-03-15 23:19:11,328 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.bypass.scale_min, batch_count=28706.666666666668, ans=0.2 2024-03-15 23:19:13,727 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-17.pt 2024-03-15 23:20:08,756 INFO [train_char.py:689] (0/2) Epoch 18, batch 0, loss[loss=0.0814, simple_loss=0.1367, pruned_loss=0.01305, over 24144.00 frames. ], tot_loss[loss=0.0814, simple_loss=0.1367, pruned_loss=0.01305, over 24144.00 frames. 
], batch size: 188, lr: 1.64e-02, grad_scale: 32.0 2024-03-15 23:20:08,757 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 23:20:22,466 INFO [train_char.py:721] (0/2) Epoch 18, validation: loss=0.06657, simple_loss=0.1202, pruned_loss=0.006462, over 657665.00 frames. 2024-03-15 23:20:22,467 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 23:20:42,855 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer1.prob, batch_count=28763.333333333332, ans=0.125 2024-03-15 23:20:58,006 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=192, metric=11.12 vs. limit=15.0 2024-03-15 23:21:06,493 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass.scale_min, batch_count=28796.666666666668, ans=0.2 2024-03-15 23:21:19,973 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_skip_rate, batch_count=28830.0, ans=0.0 2024-03-15 23:21:35,152 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=384, metric=18.54 vs. limit=22.5 2024-03-15 23:21:37,048 INFO [train_char.py:689] (0/2) Epoch 18, batch 50, loss[loss=0.0888, simple_loss=0.1409, pruned_loss=0.01834, over 24412.00 frames. ], tot_loss[loss=0.07843, simple_loss=0.1292, pruned_loss=0.01381, over 1080474.75 frames. ], batch size: 165, lr: 1.63e-02, grad_scale: 32.0 2024-03-15 23:21:37,468 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_skip_rate, batch_count=28896.666666666668, ans=0.0 2024-03-15 23:21:40,729 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.conv_module2.whiten, num_groups=1, num_channels=384, metric=5.83 vs. limit=15.0 2024-03-15 23:21:43,431 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.29 vs. limit=6.0 2024-03-15 23:21:52,799 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.53 vs. limit=6.0 2024-03-15 23:22:09,631 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=11.67 vs. limit=15.0 2024-03-15 23:22:44,853 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=29030.0, ans=0.125 2024-03-15 23:22:46,257 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-15 23:22:47,203 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.162e+01 1.018e+02 1.249e+02 1.654e+02 4.023e+02, threshold=2.497e+02, percent-clipped=11.0 2024-03-15 23:22:48,602 INFO [train_char.py:689] (0/2) Epoch 18, batch 100, loss[loss=0.07538, simple_loss=0.1175, pruned_loss=0.01662, over 24013.00 frames. ], tot_loss[loss=0.07909, simple_loss=0.1296, pruned_loss=0.01428, over 1901868.69 frames. ], batch size: 381, lr: 1.63e-02, grad_scale: 16.0 2024-03-15 23:23:08,703 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=8.53 vs. 
limit=15.0
2024-03-15 23:23:26,663 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff2_skip_rate, batch_count=29130.0, ans=0.00453695652173913
2024-03-15 23:23:29,391 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module1.balancer2.min_abs, batch_count=29130.0, ans=0.5
2024-03-15 23:23:57,574 INFO [train_char.py:689] (0/2) Epoch 18, batch 150, loss[loss=0.06789, simple_loss=0.1124, pruned_loss=0.0117, over 23987.00 frames. ], tot_loss[loss=0.07894, simple_loss=0.1301, pruned_loss=0.0139, over 2541169.13 frames. ], batch size: 408, lr: 1.63e-02, grad_scale: 16.0
2024-03-15 23:24:04,187 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=29230.0, ans=0.0
2024-03-15 23:24:21,944 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=29296.666666666668, ans=0.0
2024-03-15 23:24:24,308 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=29296.666666666668, ans=0.125
2024-03-15 23:24:36,208 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff2_skip_rate, batch_count=29330.0, ans=0.004493478260869565
2024-03-15 23:24:55,626 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.balancer.max_positive, batch_count=29363.333333333332, ans=0.95
2024-03-15 23:25:00,566 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.073e+01 1.016e+02 1.234e+02 1.645e+02 2.905e+02, threshold=2.468e+02, percent-clipped=3.0
2024-03-15 23:25:01,211 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.conv_module2.whiten, num_groups=1, num_channels=384, metric=8.11 vs. limit=15.0
2024-03-15 23:25:01,772 INFO [train_char.py:689] (0/2) Epoch 18, batch 200, loss[loss=0.07936, simple_loss=0.1306, pruned_loss=0.01404, over 24367.00 frames. ], tot_loss[loss=0.0792, simple_loss=0.1308, pruned_loss=0.0138, over 3042301.66 frames. ], batch size: 152, lr: 1.63e-02, grad_scale: 16.0
2024-03-15 23:25:02,581 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=256, metric=19.44 vs. limit=22.5
2024-03-15 23:26:03,099 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=29530.0, ans=0.0
2024-03-15 23:26:04,277 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=29530.0, ans=0.1
2024-03-15 23:26:04,323 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff2_skip_rate, batch_count=29530.0, ans=0.00445
2024-03-15 23:26:09,124 INFO [train_char.py:689] (0/2) Epoch 18, batch 250, loss[loss=0.07933, simple_loss=0.1341, pruned_loss=0.01227, over 24390.00 frames. ], tot_loss[loss=0.07898, simple_loss=0.1311, pruned_loss=0.01345, over 3436978.17 frames. ], batch size: 180, lr: 1.62e-02, grad_scale: 8.0
2024-03-15 23:26:39,424 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module1.balancer1.prob, batch_count=29630.0, ans=0.125
2024-03-15 23:26:43,071 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=29630.0, ans=0.0
2024-03-15 23:26:45,726 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.ff3_skip_rate, batch_count=29630.0, ans=0.004428260869565217
2024-03-15 23:26:56,960 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.min_positive, batch_count=29663.333333333332, ans=0.05
2024-03-15 23:27:04,444 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module2.balancer2.min_positive, batch_count=29696.666666666668, ans=0.05
2024-03-15 23:27:15,688 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.799e+01 9.910e+01 1.200e+02 1.468e+02 2.725e+02, threshold=2.399e+02, percent-clipped=2.0
2024-03-15 23:27:15,716 INFO [train_char.py:689] (0/2) Epoch 18, batch 300, loss[loss=0.07474, simple_loss=0.1232, pruned_loss=0.01315, over 24142.00 frames. ], tot_loss[loss=0.07899, simple_loss=0.1312, pruned_loss=0.01339, over 3747828.89 frames. ], batch size: 344, lr: 1.62e-02, grad_scale: 8.0
2024-03-15 23:27:15,956 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.balancer_ff2.min_abs, batch_count=29730.0, ans=0.1
2024-03-15 23:27:28,847 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.balancer1.prob, batch_count=29763.333333333332, ans=0.125
2024-03-15 23:27:39,693 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=192, metric=7.55 vs. limit=15.0
2024-03-15 23:27:54,285 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer2.prob, batch_count=29796.666666666668, ans=0.125
2024-03-15 23:27:59,804 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=384, metric=8.61 vs. limit=15.0
2024-03-15 23:28:21,607 INFO [train_char.py:689] (0/2) Epoch 18, batch 350, loss[loss=0.08173, simple_loss=0.1305, pruned_loss=0.01645, over 24232.00 frames. ], tot_loss[loss=0.08054, simple_loss=0.1335, pruned_loss=0.01377, over 3989673.48 frames. ], batch size: 328, lr: 1.62e-02, grad_scale: 8.0
2024-03-15 23:28:24,358 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff3_skip_rate, batch_count=29896.666666666668, ans=0.004370289855072464
2024-03-15 23:28:42,859 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module2.balancer2.prob, batch_count=29930.0, ans=0.125
2024-03-15 23:28:50,890 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.self_attn2.whiten, num_groups=1, num_channels=512, metric=13.89 vs. limit=22.5
2024-03-15 23:29:22,141 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.balancer.max_positive, batch_count=30030.0, ans=0.95
2024-03-15 23:29:26,849 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.638e+01 9.578e+01 1.055e+02 1.386e+02 2.453e+02, threshold=2.109e+02, percent-clipped=1.0
2024-03-15 23:29:26,883 INFO [train_char.py:689] (0/2) Epoch 18, batch 400, loss[loss=0.0871, simple_loss=0.1416, pruned_loss=0.0163, over 24131.00 frames. ], tot_loss[loss=0.08061, simple_loss=0.1339, pruned_loss=0.01364, over 4178791.83 frames. ], batch size: 279, lr: 1.62e-02, grad_scale: 16.0
2024-03-15 23:30:01,370 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass.scale_min, batch_count=30130.0, ans=0.2
2024-03-15 23:30:10,364 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.self_attn2.whiten, num_groups=1, num_channels=512, metric=12.41 vs. limit=22.5
2024-03-15 23:30:32,011 INFO [train_char.py:689] (0/2) Epoch 18, batch 450, loss[loss=0.09459, simple_loss=0.1552, pruned_loss=0.01701, over 24215.00 frames. ], tot_loss[loss=0.08155, simple_loss=0.1354, pruned_loss=0.01385, over 4326638.07 frames. ], batch size: 212, lr: 1.61e-02, grad_scale: 8.0
2024-03-15 23:30:57,606 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.balancer1.prob, batch_count=30296.666666666668, ans=0.125
2024-03-15 23:31:00,046 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass_mid.scale_min, batch_count=30296.666666666668, ans=0.2
2024-03-15 23:31:23,592 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.skip_rate, batch_count=30363.333333333332, ans=0.04949747468305833
2024-03-15 23:31:37,500 INFO [train_char.py:689] (0/2) Epoch 18, batch 500, loss[loss=0.09812, simple_loss=0.1601, pruned_loss=0.01808, over 24089.00 frames. ], tot_loss[loss=0.0828, simple_loss=0.1374, pruned_loss=0.01412, over 4439749.64 frames. ], batch size: 236, lr: 1.61e-02, grad_scale: 8.0
2024-03-15 23:31:38,761 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.402e+01 8.967e+01 1.019e+02 1.195e+02 2.230e+02, threshold=2.038e+02, percent-clipped=1.0
2024-03-15 23:31:43,875 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass.scale_min, batch_count=30396.666666666668, ans=0.2
2024-03-15 23:31:46,743 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-18.pt
2024-03-15 23:32:38,189 INFO [train_char.py:689] (0/2) Epoch 19, batch 0, loss[loss=0.0735, simple_loss=0.1174, pruned_loss=0.01479, over 24194.00 frames. ], tot_loss[loss=0.0735, simple_loss=0.1174, pruned_loss=0.01479, over 24194.00 frames. ], batch size: 344, lr: 1.57e-02, grad_scale: 16.0
2024-03-15 23:32:38,190 INFO [train_char.py:712] (0/2) Computing validation loss
2024-03-15 23:32:45,938 INFO [zipformer.py:1858] (0/2) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.5426, 5.6387, 5.5385, 5.2981], device='cuda:0')
2024-03-15 23:32:51,651 INFO [train_char.py:721] (0/2) Epoch 19, validation: loss=0.06498, simple_loss=0.1179, pruned_loss=0.006013, over 657665.00 frames.
2024-03-15 23:32:51,651 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 23:32:53,284 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module2.balancer2.prob, batch_count=30420.0, ans=0.125 2024-03-15 23:32:56,606 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=192, metric=17.34 vs. limit=15.0 2024-03-15 23:33:06,753 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=30453.333333333332, ans=0.125 2024-03-15 23:33:21,179 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.nonlin_attention.balancer.prob, batch_count=30453.333333333332, ans=0.125 2024-03-15 23:33:22,597 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer2.prob, batch_count=30486.666666666668, ans=0.125 2024-03-15 23:33:26,968 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.self_attn2.whiten, num_groups=1, num_channels=384, metric=19.09 vs. limit=22.5 2024-03-15 23:33:46,976 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=30520.0, ans=0.0 2024-03-15 23:33:50,989 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=30553.333333333332, ans=0.1 2024-03-15 23:33:53,815 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-15 23:34:06,497 INFO [train_char.py:689] (0/2) Epoch 19, batch 50, loss[loss=0.09104, simple_loss=0.1515, pruned_loss=0.0153, over 24244.00 frames. ], tot_loss[loss=0.07789, simple_loss=0.1311, pruned_loss=0.01236, over 1086388.53 frames. ], batch size: 212, lr: 1.56e-02, grad_scale: 8.0 2024-03-15 23:34:09,441 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer2.prob, batch_count=30586.666666666668, ans=0.125 2024-03-15 23:34:14,191 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=6.42 vs. limit=15.0 2024-03-15 23:34:26,261 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.whiten, num_groups=1, num_channels=512, metric=9.44 vs. limit=12.0 2024-03-15 23:34:33,650 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_skip_rate, batch_count=30653.333333333332, ans=0.0 2024-03-15 23:34:48,253 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=30686.666666666668, ans=0.0 2024-03-15 23:34:49,550 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=30686.666666666668, ans=0.1 2024-03-15 23:34:52,461 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=10.61 vs. 
limit=15.0 2024-03-15 23:34:54,893 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass_mid.scale_min, batch_count=30686.666666666668, ans=0.2 2024-03-15 23:35:06,093 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.255e+01 1.088e+02 1.285e+02 1.838e+02 3.360e+02, threshold=2.569e+02, percent-clipped=17.0 2024-03-15 23:35:12,621 INFO [train_char.py:689] (0/2) Epoch 19, batch 100, loss[loss=0.0775, simple_loss=0.1285, pruned_loss=0.01324, over 24236.00 frames. ], tot_loss[loss=0.07762, simple_loss=0.1308, pruned_loss=0.0122, over 1918576.59 frames. ], batch size: 328, lr: 1.56e-02, grad_scale: 8.0 2024-03-15 23:35:16,731 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_skip_rate, batch_count=30753.333333333332, ans=0.0 2024-03-15 23:35:18,211 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=15.22 vs. limit=15.0 2024-03-15 23:35:24,925 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=192, metric=9.34 vs. limit=15.0 2024-03-15 23:35:34,821 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.self_attn2.whiten, num_groups=1, num_channels=512, metric=11.81 vs. limit=22.5 2024-03-15 23:35:47,478 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=7.82 vs. limit=10.0 2024-03-15 23:36:06,107 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=30853.333333333332, ans=0.1 2024-03-15 23:36:21,127 INFO [train_char.py:689] (0/2) Epoch 19, batch 150, loss[loss=0.08398, simple_loss=0.1415, pruned_loss=0.01323, over 24153.00 frames. ], tot_loss[loss=0.07714, simple_loss=0.1301, pruned_loss=0.01207, over 2561985.13 frames. ], batch size: 188, lr: 1.56e-02, grad_scale: 8.0 2024-03-15 23:36:31,485 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=30920.0, ans=0.1 2024-03-15 23:36:53,844 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=384, metric=14.98 vs. limit=22.5 2024-03-15 23:37:13,821 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.attention_skip_rate, batch_count=31020.0, ans=0.0 2024-03-15 23:37:15,642 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.whiten, num_groups=1, num_channels=512, metric=7.12 vs. limit=12.0 2024-03-15 23:37:16,434 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module2.balancer2.min_positive, batch_count=31053.333333333332, ans=0.05 2024-03-15 23:37:19,254 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=31053.333333333332, ans=0.0 2024-03-15 23:37:22,778 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.585e+01 9.639e+01 1.177e+02 1.532e+02 2.685e+02, threshold=2.354e+02, percent-clipped=1.0 2024-03-15 23:37:29,446 INFO [train_char.py:689] (0/2) Epoch 19, batch 200, loss[loss=0.07422, simple_loss=0.1246, pruned_loss=0.01192, over 24160.00 frames. ], tot_loss[loss=0.07718, simple_loss=0.1297, pruned_loss=0.01232, over 3057519.75 frames. 
], batch size: 344, lr: 1.56e-02, grad_scale: 8.0 2024-03-15 23:37:30,255 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=384, metric=23.02 vs. limit=22.5 2024-03-15 23:37:33,996 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=15.83 vs. limit=15.0 2024-03-15 23:37:38,753 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.ff3_skip_rate, batch_count=31086.666666666668, ans=0.004111594202898551 2024-03-15 23:37:43,874 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer2.prob, batch_count=31120.0, ans=0.125 2024-03-15 23:37:50,253 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=31120.0, ans=0.1 2024-03-15 23:38:10,751 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module2.balancer1.prob, batch_count=31186.666666666668, ans=0.125 2024-03-15 23:38:10,849 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward3.hidden_balancer.prob, batch_count=31186.666666666668, ans=0.125 2024-03-15 23:38:17,162 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module1.balancer1.min_positive, batch_count=31186.666666666668, ans=0.025 2024-03-15 23:38:20,903 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=31220.0, ans=0.1 2024-03-15 23:38:22,050 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward2.hidden_balancer.prob, batch_count=31220.0, ans=0.125 2024-03-15 23:38:24,768 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.out_combiner.scale_min, batch_count=31220.0, ans=0.2 2024-03-15 23:38:26,999 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=31220.0, ans=0.0 2024-03-15 23:38:33,166 INFO [train_char.py:689] (0/2) Epoch 19, batch 250, loss[loss=0.0891, simple_loss=0.1503, pruned_loss=0.01393, over 24194.00 frames. ], tot_loss[loss=0.07666, simple_loss=0.1289, pruned_loss=0.01222, over 3446012.13 frames. ], batch size: 279, lr: 1.55e-02, grad_scale: 8.0 2024-03-15 23:38:52,308 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module1.balancer2.prob, batch_count=31286.666666666668, ans=0.125 2024-03-15 23:39:02,240 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_skip_rate, batch_count=31320.0, ans=0.0 2024-03-15 23:39:37,994 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.855e+01 9.459e+01 1.088e+02 1.598e+02 2.803e+02, threshold=2.176e+02, percent-clipped=3.0 2024-03-15 23:39:44,202 INFO [train_char.py:689] (0/2) Epoch 19, batch 300, loss[loss=0.08055, simple_loss=0.1365, pruned_loss=0.0123, over 24267.00 frames. ], tot_loss[loss=0.07739, simple_loss=0.1298, pruned_loss=0.01248, over 3749630.09 frames. 
], batch size: 296, lr: 1.55e-02, grad_scale: 8.0 2024-03-15 23:39:45,760 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module1.balancer2.prob, batch_count=31420.0, ans=0.125 2024-03-15 23:39:47,025 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer2.prob, batch_count=31420.0, ans=0.125 2024-03-15 23:39:51,932 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer1.prob, batch_count=31420.0, ans=0.125 2024-03-15 23:40:04,599 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass.scale_min, batch_count=31453.333333333332, ans=0.2 2024-03-15 23:40:05,123 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=384, metric=7.69 vs. limit=15.0 2024-03-15 23:40:12,367 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=31486.666666666668, ans=0.125 2024-03-15 23:40:45,785 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=15.99 vs. limit=15.0 2024-03-15 23:40:49,925 INFO [train_char.py:689] (0/2) Epoch 19, batch 350, loss[loss=0.09063, simple_loss=0.1523, pruned_loss=0.01446, over 24085.00 frames. ], tot_loss[loss=0.07816, simple_loss=0.1315, pruned_loss=0.01243, over 3990902.61 frames. ], batch size: 223, lr: 1.55e-02, grad_scale: 8.0 2024-03-15 23:40:50,830 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=12.93 vs. limit=15.0 2024-03-15 23:40:51,484 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer2.prob, batch_count=31586.666666666668, ans=0.125 2024-03-15 23:40:58,146 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.57 vs. limit=6.0 2024-03-15 23:41:02,106 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=5.65 vs. limit=6.0 2024-03-15 23:41:09,321 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=31620.0, ans=0.1 2024-03-15 23:41:38,131 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.ff3_skip_rate, batch_count=31686.666666666668, ans=0.003981159420289855 2024-03-15 23:41:40,636 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.balancer2.prob, batch_count=31686.666666666668, ans=0.125 2024-03-15 23:41:46,153 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.95 vs. limit=6.0 2024-03-15 23:41:47,481 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=384, metric=20.54 vs. 
limit=22.5 2024-03-15 23:41:49,266 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.223e+01 9.840e+01 1.222e+02 1.502e+02 3.919e+02, threshold=2.443e+02, percent-clipped=4.0 2024-03-15 23:41:55,558 INFO [train_char.py:689] (0/2) Epoch 19, batch 400, loss[loss=0.08749, simple_loss=0.1502, pruned_loss=0.01241, over 24035.00 frames. ], tot_loss[loss=0.07905, simple_loss=0.1328, pruned_loss=0.01265, over 4179542.28 frames. ], batch size: 236, lr: 1.55e-02, grad_scale: 16.0 2024-03-15 23:41:58,378 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.attention_skip_rate, batch_count=31753.333333333332, ans=0.0 2024-03-15 23:41:58,409 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_skip_rate, batch_count=31753.333333333332, ans=0.0 2024-03-15 23:42:07,225 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer1.prob, batch_count=31786.666666666668, ans=0.125 2024-03-15 23:42:12,112 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.balancer2.prob, batch_count=31786.666666666668, ans=0.125 2024-03-15 23:42:18,394 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer2.prob, batch_count=31786.666666666668, ans=0.125 2024-03-15 23:42:53,938 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=31886.666666666668, ans=0.1 2024-03-15 23:42:55,222 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer1.prob, batch_count=31886.666666666668, ans=0.125 2024-03-15 23:43:01,154 INFO [train_char.py:689] (0/2) Epoch 19, batch 450, loss[loss=0.09304, simple_loss=0.1525, pruned_loss=0.0168, over 24016.00 frames. ], tot_loss[loss=0.08057, simple_loss=0.135, pruned_loss=0.01305, over 4326399.24 frames. ], batch size: 250, lr: 1.54e-02, grad_scale: 16.0 2024-03-15 23:43:02,638 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_skip_rate, batch_count=31920.0, ans=0.0 2024-03-15 23:43:24,544 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=31953.333333333332, ans=0.0 2024-03-15 23:43:42,029 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer1.prob, batch_count=32020.0, ans=0.125 2024-03-15 23:43:42,052 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=32020.0, ans=0.125 2024-03-15 23:43:43,218 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=32020.0, ans=0.0 2024-03-15 23:43:59,845 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.543e+01 9.840e+01 1.222e+02 1.592e+02 3.029e+02, threshold=2.443e+02, percent-clipped=4.0 2024-03-15 23:44:06,293 INFO [train_char.py:689] (0/2) Epoch 19, batch 500, loss[loss=0.09473, simple_loss=0.1578, pruned_loss=0.01585, over 24137.00 frames. ], tot_loss[loss=0.08173, simple_loss=0.137, pruned_loss=0.01325, over 4437773.62 frames. 
], batch size: 223, lr: 1.54e-02, grad_scale: 16.0 2024-03-15 23:44:15,224 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-19.pt 2024-03-15 23:45:06,791 INFO [train_char.py:689] (0/2) Epoch 20, batch 0, loss[loss=0.07975, simple_loss=0.1365, pruned_loss=0.0115, over 24395.00 frames. ], tot_loss[loss=0.07975, simple_loss=0.1365, pruned_loss=0.0115, over 24395.00 frames. ], batch size: 152, lr: 1.50e-02, grad_scale: 32.0 2024-03-15 23:45:06,792 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 23:45:14,089 INFO [zipformer.py:1858] (0/2) name=encoder.encoders.4.encoder.layers.2.self_attn_weights, attn_weights_entropy = tensor([2.0406, 2.5043, 2.5641, 2.3569], device='cuda:0') 2024-03-15 23:45:19,549 INFO [zipformer.py:1858] (0/2) name=encoder.encoders.4.encoder.layers.2.self_attn_weights, attn_weights_entropy = tensor([2.1937, 2.7430, 2.2687, 2.3139], device='cuda:0') 2024-03-15 23:45:20,706 INFO [train_char.py:721] (0/2) Epoch 20, validation: loss=0.06562, simple_loss=0.1193, pruned_loss=0.00597, over 657665.00 frames. 2024-03-15 23:45:20,707 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 23:45:33,773 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.balancer1.prob, batch_count=32110.0, ans=0.125 2024-03-15 23:45:36,890 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=512, metric=4.12 vs. limit=15.0 2024-03-15 23:46:14,799 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=32210.0, ans=0.1 2024-03-15 23:46:29,705 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=32243.333333333332, ans=0.1 2024-03-15 23:46:35,342 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=32243.333333333332, ans=0.0 2024-03-15 23:46:37,764 INFO [train_char.py:689] (0/2) Epoch 20, batch 50, loss[loss=0.07838, simple_loss=0.136, pruned_loss=0.01036, over 24328.00 frames. ], tot_loss[loss=0.07588, simple_loss=0.129, pruned_loss=0.01136, over 1085233.19 frames. ], batch size: 180, lr: 1.50e-02, grad_scale: 16.0 2024-03-15 23:46:50,283 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.scale_min, batch_count=32310.0, ans=0.2 2024-03-15 23:47:18,976 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.attention_skip_rate, batch_count=32376.666666666668, ans=0.0 2024-03-15 23:47:34,780 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.077e+01 1.038e+02 1.248e+02 1.651e+02 2.797e+02, threshold=2.495e+02, percent-clipped=4.0 2024-03-15 23:47:37,484 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=32410.0, ans=0.125 2024-03-15 23:47:48,938 INFO [train_char.py:689] (0/2) Epoch 20, batch 100, loss[loss=0.0751, simple_loss=0.1285, pruned_loss=0.01087, over 24168.00 frames. ], tot_loss[loss=0.07664, simple_loss=0.1296, pruned_loss=0.01183, over 1905158.84 frames. 
], batch size: 344, lr: 1.50e-02, grad_scale: 16.0 2024-03-15 23:47:54,440 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff3_skip_rate, batch_count=32443.333333333332, ans=0.003816666666666667 2024-03-15 23:48:26,214 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=32510.0, ans=0.1 2024-03-15 23:48:44,286 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=32543.333333333332, ans=0.0 2024-03-15 23:48:48,257 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.bypass.scale_min, batch_count=32576.666666666668, ans=0.2 2024-03-15 23:48:59,274 INFO [train_char.py:689] (0/2) Epoch 20, batch 150, loss[loss=0.09492, simple_loss=0.156, pruned_loss=0.01693, over 24163.00 frames. ], tot_loss[loss=0.07619, simple_loss=0.1286, pruned_loss=0.01189, over 2549108.02 frames. ], batch size: 251, lr: 1.49e-02, grad_scale: 16.0 2024-03-15 23:49:08,311 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module2.balancer2.prob, batch_count=32610.0, ans=0.125 2024-03-15 23:49:09,542 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.ff2_skip_rate, batch_count=32610.0, ans=0.0037804347826086958 2024-03-15 23:49:16,660 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=256, metric=4.91 vs. limit=15.0 2024-03-15 23:49:17,775 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=11.46 vs. limit=15.0 2024-03-15 23:49:30,422 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=32676.666666666668, ans=0.125 2024-03-15 23:49:35,031 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=512, metric=20.00 vs. limit=22.5 2024-03-15 23:49:40,321 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.68 vs. limit=6.0 2024-03-15 23:49:46,630 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=384, metric=21.98 vs. limit=22.5 2024-03-15 23:49:51,016 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.740e+01 9.891e+01 1.306e+02 1.570e+02 2.698e+02, threshold=2.611e+02, percent-clipped=2.0 2024-03-15 23:50:03,631 INFO [train_char.py:689] (0/2) Epoch 20, batch 200, loss[loss=0.08524, simple_loss=0.1439, pruned_loss=0.0133, over 24128.00 frames. ], tot_loss[loss=0.07589, simple_loss=0.128, pruned_loss=0.0119, over 3052660.52 frames. ], batch size: 188, lr: 1.49e-02, grad_scale: 8.0 2024-03-15 23:50:04,333 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.whiten, num_groups=1, num_channels=512, metric=6.86 vs. limit=12.0 2024-03-15 23:50:21,120 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=192, metric=7.46 vs. limit=15.0 2024-03-15 23:50:36,197 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=13.80 vs. 
limit=15.0 2024-03-15 23:50:40,161 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=144, metric=6.98 vs. limit=10.0 2024-03-15 23:50:43,694 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=384, metric=4.94 vs. limit=15.0 2024-03-15 23:50:54,535 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=32876.666666666664, ans=0.1 2024-03-15 23:50:56,971 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass.skip_rate, batch_count=32910.0, ans=0.035 2024-03-15 23:50:56,981 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer2.min_abs, batch_count=32910.0, ans=0.5 2024-03-15 23:51:00,767 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer2.prob, batch_count=32910.0, ans=0.125 2024-03-15 23:51:07,240 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module1.balancer1.max_abs, batch_count=32910.0, ans=10.0 2024-03-15 23:51:09,729 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=32943.333333333336, ans=0.125 2024-03-15 23:51:10,663 INFO [train_char.py:689] (0/2) Epoch 20, batch 250, loss[loss=0.075, simple_loss=0.1277, pruned_loss=0.01117, over 24366.00 frames. ], tot_loss[loss=0.07577, simple_loss=0.1278, pruned_loss=0.01189, over 3445012.94 frames. ], batch size: 172, lr: 1.49e-02, grad_scale: 8.0 2024-03-15 23:51:12,234 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer1.prob, batch_count=32943.333333333336, ans=0.125 2024-03-15 23:51:27,323 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=32976.666666666664, ans=0.125 2024-03-15 23:51:56,402 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer1.prob, batch_count=33043.333333333336, ans=0.125 2024-03-15 23:51:57,710 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.ff2_skip_rate, batch_count=33043.333333333336, ans=0.003686231884057971 2024-03-15 23:51:58,987 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=33043.333333333336, ans=0.125 2024-03-15 23:52:05,093 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.317e+01 9.646e+01 1.177e+02 1.466e+02 2.336e+02, threshold=2.353e+02, percent-clipped=0.0 2024-03-15 23:52:18,066 INFO [train_char.py:689] (0/2) Epoch 20, batch 300, loss[loss=0.0668, simple_loss=0.112, pruned_loss=0.01079, over 24020.00 frames. ], tot_loss[loss=0.07594, simple_loss=0.1281, pruned_loss=0.01187, over 3753524.44 frames. ], batch size: 381, lr: 1.49e-02, grad_scale: 8.0 2024-03-15 23:52:28,795 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.60 vs. 
limit=6.0 2024-03-15 23:52:49,396 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer2.prob, batch_count=33176.666666666664, ans=0.125 2024-03-15 23:52:55,827 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=11.67 vs. limit=15.0 2024-03-15 23:53:25,170 INFO [train_char.py:689] (0/2) Epoch 20, batch 350, loss[loss=0.08346, simple_loss=0.1382, pruned_loss=0.01438, over 24325.00 frames. ], tot_loss[loss=0.07696, simple_loss=0.1298, pruned_loss=0.01204, over 3994957.64 frames. ], batch size: 297, lr: 1.48e-02, grad_scale: 8.0 2024-03-15 23:53:44,035 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass.skip_rate, batch_count=33310.0, ans=0.035 2024-03-15 23:53:46,139 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.conv_module1.whiten, num_groups=1, num_channels=384, metric=3.45 vs. limit=15.0 2024-03-15 23:53:48,195 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=33310.0, ans=0.1 2024-03-15 23:54:03,427 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer1.max_abs, batch_count=33376.666666666664, ans=10.0 2024-03-15 23:54:07,526 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.self_attn2.whiten, num_groups=1, num_channels=512, metric=12.47 vs. limit=22.5 2024-03-15 23:54:15,543 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.374e+01 9.351e+01 1.091e+02 1.464e+02 2.330e+02, threshold=2.182e+02, percent-clipped=0.0 2024-03-15 23:54:18,296 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward2.hidden_balancer.prob, batch_count=33410.0, ans=0.125 2024-03-15 23:54:19,609 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.out_combiner.scale_min, batch_count=33410.0, ans=0.2 2024-03-15 23:54:27,931 INFO [train_char.py:689] (0/2) Epoch 20, batch 400, loss[loss=0.0805, simple_loss=0.1391, pruned_loss=0.01097, over 24395.00 frames. ], tot_loss[loss=0.07756, simple_loss=0.1307, pruned_loss=0.01219, over 4182224.22 frames. 
], batch size: 172, lr: 1.48e-02, grad_scale: 16.0 2024-03-15 23:54:43,712 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=33476.666666666664, ans=0.1 2024-03-15 23:55:00,450 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass.scale_min, batch_count=33510.0, ans=0.2 2024-03-15 23:55:02,919 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module1.balancer1.prob, batch_count=33510.0, ans=0.125 2024-03-15 23:55:06,743 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass.skip_rate, batch_count=33510.0, ans=0.07 2024-03-15 23:55:06,782 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module2.balancer1.prob, batch_count=33510.0, ans=0.125 2024-03-15 23:55:06,784 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.balancer1.prob, batch_count=33510.0, ans=0.125 2024-03-15 23:55:18,828 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward2.hidden_balancer.prob, batch_count=33543.333333333336, ans=0.125 2024-03-15 23:55:27,052 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.whiten, num_groups=1, num_channels=384, metric=4.53 vs. limit=12.0 2024-03-15 23:55:33,912 INFO [train_char.py:689] (0/2) Epoch 20, batch 450, loss[loss=0.0929, simple_loss=0.1529, pruned_loss=0.01645, over 24110.00 frames. ], tot_loss[loss=0.07888, simple_loss=0.1329, pruned_loss=0.01246, over 4328677.00 frames. ], batch size: 223, lr: 1.48e-02, grad_scale: 16.0 2024-03-15 23:55:53,346 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer1.prob, batch_count=33643.333333333336, ans=0.125 2024-03-15 23:56:03,515 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.ff3_skip_rate, batch_count=33676.666666666664, ans=0.0035485507246376816 2024-03-15 23:56:25,713 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.181e+01 9.140e+01 1.097e+02 1.429e+02 2.929e+02, threshold=2.193e+02, percent-clipped=4.0 2024-03-15 23:56:28,741 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=23.51 vs. limit=15.0 2024-03-15 23:56:36,350 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer1.min_positive, batch_count=33743.333333333336, ans=0.025 2024-03-15 23:56:38,421 INFO [train_char.py:689] (0/2) Epoch 20, batch 500, loss[loss=0.08987, simple_loss=0.1496, pruned_loss=0.01506, over 24205.00 frames. ], tot_loss[loss=0.08, simple_loss=0.1348, pruned_loss=0.01261, over 4440279.91 frames. ], batch size: 224, lr: 1.48e-02, grad_scale: 16.0 2024-03-15 23:56:45,382 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass.scale_min, batch_count=33776.666666666664, ans=0.2 2024-03-15 23:56:48,089 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-20.pt 2024-03-15 23:57:40,511 INFO [train_char.py:689] (0/2) Epoch 21, batch 0, loss[loss=0.09375, simple_loss=0.1588, pruned_loss=0.01437, over 23789.00 frames. ], tot_loss[loss=0.09375, simple_loss=0.1588, pruned_loss=0.01437, over 23789.00 frames. 
], batch size: 107, lr: 1.44e-02, grad_scale: 32.0 2024-03-15 23:57:40,511 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-15 23:57:54,057 INFO [train_char.py:721] (0/2) Epoch 21, validation: loss=0.06442, simple_loss=0.1177, pruned_loss=0.005578, over 657665.00 frames. 2024-03-15 23:57:54,062 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-15 23:58:10,306 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=33833.333333333336, ans=0.125 2024-03-15 23:58:21,330 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_skip_rate, batch_count=33866.666666666664, ans=0.0 2024-03-15 23:58:40,447 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass.skip_rate, batch_count=33900.0, ans=0.07 2024-03-15 23:59:01,946 INFO [train_char.py:689] (0/2) Epoch 21, batch 50, loss[loss=0.08843, simple_loss=0.151, pruned_loss=0.01293, over 24157.00 frames. ], tot_loss[loss=0.07342, simple_loss=0.1248, pruned_loss=0.01101, over 1085771.57 frames. ], batch size: 251, lr: 1.44e-02, grad_scale: 32.0 2024-03-15 23:59:07,633 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=33966.666666666664, ans=0.1 2024-03-15 23:59:07,640 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer1.prob, batch_count=33966.666666666664, ans=0.125 2024-03-15 23:59:30,861 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer2.prob, batch_count=34000.0, ans=0.125 2024-03-15 23:59:49,961 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer2.min_positive, batch_count=34066.666666666664, ans=0.05 2024-03-15 23:59:51,345 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=34066.666666666664, ans=0.1 2024-03-15 23:59:54,818 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.798e+01 9.597e+01 1.137e+02 1.446e+02 3.282e+02, threshold=2.274e+02, percent-clipped=6.0 2024-03-16 00:00:15,256 INFO [train_char.py:689] (0/2) Epoch 21, batch 100, loss[loss=0.0684, simple_loss=0.1185, pruned_loss=0.009164, over 24268.00 frames. ], tot_loss[loss=0.07533, simple_loss=0.1278, pruned_loss=0.01141, over 1913407.26 frames. ], batch size: 116, lr: 1.44e-02, grad_scale: 16.0 2024-03-16 00:00:28,630 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 00:00:47,259 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=14.33 vs. 
limit=15.0 2024-03-16 00:00:58,401 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 00:00:58,492 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module2.balancer2.prob, batch_count=34233.333333333336, ans=0.125 2024-03-16 00:01:02,186 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=34233.333333333336, ans=0.1 2024-03-16 00:01:04,873 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_skip_rate, batch_count=34233.333333333336, ans=0.0 2024-03-16 00:01:17,860 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer2.min_abs, batch_count=34266.666666666664, ans=0.5 2024-03-16 00:01:20,141 INFO [train_char.py:689] (0/2) Epoch 21, batch 150, loss[loss=0.07314, simple_loss=0.125, pruned_loss=0.01063, over 24155.00 frames. ], tot_loss[loss=0.07503, simple_loss=0.1274, pruned_loss=0.01133, over 2552561.65 frames. ], batch size: 344, lr: 1.43e-02, grad_scale: 16.0 2024-03-16 00:01:49,487 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=11.97 vs. limit=15.0 2024-03-16 00:01:50,158 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.ff3_skip_rate, batch_count=34366.666666666664, ans=0.0033985507246376825 2024-03-16 00:02:04,024 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.613e+01 9.775e+01 1.277e+02 1.595e+02 3.121e+02, threshold=2.554e+02, percent-clipped=7.0 2024-03-16 00:02:10,128 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=512, metric=6.22 vs. limit=15.0 2024-03-16 00:02:12,268 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer1.prob, batch_count=34433.333333333336, ans=0.125 2024-03-16 00:02:14,685 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.hidden_balancer.prob, batch_count=34433.333333333336, ans=0.125 2024-03-16 00:02:16,065 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module2.balancer2.min_positive, batch_count=34433.333333333336, ans=0.05 2024-03-16 00:02:25,648 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass.scale_min, batch_count=34433.333333333336, ans=0.2 2024-03-16 00:02:28,875 INFO [train_char.py:689] (0/2) Epoch 21, batch 200, loss[loss=0.0923, simple_loss=0.1552, pruned_loss=0.01471, over 24034.00 frames. ], tot_loss[loss=0.0755, simple_loss=0.1285, pruned_loss=0.01127, over 3047878.74 frames. ], batch size: 236, lr: 1.43e-02, grad_scale: 16.0 2024-03-16 00:03:12,388 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.balancer1.prob, batch_count=34566.666666666664, ans=0.125 2024-03-16 00:03:12,651 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=13.09 vs. limit=22.5 2024-03-16 00:03:36,394 INFO [train_char.py:689] (0/2) Epoch 21, batch 250, loss[loss=0.08725, simple_loss=0.1459, pruned_loss=0.01428, over 24146.00 frames. ], tot_loss[loss=0.07608, simple_loss=0.1292, pruned_loss=0.01147, over 3439517.68 frames. 
], batch size: 279, lr: 1.43e-02, grad_scale: 16.0 2024-03-16 00:03:36,682 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer1.prob, batch_count=34633.333333333336, ans=0.125 2024-03-16 00:03:49,820 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff2_skip_rate, batch_count=34666.666666666664, ans=0.003333333333333334 2024-03-16 00:04:20,356 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.173e+01 9.619e+01 1.155e+02 1.493e+02 2.796e+02, threshold=2.311e+02, percent-clipped=1.0 2024-03-16 00:04:27,104 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module1.balancer2.prob, batch_count=34766.666666666664, ans=0.125 2024-03-16 00:04:34,045 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=256, metric=10.26 vs. limit=15.0 2024-03-16 00:04:40,729 INFO [train_char.py:689] (0/2) Epoch 21, batch 300, loss[loss=0.0642, simple_loss=0.1162, pruned_loss=0.006078, over 24312.00 frames. ], tot_loss[loss=0.07587, simple_loss=0.1289, pruned_loss=0.01144, over 3747139.87 frames. ], batch size: 146, lr: 1.43e-02, grad_scale: 16.0 2024-03-16 00:05:13,310 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.nonlin_attention.balancer.prob, batch_count=34866.666666666664, ans=0.125 2024-03-16 00:05:22,616 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=256, metric=8.78 vs. limit=15.0 2024-03-16 00:05:23,661 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=34900.0, ans=0.1 2024-03-16 00:05:27,597 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=34900.0, ans=0.1 2024-03-16 00:05:34,678 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.whiten.whitening_limit, batch_count=34900.0, ans=15.0 2024-03-16 00:05:49,469 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.bypass.scale_min, batch_count=34966.666666666664, ans=0.2 2024-03-16 00:05:50,486 INFO [train_char.py:689] (0/2) Epoch 21, batch 350, loss[loss=0.0642, simple_loss=0.1145, pruned_loss=0.006968, over 24298.00 frames. ], tot_loss[loss=0.0763, simple_loss=0.1298, pruned_loss=0.01137, over 3988727.86 frames. ], batch size: 140, lr: 1.42e-02, grad_scale: 16.0 2024-03-16 00:06:00,866 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.attention_skip_rate, batch_count=34966.666666666664, ans=0.0 2024-03-16 00:06:33,005 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.808e+01 1.023e+02 1.285e+02 1.704e+02 2.901e+02, threshold=2.569e+02, percent-clipped=6.0 2024-03-16 00:06:56,256 INFO [train_char.py:689] (0/2) Epoch 21, batch 400, loss[loss=0.07584, simple_loss=0.1302, pruned_loss=0.01072, over 24381.00 frames. ], tot_loss[loss=0.07673, simple_loss=0.1305, pruned_loss=0.01149, over 4178062.08 frames. 
], batch size: 172, lr: 1.42e-02, grad_scale: 32.0 2024-03-16 00:07:01,427 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.ff3_skip_rate, batch_count=35133.333333333336, ans=0.0032318840579710142 2024-03-16 00:07:10,194 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=35166.666666666664, ans=0.0 2024-03-16 00:07:25,308 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.balancer2.prob, batch_count=35200.0, ans=0.125 2024-03-16 00:07:25,376 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=35200.0, ans=0.1 2024-03-16 00:07:44,380 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.out_combiner.scale_min, batch_count=35233.333333333336, ans=0.2 2024-03-16 00:08:01,179 INFO [train_char.py:689] (0/2) Epoch 21, batch 450, loss[loss=0.08297, simple_loss=0.1444, pruned_loss=0.01074, over 24052.00 frames. ], tot_loss[loss=0.07778, simple_loss=0.1321, pruned_loss=0.01173, over 4324938.33 frames. ], batch size: 199, lr: 1.42e-02, grad_scale: 32.0 2024-03-16 00:08:01,451 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff2_skip_rate, batch_count=35300.0, ans=0.003195652173913044 2024-03-16 00:08:01,960 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=8.07 vs. limit=10.0 2024-03-16 00:08:20,725 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=35333.333333333336, ans=0.125 2024-03-16 00:08:25,836 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.attention_skip_rate, batch_count=35366.666666666664, ans=0.0 2024-03-16 00:08:27,000 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer2.prob, batch_count=35366.666666666664, ans=0.125 2024-03-16 00:08:28,778 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=512, metric=13.62 vs. limit=22.5 2024-03-16 00:08:44,290 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.539e+01 9.378e+01 1.043e+02 1.385e+02 2.513e+02, threshold=2.085e+02, percent-clipped=0.0 2024-03-16 00:08:47,785 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=7.78 vs. limit=15.0 2024-03-16 00:09:05,572 INFO [train_char.py:689] (0/2) Epoch 21, batch 500, loss[loss=0.0868, simple_loss=0.1469, pruned_loss=0.01335, over 24103.00 frames. ], tot_loss[loss=0.0788, simple_loss=0.134, pruned_loss=0.01178, over 4437784.42 frames. ], batch size: 199, lr: 1.42e-02, grad_scale: 32.0 2024-03-16 00:09:14,558 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-21.pt 2024-03-16 00:10:07,208 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.whiten, num_groups=1, num_channels=512, metric=11.01 vs. limit=12.0 2024-03-16 00:10:07,685 INFO [train_char.py:689] (0/2) Epoch 22, batch 0, loss[loss=0.06839, simple_loss=0.1175, pruned_loss=0.009647, over 24160.00 frames. ], tot_loss[loss=0.06839, simple_loss=0.1175, pruned_loss=0.009647, over 24160.00 frames. 
], batch size: 344, lr: 1.38e-02, grad_scale: 32.0 2024-03-16 00:10:07,686 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 00:10:21,688 INFO [train_char.py:721] (0/2) Epoch 22, validation: loss=0.06405, simple_loss=0.1169, pruned_loss=0.005585, over 657665.00 frames. 2024-03-16 00:10:21,688 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 00:10:25,971 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_skip_rate, batch_count=35490.0, ans=0.0 2024-03-16 00:10:30,096 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module2.balancer1.prob, batch_count=35490.0, ans=0.125 2024-03-16 00:10:34,290 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.ff2_skip_rate, batch_count=35523.333333333336, ans=0.003147101449275362 2024-03-16 00:10:50,465 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.min_positive, batch_count=35556.666666666664, ans=0.05 2024-03-16 00:11:04,945 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.attention_skip_rate, batch_count=35590.0, ans=0.0 2024-03-16 00:11:06,303 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=35590.0, ans=0.1 2024-03-16 00:11:13,282 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass_mid.scale_min, batch_count=35590.0, ans=0.2 2024-03-16 00:11:24,127 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.conv_module2.whiten, num_groups=1, num_channels=512, metric=5.67 vs. limit=15.0 2024-03-16 00:11:33,093 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.conv_module2.whiten, num_groups=1, num_channels=512, metric=6.35 vs. limit=15.0 2024-03-16 00:11:33,677 INFO [train_char.py:689] (0/2) Epoch 22, batch 50, loss[loss=0.07538, simple_loss=0.1273, pruned_loss=0.01172, over 23792.00 frames. ], tot_loss[loss=0.07391, simple_loss=0.1258, pruned_loss=0.01101, over 1086194.96 frames. ], batch size: 107, lr: 1.38e-02, grad_scale: 32.0 2024-03-16 00:11:47,628 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=35690.0, ans=0.125 2024-03-16 00:12:09,627 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer2.prob, batch_count=35723.333333333336, ans=0.125 2024-03-16 00:12:10,726 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.629e+01 9.247e+01 1.091e+02 1.260e+02 3.544e+02, threshold=2.182e+02, percent-clipped=4.0 2024-03-16 00:12:21,198 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=35756.666666666664, ans=0.125 2024-03-16 00:12:39,195 INFO [train_char.py:689] (0/2) Epoch 22, batch 100, loss[loss=0.0743, simple_loss=0.1242, pruned_loss=0.01222, over 24122.00 frames. ], tot_loss[loss=0.07323, simple_loss=0.1254, pruned_loss=0.01052, over 1913445.33 frames. ], batch size: 362, lr: 1.38e-02, grad_scale: 16.0 2024-03-16 00:12:39,999 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=14.52 vs. 
limit=15.0 2024-03-16 00:12:42,565 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.whiten, num_groups=1, num_channels=384, metric=7.23 vs. limit=12.0 2024-03-16 00:12:44,765 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module2.balancer2.prob, batch_count=35823.333333333336, ans=0.125 2024-03-16 00:12:56,284 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.prob, batch_count=35856.666666666664, ans=0.125 2024-03-16 00:13:01,273 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=35856.666666666664, ans=0.125 2024-03-16 00:13:09,005 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.balancer_na.min_abs, batch_count=35890.0, ans=0.02 2024-03-16 00:13:35,409 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=35956.666666666664, ans=0.1 2024-03-16 00:13:35,523 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=35956.666666666664, ans=0.125 2024-03-16 00:13:41,988 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.out_combiner.scale_min, batch_count=35990.0, ans=0.2 2024-03-16 00:13:43,048 INFO [train_char.py:689] (0/2) Epoch 22, batch 150, loss[loss=0.07857, simple_loss=0.1282, pruned_loss=0.01445, over 21461.00 frames. ], tot_loss[loss=0.07394, simple_loss=0.1267, pruned_loss=0.01059, over 2554650.67 frames. ], batch size: 85, lr: 1.38e-02, grad_scale: 16.0 2024-03-16 00:14:26,678 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.506e+01 9.508e+01 1.078e+02 1.395e+02 3.027e+02, threshold=2.155e+02, percent-clipped=6.0 2024-03-16 00:14:30,836 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.hidden_balancer.prob, batch_count=36090.0, ans=0.125 2024-03-16 00:14:55,158 INFO [train_char.py:689] (0/2) Epoch 22, batch 200, loss[loss=0.08713, simple_loss=0.1503, pruned_loss=0.01199, over 24082.00 frames. ], tot_loss[loss=0.07434, simple_loss=0.1276, pruned_loss=0.01055, over 3053917.70 frames. ], batch size: 199, lr: 1.38e-02, grad_scale: 16.0 2024-03-16 00:15:28,405 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=36223.333333333336, ans=0.1 2024-03-16 00:15:30,112 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=10.70 vs. 
limit=15.0 2024-03-16 00:15:30,937 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.3.self_attn_weights, loss-sum=0.000e+00 2024-03-16 00:15:33,560 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward3.hidden_balancer.prob, batch_count=36256.666666666664, ans=0.125 2024-03-16 00:15:39,645 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=36256.666666666664, ans=0.1 2024-03-16 00:15:52,386 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=36290.0, ans=0.1 2024-03-16 00:15:58,747 INFO [train_char.py:689] (0/2) Epoch 22, batch 250, loss[loss=0.09604, simple_loss=0.1562, pruned_loss=0.01793, over 24114.00 frames. ], tot_loss[loss=0.07492, simple_loss=0.1281, pruned_loss=0.01086, over 3440636.54 frames. ], batch size: 251, lr: 1.37e-02, grad_scale: 8.0 2024-03-16 00:15:59,524 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=13.57 vs. limit=15.0 2024-03-16 00:16:19,886 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=384, metric=22.79 vs. limit=22.5 2024-03-16 00:16:26,862 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.nonlin_attention.balancer.prob, batch_count=36390.0, ans=0.125 2024-03-16 00:16:35,850 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.385e+01 9.496e+01 1.113e+02 1.323e+02 2.572e+02, threshold=2.227e+02, percent-clipped=2.0 2024-03-16 00:16:43,118 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module2.balancer1.prob, batch_count=36423.333333333336, ans=0.125 2024-03-16 00:16:53,297 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.attention_skip_rate, batch_count=36456.666666666664, ans=0.0 2024-03-16 00:17:06,964 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.scale_min, batch_count=36456.666666666664, ans=0.2 2024-03-16 00:17:09,062 INFO [train_char.py:689] (0/2) Epoch 22, batch 300, loss[loss=0.0612, simple_loss=0.1136, pruned_loss=0.004417, over 24258.00 frames. ], tot_loss[loss=0.07494, simple_loss=0.1282, pruned_loss=0.01085, over 3748714.91 frames. 
2024-03-16 00:17:30,716 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.ff3_skip_rate, batch_count=36523.333333333336, ans=0.0029297101449275364
2024-03-16 00:17:33,252 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=36556.666666666664, ans=0.1
2024-03-16 00:17:33,325 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff2_skip_rate, batch_count=36556.666666666664, ans=0.0029224637681159425
2024-03-16 00:17:38,315 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=36556.666666666664, ans=0.0
2024-03-16 00:17:43,139 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer1.prob, batch_count=36556.666666666664, ans=0.125
2024-03-16 00:17:59,965 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=12.11 vs. limit=15.0
2024-03-16 00:18:11,128 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass_mid.scale_min, batch_count=36656.666666666664, ans=0.2
2024-03-16 00:18:12,029 INFO [train_char.py:689] (0/2) Epoch 22, batch 350, loss[loss=0.08881, simple_loss=0.1519, pruned_loss=0.01284, over 24081.00 frames. ], tot_loss[loss=0.07562, simple_loss=0.1294, pruned_loss=0.01094, over 3987204.98 frames. ], batch size: 236, lr: 1.37e-02, grad_scale: 8.0
2024-03-16 00:18:12,305 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=36656.666666666664, ans=0.00290072463768116
2024-03-16 00:18:13,672 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass_mid.scale_min, batch_count=36656.666666666664, ans=0.2
2024-03-16 00:18:26,388 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=36690.0, ans=0.125
2024-03-16 00:18:28,838 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer2.prob, batch_count=36690.0, ans=0.125
2024-03-16 00:18:31,987 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=13.29 vs. limit=15.0
2024-03-16 00:18:32,648 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=36690.0, ans=0.1
2024-03-16 00:18:37,659 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=512, metric=5.12 vs. limit=15.0
2024-03-16 00:18:53,111 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.793e+01 9.479e+01 1.096e+02 1.512e+02 2.433e+02, threshold=2.193e+02, percent-clipped=6.0
2024-03-16 00:19:03,639 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=36756.666666666664, ans=0.1
2024-03-16 00:19:14,968 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.out_combiner.scale_min, batch_count=36790.0, ans=0.2
2024-03-16 00:19:19,690 INFO [train_char.py:689] (0/2) Epoch 22, batch 400, loss[loss=0.07711, simple_loss=0.1299, pruned_loss=0.01216, over 24441.00 frames. ], tot_loss[loss=0.07625, simple_loss=0.1303, pruned_loss=0.01112, over 4177032.01 frames. ], batch size: 165, lr: 1.37e-02, grad_scale: 16.0
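The scaling.py:214 lines trace ScheduledFloat values: module hyper-parameters (dropout probabilities, skip rates, balancer probabilities) that are functions of batch_count rather than constants, with ans the value currently in effect. Conceptually this is piecewise-linear interpolation over (batch_count, value) breakpoints; a small re-implementation sketch, the real class in icefall's zipformer scaling.py has more features:

```python
class ScheduledFloatSketch:
    """Piecewise-linear schedule over (batch_count, value) breakpoints."""

    def __init__(self, *points: tuple) -> None:
        # points: (batch_count, value) pairs, e.g. (0.0, 0.3), (20000.0, 0.1)
        self.points = sorted(points)

    def value_at(self, batch_count: float) -> float:
        pts = self.points
        if batch_count <= pts[0][0]:
            return pts[0][1]
        if batch_count >= pts[-1][0]:
            return pts[-1][1]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= batch_count <= x1:
                t = (batch_count - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
        raise AssertionError("unreachable for sorted breakpoints")

# A dropout annealed from 0.3 to 0.1 over the first 20k batches has long
# settled at its final value by batch_count ~= 36757, hence ans=0.1 above:
dropout_p = ScheduledFloatSketch((0.0, 0.3), (20000.0, 0.1))
assert abs(dropout_p.value_at(36756.666666666664) - 0.1) < 1e-12
```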
], tot_loss[loss=0.07625, simple_loss=0.1303, pruned_loss=0.01112, over 4177032.01 frames. ], batch size: 165, lr: 1.37e-02, grad_scale: 16.0 2024-03-16 00:19:22,557 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass_mid.scale_min, batch_count=36823.333333333336, ans=0.2 2024-03-16 00:20:00,976 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.out_combiner.scale_min, batch_count=36923.333333333336, ans=0.2 2024-03-16 00:20:10,935 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward3.hidden_balancer.prob, batch_count=36923.333333333336, ans=0.125 2024-03-16 00:20:25,853 INFO [train_char.py:689] (0/2) Epoch 22, batch 450, loss[loss=0.08774, simple_loss=0.1464, pruned_loss=0.01454, over 24122.00 frames. ], tot_loss[loss=0.07703, simple_loss=0.1313, pruned_loss=0.01139, over 4323050.90 frames. ], batch size: 279, lr: 1.37e-02, grad_scale: 16.0 2024-03-16 00:20:26,031 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff3_skip_rate, batch_count=36990.0, ans=0.0028282608695652167 2024-03-16 00:20:35,846 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 00:20:48,543 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.balancer.max_positive, batch_count=37023.333333333336, ans=0.95 2024-03-16 00:20:51,132 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer2.prob, batch_count=37056.666666666664, ans=0.125 2024-03-16 00:21:02,541 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.162e+01 9.120e+01 1.035e+02 1.225e+02 2.594e+02, threshold=2.069e+02, percent-clipped=6.0 2024-03-16 00:21:06,518 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=37090.0, ans=0.1 2024-03-16 00:21:29,365 INFO [train_char.py:689] (0/2) Epoch 22, batch 500, loss[loss=0.08318, simple_loss=0.1431, pruned_loss=0.01165, over 24114.00 frames. ], tot_loss[loss=0.07843, simple_loss=0.1338, pruned_loss=0.01153, over 4435830.87 frames. ], batch size: 223, lr: 1.36e-02, grad_scale: 16.0 2024-03-16 00:21:33,245 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=37156.666666666664, ans=0.125 2024-03-16 00:21:33,675 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=6.40 vs. limit=10.0 2024-03-16 00:21:38,546 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-22.pt 2024-03-16 00:22:27,658 INFO [train_char.py:689] (0/2) Epoch 23, batch 0, loss[loss=0.1043, simple_loss=0.1739, pruned_loss=0.01736, over 21964.00 frames. ], tot_loss[loss=0.1043, simple_loss=0.1739, pruned_loss=0.01736, over 21964.00 frames. ], batch size: 86, lr: 1.33e-02, grad_scale: 32.0 2024-03-16 00:22:27,658 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 00:22:41,531 INFO [train_char.py:721] (0/2) Epoch 23, validation: loss=0.06381, simple_loss=0.1166, pruned_loss=0.005493, over 657665.00 frames. 
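The sequence just above shows the per-epoch cadence: the epoch-22.pt checkpoint is written after batch 500, then epoch 23 opens with batch 0 and a validation pass over the fixed dev set (the same 657665.00 frames each time). A sketch of that loop, with illustrative names rather than the recipe's exact functions:

```python
import torch

def validate(model, valid_dl) -> float:
    """Average loss over the fixed dev set (657665.00 frames in this run)."""
    model.eval()
    total, batches = 0.0, 0
    with torch.no_grad():
        for batch in valid_dl:
            total += float(model(batch))   # placeholder forward
            batches += 1
    model.train()
    return total / max(batches, 1)

def run_epoch(model, optimizer, train_dl, valid_dl, exp_dir: str,
              epoch: int, batch_idx_train: int,
              valid_interval: int = 3000) -> int:
    for batch_idx, batch in enumerate(train_dl):
        if batch_idx == 0:
            validate(model, valid_dl)      # the "batch 0" validation above
        loss = model(batch)                # placeholder forward
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        batch_idx_train += 1
        if batch_idx_train % valid_interval == 0:
            validate(model, valid_dl)
    # one checkpoint per finished epoch, like epoch-22.pt above
    torch.save({"model": model.state_dict(), "epoch": epoch},
               f"{exp_dir}/epoch-{epoch}.pt")
    return batch_idx_train
```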
2024-03-16 00:22:41,532 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB
2024-03-16 00:22:44,751 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer1.prob, batch_count=37180.0, ans=0.125
2024-03-16 00:23:34,048 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=37280.0, ans=0.125
2024-03-16 00:23:40,570 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.1.self_attn_weights, loss-sum=0.000e+00
2024-03-16 00:23:57,124 INFO [train_char.py:689] (0/2) Epoch 23, batch 50, loss[loss=0.08415, simple_loss=0.1479, pruned_loss=0.0102, over 24087.00 frames. ], tot_loss[loss=0.07051, simple_loss=0.1213, pruned_loss=0.009858, over 1082624.96 frames. ], batch size: 199, lr: 1.33e-02, grad_scale: 32.0
2024-03-16 00:24:01,590 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.nonlin_attention.balancer.prob, batch_count=37346.666666666664, ans=0.125
2024-03-16 00:24:14,957 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward2.hidden_balancer.prob, batch_count=37380.0, ans=0.125
2024-03-16 00:24:18,989 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=37380.0, ans=0.1
2024-03-16 00:24:21,503 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.nonlin_attention.balancer.prob, batch_count=37380.0, ans=0.125
2024-03-16 00:24:26,278 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.505e+01 8.832e+01 1.099e+02 1.359e+02 2.976e+02, threshold=2.198e+02, percent-clipped=5.0
2024-03-16 00:25:02,316 INFO [train_char.py:689] (0/2) Epoch 23, batch 100, loss[loss=0.08135, simple_loss=0.1413, pruned_loss=0.01068, over 24147.00 frames. ], tot_loss[loss=0.07205, simple_loss=0.124, pruned_loss=0.01006, over 1911871.84 frames. ], batch size: 279, lr: 1.33e-02, grad_scale: 32.0
2024-03-16 00:25:06,390 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer1.min_positive, batch_count=37513.333333333336, ans=0.025
2024-03-16 00:25:07,546 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00
2024-03-16 00:25:20,288 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=37546.666666666664, ans=0.002707246376811595
2024-03-16 00:25:21,703 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.balancer1.prob, batch_count=37546.666666666664, ans=0.125
2024-03-16 00:26:08,507 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.balancer1.prob, batch_count=37646.666666666664, ans=0.125
2024-03-16 00:26:13,731 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=37646.666666666664, ans=0.125
2024-03-16 00:26:15,998 INFO [train_char.py:689] (0/2) Epoch 23, batch 150, loss[loss=0.07909, simple_loss=0.1346, pruned_loss=0.01177, over 20570.00 frames. ], tot_loss[loss=0.07234, simple_loss=0.1245, pruned_loss=0.01008, over 2550878.37 frames. ], batch size: 82, lr: 1.33e-02, grad_scale: 32.0
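The "Maximum memory allocated" lines report the CUDA caching allocator's high-water mark, which is why the value (25229MB) repeats once training has reached a steady state. Something along these lines produces them:

```python
import torch

def peak_memory_mb(device: torch.device = torch.device("cuda:0")) -> int:
    """High-water mark of CUDA memory actually allocated, in MB."""
    return torch.cuda.max_memory_allocated(device) // (1024 * 1024)

# e.g. logging.info(f"Maximum memory allocated so far is {peak_memory_mb()}MB")
```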
2024-03-16 00:26:26,559 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.max_abs, batch_count=37680.0, ans=10.0
2024-03-16 00:26:32,903 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer1.max_abs, batch_count=37713.333333333336, ans=10.0
2024-03-16 00:26:44,399 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.776e+01 9.534e+01 1.222e+02 1.511e+02 3.389e+02, threshold=2.445e+02, percent-clipped=7.0
2024-03-16 00:27:20,178 INFO [train_char.py:689] (0/2) Epoch 23, batch 200, loss[loss=0.07865, simple_loss=0.1411, pruned_loss=0.008096, over 24117.00 frames. ], tot_loss[loss=0.07291, simple_loss=0.1259, pruned_loss=0.009933, over 3050222.02 frames. ], batch size: 199, lr: 1.32e-02, grad_scale: 32.0
2024-03-16 00:27:29,268 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.attention_skip_rate, batch_count=37846.666666666664, ans=0.0
2024-03-16 00:27:29,864 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=7.53 vs. limit=10.0
2024-03-16 00:27:31,925 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.attention_skip_rate, batch_count=37880.0, ans=0.0
2024-03-16 00:27:57,050 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=37946.666666666664, ans=0.125
2024-03-16 00:28:04,659 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.prob, batch_count=37946.666666666664, ans=0.125
2024-03-16 00:28:23,925 INFO [train_char.py:689] (0/2) Epoch 23, batch 250, loss[loss=0.06697, simple_loss=0.1186, pruned_loss=0.007653, over 24395.00 frames. ], tot_loss[loss=0.07342, simple_loss=0.1269, pruned_loss=0.009981, over 3441845.11 frames. ], batch size: 135, lr: 1.32e-02, grad_scale: 32.0
2024-03-16 00:28:58,982 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.329e+01 1.019e+02 1.304e+02 1.708e+02 3.266e+02, threshold=2.609e+02, percent-clipped=5.0
2024-03-16 00:29:12,007 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_skip_rate, batch_count=38113.333333333336, ans=0.0
2024-03-16 00:29:21,422 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=10.81 vs. limit=15.0
2024-03-16 00:29:22,100 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_skip_rate, batch_count=38146.666666666664, ans=0.0
2024-03-16 00:29:24,739 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.attention_skip_rate, batch_count=38146.666666666664, ans=0.0
2024-03-16 00:29:34,763 INFO [train_char.py:689] (0/2) Epoch 23, batch 300, loss[loss=0.06588, simple_loss=0.12, pruned_loss=0.005908, over 24442.00 frames. ], tot_loss[loss=0.07369, simple_loss=0.1275, pruned_loss=0.009945, over 3749550.12 frames. ], batch size: 165, lr: 1.32e-02, grad_scale: 32.0
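The scaling.py:1023 Whitening lines compare a whiteness metric against a scheduled limit; when the metric exceeds the limit (as with metric=22.79 vs. limit=22.5 earlier in this section) the module penalizes the activations toward a more isotropic covariance. The exact formula lives in icefall's scaling.py; the sketch below is one reasonable metric with the properties the log implies, equal to 1.0 for perfectly white features and growing with correlation or uneven channel scales:

```python
import torch

def whitening_metric(x: torch.Tensor) -> torch.Tensor:
    """x: (num_frames, num_channels); larger return value = less white."""
    x = x - x.mean(dim=0)
    cov = (x.t() @ x) / x.shape[0]
    d = cov.shape[0]
    # Equals 1.0 iff cov is a multiple of the identity.
    return (cov ** 2).sum() / (cov.diag().mean() ** 2 * d)

white = torch.randn(10000, 384)
print(float(whitening_metric(white)))                                  # ~1.0
print(float(whitening_metric(white * torch.linspace(0.1, 3.0, 384))))  # >> 1
```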
2024-03-16 00:29:41,407 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=38180.0, ans=0.125
2024-03-16 00:29:59,500 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=12.77 vs. limit=15.0
2024-03-16 00:30:04,161 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.scale_min, batch_count=38246.666666666664, ans=0.2
2024-03-16 00:30:19,374 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer2.prob, batch_count=38280.0, ans=0.125
2024-03-16 00:30:37,487 INFO [train_char.py:689] (0/2) Epoch 23, batch 350, loss[loss=0.06438, simple_loss=0.1107, pruned_loss=0.009042, over 24263.00 frames. ], tot_loss[loss=0.07431, simple_loss=0.1283, pruned_loss=0.01017, over 3987318.61 frames. ], batch size: 134, lr: 1.32e-02, grad_scale: 32.0
2024-03-16 00:30:40,289 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.balancer2.prob, batch_count=38346.666666666664, ans=0.125
2024-03-16 00:31:00,036 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=38380.0, ans=0.0
2024-03-16 00:31:08,729 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward2.hidden_balancer.prob, batch_count=38413.333333333336, ans=0.125
2024-03-16 00:31:09,688 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.909e+01 9.004e+01 1.052e+02 1.311e+02 2.813e+02, threshold=2.104e+02, percent-clipped=1.0
2024-03-16 00:31:29,074 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer2.prob, batch_count=38446.666666666664, ans=0.125
2024-03-16 00:31:30,998 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=144, metric=8.53 vs. limit=10.0
2024-03-16 00:31:31,535 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.attention_skip_rate, batch_count=38480.0, ans=0.0
2024-03-16 00:31:45,102 INFO [train_char.py:689] (0/2) Epoch 23, batch 400, loss[loss=0.07886, simple_loss=0.1363, pruned_loss=0.01073, over 24091.00 frames. ], tot_loss[loss=0.07455, simple_loss=0.1286, pruned_loss=0.01028, over 4176581.86 frames. ], batch size: 188, lr: 1.32e-02, grad_scale: 32.0
2024-03-16 00:31:53,920 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff2_skip_rate, batch_count=38513.333333333336, ans=0.0024971014492753615
2024-03-16 00:31:56,421 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.balancer1.prob, batch_count=38546.666666666664, ans=0.125
2024-03-16 00:32:11,767 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=38580.0, ans=0.125
2024-03-16 00:32:13,059 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass_mid.scale_min, batch_count=38580.0, ans=0.2
2024-03-16 00:32:41,075 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.attention_skip_rate, batch_count=38646.666666666664, ans=0.0
2024-03-16 00:32:51,178 INFO [train_char.py:689] (0/2) Epoch 23, batch 450, loss[loss=0.08484, simple_loss=0.146, pruned_loss=0.01184, over 24059.00 frames. ], tot_loss[loss=0.0749, simple_loss=0.1293, pruned_loss=0.01027, over 4324052.41 frames. ], batch size: 236, lr: 1.31e-02, grad_scale: 32.0
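The lr column decays smoothly both within and across epochs (1.38e-02 early in epoch 22 down to 1.31e-02 by the end of epoch 23). These recipes typically drive it with the Eden scheduler, whose documented shape is sketched below; treat this as the shape of the schedule rather than a byte-exact reproduction, since the logged values also depend on how the recipe counts batches and epochs (e.g. reference-duration rescaling):

```python
def eden_lr(base_lr: float, batch: float, epoch: float,
            lr_batches: float = 7500.0, lr_epochs: float = 3.5) -> float:
    """Eden's documented form: smooth power-law decay in batch and epoch."""
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

# Monotone decay in both arguments, as seen in the log:
assert eden_lr(0.045, 38000, 23) < eden_lr(0.045, 36000, 22)
```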
2024-03-16 00:32:51,451 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff2_skip_rate, batch_count=38680.0, ans=0.002460869565217391
2024-03-16 00:32:52,721 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=38680.0, ans=0.125
2024-03-16 00:32:53,226 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=384, metric=18.93 vs. limit=22.5
2024-03-16 00:33:18,186 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.attention_skip_rate, batch_count=38746.666666666664, ans=0.0
2024-03-16 00:33:20,463 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.094e+01 9.392e+01 1.088e+02 1.385e+02 2.141e+02, threshold=2.177e+02, percent-clipped=1.0
2024-03-16 00:33:20,714 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=38746.666666666664, ans=0.125
2024-03-16 00:33:45,356 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer1.prob, batch_count=38813.333333333336, ans=0.125
2024-03-16 00:33:55,193 INFO [train_char.py:689] (0/2) Epoch 23, batch 500, loss[loss=0.08175, simple_loss=0.1432, pruned_loss=0.01013, over 24043.00 frames. ], tot_loss[loss=0.07641, simple_loss=0.1317, pruned_loss=0.01054, over 4436638.57 frames. ], batch size: 199, lr: 1.31e-02, grad_scale: 32.0
2024-03-16 00:34:04,453 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-23.pt
2024-03-16 00:34:55,266 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=19.83 vs. limit=22.5
2024-03-16 00:34:55,668 INFO [train_char.py:689] (0/2) Epoch 24, batch 0, loss[loss=0.07505, simple_loss=0.133, pruned_loss=0.008561, over 21467.00 frames. ], tot_loss[loss=0.07505, simple_loss=0.133, pruned_loss=0.008561, over 21467.00 frames. ], batch size: 85, lr: 1.28e-02, grad_scale: 32.0
2024-03-16 00:34:55,669 INFO [train_char.py:712] (0/2) Computing validation loss
2024-03-16 00:35:09,605 INFO [train_char.py:721] (0/2) Epoch 24, validation: loss=0.06293, simple_loss=0.1149, pruned_loss=0.00547, over 657665.00 frames.
2024-03-16 00:35:09,606 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB
2024-03-16 00:35:31,084 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=38903.333333333336, ans=0.125
2024-03-16 00:35:34,097 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward2.out_whiten.whitening_limit, batch_count=38903.333333333336, ans=15.0
2024-03-16 00:35:34,212 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=15.42 vs. limit=15.0
2024-03-16 00:35:37,702 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.skip_rate, batch_count=38936.666666666664, ans=0.04949747468305833
2024-03-16 00:35:43,426 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=384, metric=7.83 vs. limit=15.0
2024-03-16 00:36:00,556 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_skip_rate, batch_count=38970.0, ans=0.0
2024-03-16 00:36:05,820 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.ff2_skip_rate, batch_count=39003.333333333336, ans=0.0023905797101449276
2024-03-16 00:36:10,970 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer1.prob, batch_count=39003.333333333336, ans=0.125
2024-03-16 00:36:18,596 INFO [train_char.py:689] (0/2) Epoch 24, batch 50, loss[loss=0.07531, simple_loss=0.1275, pruned_loss=0.01157, over 24181.00 frames. ], tot_loss[loss=0.07145, simple_loss=0.1243, pruned_loss=0.009314, over 1081808.71 frames. ], batch size: 311, lr: 1.28e-02, grad_scale: 32.0
2024-03-16 00:36:37,855 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.ff3_skip_rate, batch_count=39070.0, ans=0.0023760869565217398
2024-03-16 00:36:38,747 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.636e+01 9.591e+01 1.123e+02 1.366e+02 2.605e+02, threshold=2.246e+02, percent-clipped=2.0
2024-03-16 00:36:40,563 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00
2024-03-16 00:37:19,784 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer1.min_positive, batch_count=39136.666666666664, ans=0.025
2024-03-16 00:37:35,097 INFO [train_char.py:689] (0/2) Epoch 24, batch 100, loss[loss=0.08703, simple_loss=0.1489, pruned_loss=0.01257, over 24143.00 frames. ], tot_loss[loss=0.07227, simple_loss=0.1259, pruned_loss=0.009316, over 1912691.16 frames. ], batch size: 279, lr: 1.28e-02, grad_scale: 32.0
2024-03-16 00:37:44,262 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer1.prob, batch_count=39203.333333333336, ans=0.125
2024-03-16 00:37:52,490 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=9.48 vs. limit=15.0
2024-03-16 00:38:23,142 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=39303.333333333336, ans=0.1
2024-03-16 00:38:40,223 INFO [train_char.py:689] (0/2) Epoch 24, batch 150, loss[loss=0.06148, simple_loss=0.1109, pruned_loss=0.006044, over 24405.00 frames. ], tot_loss[loss=0.07254, simple_loss=0.1264, pruned_loss=0.009354, over 2554763.14 frames. ], batch size: 129, lr: 1.28e-02, grad_scale: 32.0
2024-03-16 00:38:40,929 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=512, metric=4.21 vs. limit=15.0
2024-03-16 00:38:42,883 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.nonlin_attention.balancer.prob, batch_count=39370.0, ans=0.125
2024-03-16 00:38:59,616 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.918e+01 9.440e+01 1.166e+02 1.727e+02 3.008e+02, threshold=2.332e+02, percent-clipped=10.0
2024-03-16 00:39:18,299 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=12.93 vs. limit=15.0
2024-03-16 00:39:19,042 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=39470.0, ans=0.125
2024-03-16 00:39:26,806 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=39470.0, ans=0.125
2024-03-16 00:39:26,906 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.bypass.scale_min, batch_count=39470.0, ans=0.2
2024-03-16 00:39:27,453 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=6.40 vs. limit=10.0
2024-03-16 00:39:40,022 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=39503.333333333336, ans=0.1
2024-03-16 00:39:44,691 INFO [train_char.py:689] (0/2) Epoch 24, batch 200, loss[loss=0.07304, simple_loss=0.121, pruned_loss=0.01255, over 24165.00 frames. ], tot_loss[loss=0.07295, simple_loss=0.1266, pruned_loss=0.009638, over 3054092.60 frames. ], batch size: 344, lr: 1.28e-02, grad_scale: 32.0
2024-03-16 00:40:42,276 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=12.91 vs. limit=15.0
2024-03-16 00:40:56,976 INFO [train_char.py:689] (0/2) Epoch 24, batch 250, loss[loss=0.08416, simple_loss=0.1482, pruned_loss=0.01006, over 24198.00 frames. ], tot_loss[loss=0.0728, simple_loss=0.1262, pruned_loss=0.009707, over 3439090.47 frames. ], batch size: 212, lr: 1.27e-02, grad_scale: 32.0
2024-03-16 00:41:07,934 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=512, metric=5.08 vs. limit=15.0
2024-03-16 00:41:15,993 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.764e+01 9.455e+01 1.157e+02 1.660e+02 2.705e+02, threshold=2.315e+02, percent-clipped=5.0
2024-03-16 00:41:30,641 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=39770.0, ans=0.125
2024-03-16 00:42:00,747 INFO [train_char.py:689] (0/2) Epoch 24, batch 300, loss[loss=0.07729, simple_loss=0.1345, pruned_loss=0.01002, over 24204.00 frames. ], tot_loss[loss=0.07333, simple_loss=0.127, pruned_loss=0.009825, over 3746970.82 frames. ], batch size: 311, lr: 1.27e-02, grad_scale: 32.0
2024-03-16 00:42:28,128 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=39936.666666666664, ans=0.1
2024-03-16 00:42:45,538 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=39970.0, ans=0.002180434782608697
2024-03-16 00:42:51,726 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer1.prob, batch_count=39970.0, ans=0.125
2024-03-16 00:42:54,459 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/checkpoint-12000.pt
2024-03-16 00:42:58,702 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=40003.333333333336, ans=0.0
2024-03-16 00:43:02,338 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_skip_rate, batch_count=40003.333333333336, ans=0.0
2024-03-16 00:43:03,976 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=512, metric=15.04 vs. limit=22.5
2024-03-16 00:43:04,736 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.balancer2.prob, batch_count=40003.333333333336, ans=0.125
2024-03-16 00:43:06,054 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.balancer1.prob, batch_count=40003.333333333336, ans=0.125
2024-03-16 00:43:11,851 INFO [train_char.py:689] (0/2) Epoch 24, batch 350, loss[loss=0.06391, simple_loss=0.1066, pruned_loss=0.01059, over 23961.00 frames. ], tot_loss[loss=0.07253, simple_loss=0.1258, pruned_loss=0.009632, over 3990179.70 frames. ], batch size: 407, lr: 1.27e-02, grad_scale: 32.0
2024-03-16 00:43:15,859 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=40036.666666666664, ans=0.125
2024-03-16 00:43:28,476 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=40070.0, ans=0.1
2024-03-16 00:43:30,645 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.130e+01 9.112e+01 1.091e+02 1.543e+02 3.899e+02, threshold=2.182e+02, percent-clipped=9.0
2024-03-16 00:43:35,831 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=40103.333333333336, ans=0.125
2024-03-16 00:43:40,959 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer1.prob, batch_count=40103.333333333336, ans=0.125
2024-03-16 00:43:52,271 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.bypass_mid.scale_min, batch_count=40136.666666666664, ans=0.2
2024-03-16 00:43:59,889 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.balancer2.prob, batch_count=40136.666666666664, ans=0.125
2024-03-16 00:44:10,699 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.whiten, num_groups=1, num_channels=192, metric=4.29 vs. limit=12.0
2024-03-16 00:44:14,528 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=192, metric=9.94 vs. limit=15.0
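checkpoint-12000.pt above is a rolling checkpoint keyed by the global batch counter, written every fixed number of training batches in addition to the per-epoch files. A minimal sketch of that rule, with illustrative names:

```python
import torch

def maybe_save_batch_checkpoint(model, optimizer, exp_dir: str,
                                batch_idx_train: int,
                                save_every_n: int) -> None:
    """Write checkpoint-<N>.pt whenever the batch counter hits a multiple."""
    if batch_idx_train > 0 and batch_idx_train % save_every_n == 0:
        torch.save(
            {
                "model": model.state_dict(),
                "optimizer": optimizer.state_dict(),
                "batch_idx_train": batch_idx_train,
            },
            f"{exp_dir}/checkpoint-{batch_idx_train}.pt",
        )
```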
2024-03-16 00:44:14,805 INFO [train_char.py:689] (0/2) Epoch 24, batch 400, loss[loss=0.06788, simple_loss=0.1138, pruned_loss=0.01098, over 24174.00 frames. ], tot_loss[loss=0.07243, simple_loss=0.1256, pruned_loss=0.009625, over 4178286.69 frames. ], batch size: 363, lr: 1.27e-02, grad_scale: 32.0
2024-03-16 00:44:26,891 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.balancer1.prob, batch_count=40203.333333333336, ans=0.125
2024-03-16 00:44:40,717 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.ff2_skip_rate, batch_count=40270.0, ans=0.002115217391304349
2024-03-16 00:44:46,527 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=192, metric=6.42 vs. limit=15.0
2024-03-16 00:44:56,783 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.prob, batch_count=40303.333333333336, ans=0.125
2024-03-16 00:44:57,842 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.attention_skip_rate, batch_count=40303.333333333336, ans=0.0
2024-03-16 00:45:01,884 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module2.balancer2.prob, batch_count=40303.333333333336, ans=0.125
2024-03-16 00:45:08,191 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.conv.2.prob, batch_count=40336.666666666664, ans=0.125
2024-03-16 00:45:19,290 INFO [train_char.py:689] (0/2) Epoch 24, batch 450, loss[loss=0.08383, simple_loss=0.1485, pruned_loss=0.009566, over 24104.00 frames. ], tot_loss[loss=0.07332, simple_loss=0.1271, pruned_loss=0.009756, over 4324827.02 frames. ], batch size: 199, lr: 1.27e-02, grad_scale: 32.0
2024-03-16 00:45:38,670 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn1.whiten.whitening_limit, batch_count=40403.333333333336, ans=22.5
2024-03-16 00:45:40,471 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.338e+01 9.125e+01 1.057e+02 1.365e+02 2.938e+02, threshold=2.113e+02, percent-clipped=6.0
2024-03-16 00:46:04,503 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=40470.0, ans=0.1
2024-03-16 00:46:25,754 INFO [train_char.py:689] (0/2) Epoch 24, batch 500, loss[loss=0.08452, simple_loss=0.1424, pruned_loss=0.01331, over 24134.00 frames. ], tot_loss[loss=0.07491, simple_loss=0.1298, pruned_loss=0.01001, over 4436096.18 frames. ], batch size: 251, lr: 1.26e-02, grad_scale: 32.0
2024-03-16 00:46:25,989 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=40536.666666666664, ans=0.0
2024-03-16 00:46:34,642 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-24.pt
2024-03-16 00:47:26,531 INFO [train_char.py:689] (0/2) Epoch 25, batch 0, loss[loss=0.07205, simple_loss=0.1257, pruned_loss=0.009199, over 24301.00 frames. ], tot_loss[loss=0.07205, simple_loss=0.1257, pruned_loss=0.009199, over 24301.00 frames. ], batch size: 146, lr: 1.24e-02, grad_scale: 32.0
2024-03-16 00:47:26,532 INFO [train_char.py:712] (0/2) Computing validation loss
2024-03-16 00:47:40,440 INFO [train_char.py:721] (0/2) Epoch 25, validation: loss=0.06241, simple_loss=0.1148, pruned_loss=0.005015, over 657665.00 frames.
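The "batch size" field is the number of cuts in the batch, and it swings widely (82 to 407 across this section) because the bucketing sampler packs each batch up to a roughly constant total duration rather than a constant count: batches of short utterances hold many cuts, batches of long ones few. A toy version of that packing rule; the real DynamicBucketingSampler in lhotse also buckets by length and shuffles:

```python
def pack_by_duration(durations_sec, max_duration: float = 1000.0):
    """Yield batches (index lists) whose summed duration fits the budget."""
    batch, total = [], 0.0
    for i, dur in enumerate(durations_sec):
        if batch and total + dur > max_duration:
            yield batch
            batch, total = [], 0.0
        batch.append(i)
        total += dur
    if batch:
        yield batch

print(len(next(pack_by_duration([3.0] * 500))))   # 333 short cuts per batch
print(len(next(pack_by_duration([12.0] * 500))))  # 83 long cuts per batch
```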
2024-03-16 00:47:40,440 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB
2024-03-16 00:48:20,499 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=40660.0, ans=0.1
2024-03-16 00:48:25,044 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer1.prob, batch_count=40660.0, ans=0.125
2024-03-16 00:48:30,480 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=40660.0, ans=0.1
2024-03-16 00:48:47,655 INFO [train_char.py:689] (0/2) Epoch 25, batch 50, loss[loss=0.07141, simple_loss=0.1232, pruned_loss=0.009785, over 21567.00 frames. ], tot_loss[loss=0.07059, simple_loss=0.1232, pruned_loss=0.008998, over 1077815.22 frames. ], batch size: 86, lr: 1.24e-02, grad_scale: 32.0
2024-03-16 00:49:03,486 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.592e+01 8.746e+01 1.014e+02 1.328e+02 3.091e+02, threshold=2.027e+02, percent-clipped=5.0
2024-03-16 00:49:11,795 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=40760.0, ans=0.0
2024-03-16 00:49:40,824 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer2.prob, batch_count=40826.666666666664, ans=0.125
2024-03-16 00:49:58,912 INFO [train_char.py:689] (0/2) Epoch 25, batch 100, loss[loss=0.07187, simple_loss=0.1237, pruned_loss=0.01, over 24370.00 frames. ], tot_loss[loss=0.07121, simple_loss=0.1238, pruned_loss=0.009301, over 1903923.63 frames. ], batch size: 152, lr: 1.23e-02, grad_scale: 32.0
2024-03-16 00:50:30,320 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass_mid.scale_min, batch_count=40960.0, ans=0.2
2024-03-16 00:50:36,660 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.balancer.min_positive, batch_count=40993.333333333336, ans=0.05
2024-03-16 00:51:03,086 INFO [train_char.py:689] (0/2) Epoch 25, batch 150, loss[loss=0.06389, simple_loss=0.1132, pruned_loss=0.007309, over 24312.00 frames. ], tot_loss[loss=0.07036, simple_loss=0.1229, pruned_loss=0.008909, over 2552591.58 frames. ], batch size: 146, lr: 1.23e-02, grad_scale: 32.0
2024-03-16 00:51:13,474 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.183e+01 8.559e+01 1.034e+02 1.250e+02 2.777e+02, threshold=2.067e+02, percent-clipped=6.0
2024-03-16 00:51:33,142 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00
2024-03-16 00:51:33,143 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=41126.666666666664, ans=0.1
2024-03-16 00:51:50,737 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=256, metric=7.73 vs. limit=15.0
2024-03-16 00:51:53,981 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.2.self_attn_weights, loss-sum=0.000e+00
2024-03-16 00:52:04,068 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=41193.333333333336, ans=0.125
2024-03-16 00:52:10,402 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=41193.333333333336, ans=0.125
2024-03-16 00:52:15,440 INFO [train_char.py:689] (0/2) Epoch 25, batch 200, loss[loss=0.06437, simple_loss=0.1143, pruned_loss=0.007222, over 24271.00 frames. ], tot_loss[loss=0.07133, simple_loss=0.1246, pruned_loss=0.00904, over 3052082.14 frames. ], batch size: 146, lr: 1.23e-02, grad_scale: 32.0
2024-03-16 00:52:15,764 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.2.self_attn_weights, loss-sum=0.000e+00
2024-03-16 00:52:16,172 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.self_attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=5.87 vs. limit=6.0
2024-03-16 00:52:32,149 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.nonlin_attention.balancer.max_positive, batch_count=41260.0, ans=0.95
2024-03-16 00:52:56,388 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff2_skip_rate, batch_count=41326.666666666664, ans=0.0018855072463768129
2024-03-16 00:53:00,314 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.balancer_na.min_abs, batch_count=41326.666666666664, ans=0.02
2024-03-16 00:53:02,959 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer2.prob, batch_count=41326.666666666664, ans=0.125
2024-03-16 00:53:19,645 INFO [train_char.py:689] (0/2) Epoch 25, batch 250, loss[loss=0.06527, simple_loss=0.1178, pruned_loss=0.006359, over 24299.00 frames. ], tot_loss[loss=0.07173, simple_loss=0.1255, pruned_loss=0.008968, over 3444278.86 frames. ], batch size: 140, lr: 1.23e-02, grad_scale: 32.0
2024-03-16 00:53:29,919 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.523e+01 9.981e+01 1.281e+02 1.798e+02 3.331e+02, threshold=2.561e+02, percent-clipped=15.0
2024-03-16 00:53:36,510 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass_mid.scale_min, batch_count=41426.666666666664, ans=0.2
2024-03-16 00:53:36,573 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.attention_skip_rate, batch_count=41426.666666666664, ans=0.0
2024-03-16 00:54:11,125 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.bypass.skip_rate, batch_count=41526.666666666664, ans=0.07
2024-03-16 00:54:14,758 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass.scale_min, batch_count=41526.666666666664, ans=0.2
2024-03-16 00:54:17,356 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=41526.666666666664, ans=0.0
2024-03-16 00:54:17,381 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward2.hidden_balancer.prob, batch_count=41526.666666666664, ans=0.125
2024-03-16 00:54:27,373 INFO [train_char.py:689] (0/2) Epoch 25, batch 300, loss[loss=0.06368, simple_loss=0.1143, pruned_loss=0.006525, over 24196.00 frames. ], tot_loss[loss=0.07216, simple_loss=0.1262, pruned_loss=0.009044, over 3751764.34 frames. ], batch size: 122, lr: 1.23e-02, grad_scale: 32.0
2024-03-16 00:55:13,907 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module2.balancer1.min_positive, batch_count=41660.0, ans=0.025
2024-03-16 00:55:16,319 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer2.prob, batch_count=41660.0, ans=0.125
2024-03-16 00:55:20,338 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.scale_min, batch_count=41693.333333333336, ans=0.2
2024-03-16 00:55:27,117 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.self_attn2.whiten, num_groups=1, num_channels=512, metric=13.29 vs. limit=22.5
2024-03-16 00:55:29,860 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.conv_module1.whiten, num_groups=1, num_channels=192, metric=9.45 vs. limit=15.0
2024-03-16 00:55:30,465 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=41693.333333333336, ans=0.1
2024-03-16 00:55:33,934 INFO [train_char.py:689] (0/2) Epoch 25, batch 350, loss[loss=0.08435, simple_loss=0.1488, pruned_loss=0.00994, over 24151.00 frames. ], tot_loss[loss=0.07248, simple_loss=0.1267, pruned_loss=0.009147, over 3994206.43 frames. ], batch size: 223, lr: 1.23e-02, grad_scale: 32.0
2024-03-16 00:55:35,739 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=13.25 vs. limit=15.0
2024-03-16 00:55:43,449 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=5.69 vs. limit=6.0
2024-03-16 00:55:43,857 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.725e+01 9.360e+01 1.127e+02 1.468e+02 2.530e+02, threshold=2.255e+02, percent-clipped=0.0
2024-03-16 00:56:00,763 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=41793.333333333336, ans=0.1
2024-03-16 00:56:14,648 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=41826.666666666664, ans=0.125
2024-03-16 00:56:27,900 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module1.balancer1.prob, batch_count=41860.0, ans=0.125
2024-03-16 00:56:27,906 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=41860.0, ans=0.0
2024-03-16 00:56:37,320 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00
2024-03-16 00:56:40,668 INFO [train_char.py:689] (0/2) Epoch 25, batch 400, loss[loss=0.07812, simple_loss=0.1384, pruned_loss=0.008904, over 24153.00 frames. ], tot_loss[loss=0.07331, simple_loss=0.1279, pruned_loss=0.009379, over 4180166.79 frames. ], batch size: 188, lr: 1.22e-02, grad_scale: 32.0
2024-03-16 00:56:40,883 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward2.hidden_balancer.prob, batch_count=41893.333333333336, ans=0.125
2024-03-16 00:56:52,094 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module1.balancer2.prob, batch_count=41926.666666666664, ans=0.125
2024-03-16 00:57:09,585 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=41960.0, ans=0.125
2024-03-16 00:57:14,155 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=8.22 vs. limit=15.0
2024-03-16 00:57:15,977 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass_mid.scale_min, batch_count=41960.0, ans=0.2
2024-03-16 00:57:20,393 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_skip_rate, batch_count=41993.333333333336, ans=0.0
2024-03-16 00:57:25,441 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module2.balancer2.prob, batch_count=41993.333333333336, ans=0.125
2024-03-16 00:57:28,129 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module1.balancer2.prob, batch_count=41993.333333333336, ans=0.125
2024-03-16 00:57:32,021 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=42026.666666666664, ans=0.1
2024-03-16 00:57:35,912 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=42026.666666666664, ans=0.0
2024-03-16 00:57:38,347 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.skip_rate, batch_count=42026.666666666664, ans=0.035
2024-03-16 00:57:42,793 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=512, metric=4.80 vs. limit=15.0
2024-03-16 00:57:46,908 INFO [train_char.py:689] (0/2) Epoch 25, batch 450, loss[loss=0.07839, simple_loss=0.1392, pruned_loss=0.008798, over 24235.00 frames. ], tot_loss[loss=0.07391, simple_loss=0.1289, pruned_loss=0.009466, over 4325617.13 frames. ], batch size: 212, lr: 1.22e-02, grad_scale: 32.0
2024-03-16 00:57:55,840 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=42060.0, ans=0.0
2024-03-16 00:57:55,961 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer2.prob, batch_count=42060.0, ans=0.125
2024-03-16 00:57:56,973 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.340e+01 9.649e+01 1.189e+02 1.567e+02 2.547e+02, threshold=2.378e+02, percent-clipped=5.0
2024-03-16 00:58:01,117 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00
2024-03-16 00:58:03,584 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=42093.333333333336, ans=0.0017188405797101455
2024-03-16 00:58:03,626 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.balancer2.prob, batch_count=42093.333333333336, ans=0.125
2024-03-16 00:58:05,384 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=11.59 vs. limit=15.0
2024-03-16 00:58:40,007 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.balancer1.prob, batch_count=42193.333333333336, ans=0.125
2024-03-16 00:58:50,815 INFO [train_char.py:689] (0/2) Epoch 25, batch 500, loss[loss=0.07466, simple_loss=0.1318, pruned_loss=0.008758, over 24113.00 frames. ], tot_loss[loss=0.07433, simple_loss=0.1296, pruned_loss=0.00954, over 4438406.83 frames. ], batch size: 279, lr: 1.22e-02, grad_scale: 32.0
2024-03-16 00:59:00,095 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-25.pt
2024-03-16 00:59:51,608 INFO [train_char.py:689] (0/2) Epoch 26, batch 0, loss[loss=0.08124, simple_loss=0.1372, pruned_loss=0.01262, over 24092.00 frames. ], tot_loss[loss=0.08124, simple_loss=0.1372, pruned_loss=0.01262, over 24092.00 frames. ], batch size: 236, lr: 1.20e-02, grad_scale: 32.0
2024-03-16 00:59:51,609 INFO [train_char.py:712] (0/2) Computing validation loss
2024-03-16 01:00:05,300 INFO [train_char.py:721] (0/2) Epoch 26, validation: loss=0.06222, simple_loss=0.1142, pruned_loss=0.005137, over 657665.00 frames.
2024-03-16 01:00:05,301 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB
2024-03-16 01:00:09,705 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer2.prob, batch_count=42250.0, ans=0.125
2024-03-16 01:00:16,537 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.ff3_skip_rate, batch_count=42250.0, ans=0.0016847826086956522
2024-03-16 01:00:23,080 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.ff2_skip_rate, batch_count=42283.333333333336, ans=0.0016775362318840574
2024-03-16 01:00:23,565 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=13.54 vs. limit=15.0
2024-03-16 01:00:45,341 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.ff3_skip_rate, batch_count=42316.666666666664, ans=0.0016702898550724644
2024-03-16 01:01:22,068 INFO [train_char.py:689] (0/2) Epoch 26, batch 50, loss[loss=0.06182, simple_loss=0.1088, pruned_loss=0.007435, over 24039.00 frames. ], tot_loss[loss=0.07016, simple_loss=0.1232, pruned_loss=0.008553, over 1089617.84 frames. ], batch size: 361, lr: 1.19e-02, grad_scale: 32.0
2024-03-16 01:01:23,442 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.542e+01 9.054e+01 1.032e+02 1.316e+02 2.856e+02, threshold=2.065e+02, percent-clipped=1.0
2024-03-16 01:01:31,935 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=42416.666666666664, ans=0.125
2024-03-16 01:01:34,870 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=42450.0, ans=0.1
2024-03-16 01:01:34,892 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass.skip_rate, batch_count=42450.0, ans=0.04949747468305833
2024-03-16 01:02:27,626 INFO [train_char.py:689] (0/2) Epoch 26, batch 100, loss[loss=0.08078, simple_loss=0.1425, pruned_loss=0.009547, over 24284.00 frames. ], tot_loss[loss=0.07089, simple_loss=0.125, pruned_loss=0.008404, over 1921913.94 frames. ], batch size: 212, lr: 1.19e-02, grad_scale: 32.0
2024-03-16 01:02:49,092 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=11.96 vs. limit=15.0
2024-03-16 01:03:34,750 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=16.76 vs. limit=22.5
2024-03-16 01:03:37,629 INFO [train_char.py:689] (0/2) Epoch 26, batch 150, loss[loss=0.06391, simple_loss=0.1122, pruned_loss=0.007829, over 24257.00 frames. ], tot_loss[loss=0.0695, simple_loss=0.1223, pruned_loss=0.008367, over 2564152.50 frames. ], batch size: 134, lr: 1.19e-02, grad_scale: 32.0
2024-03-16 01:03:38,868 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.214e+01 8.616e+01 1.049e+02 1.357e+02 3.119e+02, threshold=2.097e+02, percent-clipped=5.0
2024-03-16 01:03:48,679 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=14.10 vs. limit=22.5
2024-03-16 01:03:58,418 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=42783.333333333336, ans=0.1
2024-03-16 01:04:14,133 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=42816.666666666664, ans=0.125
2024-03-16 01:04:41,054 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.scale_min, batch_count=42883.333333333336, ans=0.2
2024-03-16 01:04:47,234 INFO [train_char.py:689] (0/2) Epoch 26, batch 200, loss[loss=0.0707, simple_loss=0.1217, pruned_loss=0.009868, over 24405.00 frames. ], tot_loss[loss=0.06987, simple_loss=0.1224, pruned_loss=0.008661, over 3062845.40 frames. ], batch size: 152, lr: 1.19e-02, grad_scale: 64.0
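The grad_scale column doubling from 32.0 to 64.0 at batch 200 is fp16 loss scaling at work: after a long enough run of overflow-free steps the scaler doubles its scale, and it halves when gradients overflow. Standard PyTorch AMP usage, shown here as a stand-in for the recipe's own training step:

```python
import torch

scaler = torch.cuda.amp.GradScaler(init_scale=32.0, growth_interval=2000)

def amp_step(model, optimizer, batch) -> float:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(batch)              # placeholder forward
    scaler.scale(loss).backward()
    scaler.step(optimizer)               # skipped internally on overflow
    scaler.update()                      # the scale grows/shrinks here
    return scaler.get_scale()
```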
2024-03-16 01:05:10,724 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=42950.0, ans=0.0
2024-03-16 01:05:28,233 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.attention_skip_rate, batch_count=43016.666666666664, ans=0.0
2024-03-16 01:05:34,617 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.attention_skip_rate, batch_count=43016.666666666664, ans=0.0
2024-03-16 01:05:54,188 INFO [train_char.py:689] (0/2) Epoch 26, batch 250, loss[loss=0.07259, simple_loss=0.1223, pruned_loss=0.01145, over 24159.00 frames. ], tot_loss[loss=0.07029, simple_loss=0.1235, pruned_loss=0.008538, over 3453761.76 frames. ], batch size: 344, lr: 1.19e-02, grad_scale: 64.0
2024-03-16 01:05:55,332 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.181e+01 9.839e+01 1.170e+02 1.591e+02 2.994e+02, threshold=2.341e+02, percent-clipped=13.0
2024-03-16 01:05:56,887 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=43083.333333333336, ans=0.1
2024-03-16 01:05:58,175 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer1.prob, batch_count=43083.333333333336, ans=0.125
2024-03-16 01:06:42,005 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward2.hidden_balancer.prob, batch_count=43183.333333333336, ans=0.125
2024-03-16 01:07:01,063 INFO [train_char.py:689] (0/2) Epoch 26, batch 300, loss[loss=0.06318, simple_loss=0.1141, pruned_loss=0.006129, over 24227.00 frames. ], tot_loss[loss=0.07127, simple_loss=0.1253, pruned_loss=0.008636, over 3755837.04 frames. ], batch size: 122, lr: 1.19e-02, grad_scale: 64.0
2024-03-16 01:07:05,250 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.hidden_balancer.prob, batch_count=43250.0, ans=0.125
2024-03-16 01:07:15,745 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.ff2_skip_rate, batch_count=43283.333333333336, ans=0.0014601449275362319
2024-03-16 01:07:34,289 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=7.28 vs. limit=15.0
2024-03-16 01:07:56,980 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.attention_skip_rate, batch_count=43383.333333333336, ans=0.0
2024-03-16 01:08:03,534 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=43383.333333333336, ans=0.125
2024-03-16 01:08:08,324 INFO [train_char.py:689] (0/2) Epoch 26, batch 350, loss[loss=0.06697, simple_loss=0.116, pruned_loss=0.008947, over 24340.00 frames. ], tot_loss[loss=0.07132, simple_loss=0.1255, pruned_loss=0.008586, over 3990239.59 frames. ], batch size: 158, lr: 1.18e-02, grad_scale: 64.0
2024-03-16 01:08:09,592 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.708e+01 9.030e+01 1.152e+02 1.439e+02 3.027e+02, threshold=2.303e+02, percent-clipped=2.0
2024-03-16 01:08:12,446 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=43416.666666666664, ans=0.0
2024-03-16 01:08:14,697 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=43416.666666666664, ans=0.1
2024-03-16 01:08:14,729 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.attention_skip_rate, batch_count=43416.666666666664, ans=0.0
2024-03-16 01:08:35,597 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.self_attn1.whiten, num_groups=1, num_channels=192, metric=13.81 vs. limit=22.5
2024-03-16 01:08:45,797 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=43483.333333333336, ans=0.0
2024-03-16 01:08:53,865 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.whiten, num_groups=1, num_channels=512, metric=7.73 vs. limit=12.0
2024-03-16 01:09:13,258 INFO [train_char.py:689] (0/2) Epoch 26, batch 400, loss[loss=0.07368, simple_loss=0.1304, pruned_loss=0.008452, over 24325.00 frames. ], tot_loss[loss=0.0714, simple_loss=0.1255, pruned_loss=0.008634, over 4179082.02 frames. ], batch size: 180, lr: 1.18e-02, grad_scale: 64.0
2024-03-16 01:09:19,681 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass_mid.scale_min, batch_count=43583.333333333336, ans=0.2
2024-03-16 01:09:41,899 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=14.12 vs. limit=15.0
2024-03-16 01:09:41,925 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=13.24 vs. limit=15.0
2024-03-16 01:10:02,250 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=43683.333333333336, ans=0.1
2024-03-16 01:10:18,221 INFO [train_char.py:689] (0/2) Epoch 26, batch 450, loss[loss=0.08231, simple_loss=0.1432, pruned_loss=0.01071, over 24275.00 frames. ], tot_loss[loss=0.07232, simple_loss=0.1272, pruned_loss=0.008731, over 4324810.25 frames. ], batch size: 212, lr: 1.18e-02, grad_scale: 64.0
2024-03-16 01:10:19,472 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.036e+01 9.228e+01 1.049e+02 1.301e+02 2.571e+02, threshold=2.097e+02, percent-clipped=3.0
2024-03-16 01:10:20,208 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=13.05 vs. limit=15.0
limit=15.0 2024-03-16 01:10:23,675 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module2.balancer1.min_positive, batch_count=43750.0, ans=0.025 2024-03-16 01:10:25,875 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module2.balancer2.prob, batch_count=43750.0, ans=0.125 2024-03-16 01:11:16,749 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.bypass.skip_rate, batch_count=43883.333333333336, ans=0.07 2024-03-16 01:11:21,378 INFO [train_char.py:689] (0/2) Epoch 26, batch 500, loss[loss=0.07904, simple_loss=0.1371, pruned_loss=0.01047, over 24087.00 frames. ], tot_loss[loss=0.07357, simple_loss=0.1292, pruned_loss=0.008972, over 4437599.85 frames. ], batch size: 236, lr: 1.18e-02, grad_scale: 64.0 2024-03-16 01:11:22,362 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.conv_module1.whiten, num_groups=1, num_channels=192, metric=5.86 vs. limit=15.0 2024-03-16 01:11:28,585 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=3.92 vs. limit=15.0 2024-03-16 01:11:30,641 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-26.pt 2024-03-16 01:12:22,592 INFO [train_char.py:689] (0/2) Epoch 27, batch 0, loss[loss=0.07252, simple_loss=0.1295, pruned_loss=0.007777, over 24316.00 frames. ], tot_loss[loss=0.07252, simple_loss=0.1295, pruned_loss=0.007777, over 24316.00 frames. ], batch size: 140, lr: 1.16e-02, grad_scale: 64.0 2024-03-16 01:12:22,593 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 01:12:36,246 INFO [train_char.py:721] (0/2) Epoch 27, validation: loss=0.06174, simple_loss=0.1133, pruned_loss=0.005092, over 657665.00 frames. 2024-03-16 01:12:36,247 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 01:13:17,384 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=44006.666666666664, ans=0.125 2024-03-16 01:13:18,750 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=44040.0, ans=0.125 2024-03-16 01:13:20,551 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.conv_module2.whiten, num_groups=1, num_channels=512, metric=5.23 vs. limit=15.0 2024-03-16 01:13:21,847 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=9.37 vs. limit=15.0 2024-03-16 01:13:24,000 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=44040.0, ans=0.125 2024-03-16 01:13:28,088 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer2.prob, batch_count=44040.0, ans=0.125 2024-03-16 01:13:34,768 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.out_combiner.scale_min, batch_count=44073.333333333336, ans=0.2 2024-03-16 01:13:38,430 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.805e+01 9.058e+01 1.114e+02 1.474e+02 2.719e+02, threshold=2.229e+02, percent-clipped=8.0 2024-03-16 01:13:40,696 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.conv_module1.whiten, num_groups=1, num_channels=512, metric=3.64 vs. 
limit=15.0 2024-03-16 01:13:46,524 INFO [train_char.py:689] (0/2) Epoch 27, batch 50, loss[loss=0.06746, simple_loss=0.1202, pruned_loss=0.007347, over 24186.00 frames. ], tot_loss[loss=0.06911, simple_loss=0.1216, pruned_loss=0.00831, over 1087381.11 frames. ], batch size: 311, lr: 1.15e-02, grad_scale: 64.0 2024-03-16 01:14:03,197 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.conv_module2.whiten, num_groups=1, num_channels=192, metric=7.25 vs. limit=15.0 2024-03-16 01:14:29,858 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=44173.333333333336, ans=0.125 2024-03-16 01:14:40,366 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer1.min_positive, batch_count=44206.666666666664, ans=0.025 2024-03-16 01:14:58,425 INFO [train_char.py:689] (0/2) Epoch 27, batch 100, loss[loss=0.06964, simple_loss=0.1223, pruned_loss=0.008472, over 24400.00 frames. ], tot_loss[loss=0.06982, simple_loss=0.1232, pruned_loss=0.00824, over 1914134.71 frames. ], batch size: 165, lr: 1.15e-02, grad_scale: 64.0 2024-03-16 01:15:14,521 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.conv_module1.whiten, num_groups=1, num_channels=384, metric=2.84 vs. limit=15.0 2024-03-16 01:15:20,579 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=44306.666666666664, ans=0.125 2024-03-16 01:15:26,804 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=44340.0, ans=0.1 2024-03-16 01:15:42,199 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.ff3_skip_rate, batch_count=44373.333333333336, ans=0.0012231884057971008 2024-03-16 01:15:48,471 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-16 01:15:58,694 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.926e+01 8.384e+01 1.010e+02 1.337e+02 2.347e+02, threshold=2.019e+02, percent-clipped=1.0 2024-03-16 01:16:06,524 INFO [train_char.py:689] (0/2) Epoch 27, batch 150, loss[loss=0.06316, simple_loss=0.1086, pruned_loss=0.008877, over 23959.00 frames. ], tot_loss[loss=0.07057, simple_loss=0.1247, pruned_loss=0.008233, over 2557317.28 frames. ], batch size: 407, lr: 1.15e-02, grad_scale: 64.0 2024-03-16 01:16:08,073 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.balancer1.prob, batch_count=44440.0, ans=0.125 2024-03-16 01:16:38,307 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module1.balancer2.prob, batch_count=44506.666666666664, ans=0.125 2024-03-16 01:16:43,641 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.bypass.skip_rate, batch_count=44540.0, ans=0.07 2024-03-16 01:17:02,871 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=44573.333333333336, ans=0.125 2024-03-16 01:17:06,590 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=44573.333333333336, ans=0.1 2024-03-16 01:17:09,540 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=8.95 vs. 
limit=15.0 2024-03-16 01:17:10,143 INFO [train_char.py:689] (0/2) Epoch 27, batch 200, loss[loss=0.06368, simple_loss=0.1057, pruned_loss=0.01082, over 24277.00 frames. ], tot_loss[loss=0.0704, simple_loss=0.1243, pruned_loss=0.008269, over 3060282.46 frames. ], batch size: 140, lr: 1.15e-02, grad_scale: 32.0 2024-03-16 01:17:48,783 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=5.97 vs. limit=10.0 2024-03-16 01:17:51,930 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module1.balancer2.prob, batch_count=44706.666666666664, ans=0.125 2024-03-16 01:18:02,482 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=11.37 vs. limit=15.0 2024-03-16 01:18:03,370 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-16 01:18:11,979 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.091e+01 8.642e+01 1.018e+02 1.321e+02 3.556e+02, threshold=2.035e+02, percent-clipped=6.0 2024-03-16 01:18:18,392 INFO [train_char.py:689] (0/2) Epoch 27, batch 250, loss[loss=0.06865, simple_loss=0.1201, pruned_loss=0.008603, over 24393.00 frames. ], tot_loss[loss=0.07083, simple_loss=0.1248, pruned_loss=0.008417, over 3450104.90 frames. ], batch size: 158, lr: 1.15e-02, grad_scale: 32.0 2024-03-16 01:18:40,597 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=11.07 vs. limit=22.5 2024-03-16 01:18:44,173 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_skip_rate, batch_count=44806.666666666664, ans=0.0 2024-03-16 01:19:02,200 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=384, metric=14.40 vs. limit=22.5 2024-03-16 01:19:10,467 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.attention_skip_rate, batch_count=44873.333333333336, ans=0.0 2024-03-16 01:19:23,569 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=17.08 vs. limit=22.5 2024-03-16 01:19:24,509 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer_ff2.min_abs, batch_count=44940.0, ans=0.1 2024-03-16 01:19:25,550 INFO [train_char.py:689] (0/2) Epoch 27, batch 300, loss[loss=0.06663, simple_loss=0.1152, pruned_loss=0.009011, over 24393.00 frames. ], tot_loss[loss=0.07103, simple_loss=0.1251, pruned_loss=0.008458, over 3755513.93 frames. ], batch size: 158, lr: 1.15e-02, grad_scale: 32.0 2024-03-16 01:19:28,791 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.nonlin_attention.whiten2, num_groups=1, num_channels=512, metric=6.05 vs. 
limit=15.0 2024-03-16 01:19:33,331 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.bypass.scale_min, batch_count=44940.0, ans=0.2 2024-03-16 01:20:00,926 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.ff2_skip_rate, batch_count=45006.666666666664, ans=0.0010855072463768125 2024-03-16 01:20:10,019 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=45040.0, ans=0.1 2024-03-16 01:20:13,987 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.ff2_skip_rate, batch_count=45040.0, ans=0.001078260869565216 2024-03-16 01:20:25,182 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.367e+01 8.387e+01 1.017e+02 1.390e+02 3.075e+02, threshold=2.034e+02, percent-clipped=7.0 2024-03-16 01:20:26,684 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=45073.333333333336, ans=0.125 2024-03-16 01:20:28,014 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=45073.333333333336, ans=0.001071014492753623 2024-03-16 01:20:29,397 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.3.self_attn_weights, loss-sum=0.000e+00 2024-03-16 01:20:31,659 INFO [train_char.py:689] (0/2) Epoch 27, batch 350, loss[loss=0.05992, simple_loss=0.1043, pruned_loss=0.00778, over 23922.00 frames. ], tot_loss[loss=0.07129, simple_loss=0.1256, pruned_loss=0.008476, over 3995776.69 frames. ], batch size: 407, lr: 1.14e-02, grad_scale: 32.0 2024-03-16 01:20:41,533 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=45106.666666666664, ans=0.125 2024-03-16 01:20:43,971 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_skip_rate, batch_count=45106.666666666664, ans=0.0 2024-03-16 01:20:45,305 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.balancer2.prob, batch_count=45140.0, ans=0.125 2024-03-16 01:21:04,417 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=45173.333333333336, ans=0.1 2024-03-16 01:21:09,214 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=45173.333333333336, ans=0.125 2024-03-16 01:21:16,929 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff3_skip_rate, batch_count=45206.666666666664, ans=0.0010420289855072474 2024-03-16 01:21:17,209 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward2.out_whiten.whitening_limit, batch_count=45206.666666666664, ans=15.0 2024-03-16 01:21:25,023 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=45240.0, ans=0.0 2024-03-16 01:21:38,240 INFO [train_char.py:689] (0/2) Epoch 27, batch 400, loss[loss=0.08665, simple_loss=0.1464, pruned_loss=0.01346, over 24049.00 frames. ], tot_loss[loss=0.0715, simple_loss=0.1261, pruned_loss=0.008467, over 4182776.05 frames. 
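
The optim.py warnings report the min/25%/median/75%/max of recent gradient norms, and the printed threshold tracks twice the running median (Clipping_scale=2.0): in the warning just above, 2.0 * 1.017e+02 = 2.034e+02, exactly the logged threshold. percent-clipped is the fraction of recent steps whose norm exceeded that threshold. A sketch of the scheme, with hypothetical names:

    import torch

    class MedianGradClipper:
        """Clip to clipping_scale * median of a window of recent grad norms
        (illustrative reconstruction of the optim.py warnings; names hypothetical)."""

        def __init__(self, clipping_scale: float = 2.0, window: int = 128) -> None:
            self.scale = clipping_scale
            self.window = window
            self.norms = []

        def clip_(self, params) -> None:
            grads = [p.grad for p in params if p.grad is not None]
            norm = torch.norm(torch.stack([g.norm() for g in grads])).item()
            self.norms = (self.norms + [norm])[-self.window:]
            q = torch.quantile(torch.tensor(self.norms),
                               torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
            threshold = self.scale * q[2].item()  # 2.0 * median, as logged
            if norm > threshold:                  # counted toward percent-clipped
                for g in grads:
                    g.mul_(threshold / norm)
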
], batch size: 236, lr: 1.14e-02, grad_scale: 32.0 2024-03-16 01:21:40,969 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer2.prob, batch_count=45273.333333333336, ans=0.125 2024-03-16 01:21:48,031 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer1.max_abs, batch_count=45273.333333333336, ans=10.0 2024-03-16 01:21:58,151 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_skip_rate, batch_count=45306.666666666664, ans=0.0 2024-03-16 01:22:22,337 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=45373.333333333336, ans=0.125 2024-03-16 01:22:23,476 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=45373.333333333336, ans=0.1 2024-03-16 01:22:36,482 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.967e+01 8.648e+01 1.013e+02 1.374e+02 2.811e+02, threshold=2.025e+02, percent-clipped=5.0 2024-03-16 01:22:42,387 INFO [train_char.py:689] (0/2) Epoch 27, batch 450, loss[loss=0.05714, simple_loss=0.1025, pruned_loss=0.005868, over 24034.00 frames. ], tot_loss[loss=0.07147, simple_loss=0.1262, pruned_loss=0.008384, over 4328333.59 frames. ], batch size: 381, lr: 1.14e-02, grad_scale: 32.0 2024-03-16 01:22:48,702 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=45440.0, ans=0.0 2024-03-16 01:22:54,161 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=15.81 vs. limit=15.0 2024-03-16 01:22:56,155 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=45473.333333333336, ans=0.0 2024-03-16 01:22:57,222 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=45473.333333333336, ans=0.0 2024-03-16 01:23:03,318 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.attention_skip_rate, batch_count=45473.333333333336, ans=0.0 2024-03-16 01:23:16,581 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer2.prob, batch_count=45506.666666666664, ans=0.125 2024-03-16 01:23:39,698 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=45573.333333333336, ans=0.1 2024-03-16 01:23:44,865 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_skip_rate, batch_count=45606.666666666664, ans=0.0 2024-03-16 01:23:45,812 INFO [train_char.py:689] (0/2) Epoch 27, batch 500, loss[loss=0.0753, simple_loss=0.1316, pruned_loss=0.009481, over 24156.00 frames. ], tot_loss[loss=0.07267, simple_loss=0.1283, pruned_loss=0.00852, over 4441177.42 frames. ], batch size: 188, lr: 1.14e-02, grad_scale: 32.0 2024-03-16 01:23:52,121 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer1.prob, batch_count=45606.666666666664, ans=0.125 2024-03-16 01:23:54,607 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-27.pt 2024-03-16 01:24:49,602 INFO [train_char.py:689] (0/2) Epoch 28, batch 0, loss[loss=0.07045, simple_loss=0.1239, pruned_loss=0.008482, over 24440.00 frames. 
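
grad_scale is the dynamic loss-scaling factor used for mixed-precision training: it sat at 64.0 through epoch 26, drops to 32.0 between batches 150 and 200 of epoch 27 above, and is back at 64.0 by epoch 31, which is the usual behaviour of a scaler halving itself after a step with inf/nan gradients and growing again only after a long run of clean steps. Assuming the standard torch.cuda.amp machinery (the growth interval below is illustrative, not taken from the recipe):

    import torch

    scaler = torch.cuda.amp.GradScaler(init_scale=64.0, growth_interval=2000)

    def training_step(model, optimizer, batch):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = model(batch)          # fp16 forward pass
        scaler.scale(loss).backward()    # scale up to avoid fp16 underflow
        scaler.step(optimizer)           # skipped if inf/nan grads were found
        scaler.update()                  # halves scale on overflow, else slowly grows
        return scaler.get_scale()        # plausibly what the log prints as grad_scale
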
], tot_loss[loss=0.07045, simple_loss=0.1239, pruned_loss=0.008482, over 24440.00 frames. ], batch size: 158, lr: 1.12e-02, grad_scale: 32.0 2024-03-16 01:24:49,603 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 01:25:03,337 INFO [train_char.py:721] (0/2) Epoch 28, validation: loss=0.06111, simple_loss=0.1123, pruned_loss=0.00498, over 657665.00 frames. 2024-03-16 01:25:03,338 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 01:25:28,986 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.attention_skip_rate, batch_count=45663.333333333336, ans=0.0 2024-03-16 01:25:29,553 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=512, metric=3.25 vs. limit=15.0 2024-03-16 01:25:38,662 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff3_skip_rate, batch_count=45696.666666666664, ans=0.0009355072463768117 2024-03-16 01:25:48,488 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer1.prob, batch_count=45730.0, ans=0.125 2024-03-16 01:25:59,951 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.629e+01 8.506e+01 9.862e+01 1.220e+02 2.471e+02, threshold=1.972e+02, percent-clipped=3.0 2024-03-16 01:26:00,907 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=15.12 vs. limit=15.0 2024-03-16 01:26:16,553 INFO [train_char.py:689] (0/2) Epoch 28, batch 50, loss[loss=0.05196, simple_loss=0.07934, pruned_loss=0.01229, over 22582.00 frames. ], tot_loss[loss=0.06661, simple_loss=0.1167, pruned_loss=0.008267, over 1083125.45 frames. ], batch size: 483, lr: 1.12e-02, grad_scale: 32.0 2024-03-16 01:26:25,069 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=45796.666666666664, ans=0.125 2024-03-16 01:26:46,237 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.attention_skip_rate, batch_count=45863.333333333336, ans=0.0 2024-03-16 01:26:46,248 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=45863.333333333336, ans=0.0 2024-03-16 01:26:54,873 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=5.00 vs. limit=12.0 2024-03-16 01:27:26,368 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_skip_rate, batch_count=45963.333333333336, ans=0.0 2024-03-16 01:27:27,339 INFO [train_char.py:689] (0/2) Epoch 28, batch 100, loss[loss=0.06695, simple_loss=0.1166, pruned_loss=0.008664, over 24389.00 frames. ], tot_loss[loss=0.0669, simple_loss=0.1175, pruned_loss=0.008148, over 1908090.46 frames. ], batch size: 158, lr: 1.12e-02, grad_scale: 32.0 2024-03-16 01:28:09,752 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.out_combiner.scale_min, batch_count=46063.333333333336, ans=0.2 2024-03-16 01:28:17,000 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.871e+01 8.880e+01 1.076e+02 1.571e+02 3.203e+02, threshold=2.152e+02, percent-clipped=12.0 2024-03-16 01:28:36,663 INFO [train_char.py:689] (0/2) Epoch 28, batch 150, loss[loss=0.06147, simple_loss=0.1154, pruned_loss=0.003744, over 24191.00 frames. 
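
Every scaling.py:214 line prints a ScheduledFloat: a scalar hyperparameter (dropout_p, balancer prob, skip rates, whitening limits, ...) whose current value, ans, is looked up from a schedule keyed on batch_count. A minimal piecewise-linear version of the idea (illustrative; the class in scaling.py carries more machinery than this):

    class ScheduledFloat:
        """Piecewise-linear schedule over batch_count, in the spirit of the
        attention_skip_rate and dropout_p values logged above."""

        def __init__(self, *points):
            # points: (batch_count, value) pairs; kept sorted by batch_count.
            self.points = sorted(points)

        def value(self, batch_count: float) -> float:
            if batch_count <= self.points[0][0]:
                return self.points[0][1]
            for (x0, y0), (x1, y1) in zip(self.points, self.points[1:]):
                if batch_count <= x1:
                    # linear interpolation between neighbouring knots
                    return y0 + (y1 - y0) * (batch_count - x0) / (x1 - x0)
            return self.points[-1][1]

    # hypothetical schedule: decay from 0.3 to 0.0 over the first 20k batches
    skip_rate = ScheduledFloat((0.0, 0.3), (20000.0, 0.0))
    assert skip_rate.value(45040.0) == 0.0   # past the last knot, clamps at 0.0
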
], tot_loss[loss=0.06712, simple_loss=0.1189, pruned_loss=0.007685, over 2554519.42 frames. ], batch size: 122, lr: 1.11e-02, grad_scale: 32.0 2024-03-16 01:29:04,728 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.attention_skip_rate, batch_count=46196.666666666664, ans=0.0 2024-03-16 01:29:21,563 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=46230.0, ans=0.1 2024-03-16 01:29:39,754 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=46296.666666666664, ans=0.125 2024-03-16 01:29:40,791 INFO [train_char.py:689] (0/2) Epoch 28, batch 200, loss[loss=0.06534, simple_loss=0.1181, pruned_loss=0.006312, over 24432.00 frames. ], tot_loss[loss=0.06817, simple_loss=0.1206, pruned_loss=0.007878, over 3052230.04 frames. ], batch size: 165, lr: 1.11e-02, grad_scale: 32.0 2024-03-16 01:29:54,243 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_skip_rate, batch_count=46330.0, ans=0.0 2024-03-16 01:30:28,935 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=46396.666666666664, ans=0.1 2024-03-16 01:30:33,633 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.302e+01 9.610e+01 1.209e+02 1.821e+02 3.486e+02, threshold=2.419e+02, percent-clipped=12.0 2024-03-16 01:30:49,074 INFO [train_char.py:689] (0/2) Epoch 28, batch 250, loss[loss=0.05807, simple_loss=0.1066, pruned_loss=0.004793, over 24237.00 frames. ], tot_loss[loss=0.06896, simple_loss=0.1219, pruned_loss=0.008035, over 3440595.70 frames. ], batch size: 146, lr: 1.11e-02, grad_scale: 32.0 2024-03-16 01:31:08,080 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer2.prob, batch_count=46496.666666666664, ans=0.125 2024-03-16 01:31:31,737 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff2_skip_rate, batch_count=46563.333333333336, ans=0.0007471014492753617 2024-03-16 01:31:57,418 INFO [train_char.py:689] (0/2) Epoch 28, batch 300, loss[loss=0.05637, simple_loss=0.107, pruned_loss=0.002887, over 24241.00 frames. ], tot_loss[loss=0.06902, simple_loss=0.1221, pruned_loss=0.007983, over 3742520.16 frames. ], batch size: 140, lr: 1.11e-02, grad_scale: 32.0 2024-03-16 01:32:00,652 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward2.out_whiten.whitening_limit, batch_count=46630.0, ans=15.0 2024-03-16 01:32:07,268 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=12.28 vs. 
limit=15.0 2024-03-16 01:32:11,999 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=46663.333333333336, ans=0.0 2024-03-16 01:32:26,898 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=46696.666666666664, ans=0.125 2024-03-16 01:32:26,935 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 01:32:47,803 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.647e+01 9.048e+01 1.125e+02 1.475e+02 2.737e+02, threshold=2.249e+02, percent-clipped=1.0 2024-03-16 01:32:50,964 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=384, metric=18.65 vs. limit=22.5 2024-03-16 01:33:01,777 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_skip_rate, batch_count=46796.666666666664, ans=0.0 2024-03-16 01:33:02,882 INFO [train_char.py:689] (0/2) Epoch 28, batch 350, loss[loss=0.06851, simple_loss=0.1223, pruned_loss=0.007344, over 24366.00 frames. ], tot_loss[loss=0.06946, simple_loss=0.1231, pruned_loss=0.007928, over 3987175.57 frames. ], batch size: 180, lr: 1.11e-02, grad_scale: 32.0 2024-03-16 01:33:13,582 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.whiten2.whitening_limit, batch_count=46796.666666666664, ans=15.0 2024-03-16 01:33:14,172 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer_ff2.min_abs, batch_count=46830.0, ans=0.1 2024-03-16 01:33:37,954 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.balancer2.prob, batch_count=46863.333333333336, ans=0.125 2024-03-16 01:34:08,018 INFO [train_char.py:689] (0/2) Epoch 28, batch 400, loss[loss=0.07119, simple_loss=0.1265, pruned_loss=0.007921, over 24210.00 frames. ], tot_loss[loss=0.06975, simple_loss=0.1236, pruned_loss=0.00794, over 4177867.58 frames. 
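
The scaling.py:1023 lines compare a whitening metric against a scheduled limit: the metric measures how far the channel covariance of a module's output is from a multiple of the identity (perfectly "white" activations score 1.0), and a corrective gradient is applied only while the metric exceeds the limit. A rough illustration of such a metric; this matches the logged quantity in spirit only, not necessarily in exact formula:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int = 1) -> float:
        """x: (..., num_channels). Returns >= 1.0, with 1.0 meaning the
        covariance of each channel group is already proportional to identity."""
        x = x.reshape(-1, x.shape[-1]).float()
        worst = 1.0
        for g in x.chunk(num_groups, dim=-1):
            cov = (g.T @ g) / g.shape[0]            # channel covariance
            eigs = torch.linalg.eigvalsh(cov)       # real eigenvalues, ascending
            ratio = (eigs.pow(2).mean() / eigs.mean().pow(2)).item()
            worst = max(worst, ratio)
        return worst  # the log's "metric=... vs. limit=..." comparison
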
], batch size: 311, lr: 1.11e-02, grad_scale: 32.0 2024-03-16 01:34:08,254 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward2.hidden_balancer.prob, batch_count=46963.333333333336, ans=0.125 2024-03-16 01:34:15,650 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module2.balancer1.max_abs, batch_count=46963.333333333336, ans=10.0 2024-03-16 01:34:16,907 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.out_combiner.scale_min, batch_count=46963.333333333336, ans=0.2 2024-03-16 01:34:24,638 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.bypass.skip_rate, batch_count=46996.666666666664, ans=0.09899494936611666 2024-03-16 01:34:38,929 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=47030.0, ans=0.125 2024-03-16 01:34:40,239 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.balancer1.prob, batch_count=47030.0, ans=0.125 2024-03-16 01:34:53,549 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.skip_rate, batch_count=47063.333333333336, ans=0.04949747468305833 2024-03-16 01:34:54,707 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.scale_min, batch_count=47063.333333333336, ans=0.2 2024-03-16 01:34:54,732 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=47063.333333333336, ans=0.1 2024-03-16 01:34:58,200 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.909e+01 8.534e+01 1.061e+02 1.413e+02 2.296e+02, threshold=2.123e+02, percent-clipped=2.0 2024-03-16 01:34:59,664 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.scale_min, batch_count=47096.666666666664, ans=0.2 2024-03-16 01:35:12,863 INFO [train_char.py:689] (0/2) Epoch 28, batch 450, loss[loss=0.08293, simple_loss=0.1447, pruned_loss=0.01058, over 24045.00 frames. ], tot_loss[loss=0.07025, simple_loss=0.1243, pruned_loss=0.008084, over 4323845.46 frames. ], batch size: 236, lr: 1.10e-02, grad_scale: 32.0 2024-03-16 01:35:16,782 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_skip_rate, batch_count=47130.0, ans=0.0 2024-03-16 01:35:41,831 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=47196.666666666664, ans=0.1 2024-03-16 01:35:55,966 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=192, metric=9.27 vs. limit=15.0 2024-03-16 01:35:57,555 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.convnext.out_balancer.prob, batch_count=47230.0, ans=0.125 2024-03-16 01:36:03,943 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass_mid.scale_min, batch_count=47263.333333333336, ans=0.2 2024-03-16 01:36:16,876 INFO [train_char.py:689] (0/2) Epoch 28, batch 500, loss[loss=0.08057, simple_loss=0.143, pruned_loss=0.009085, over 24035.00 frames. ], tot_loss[loss=0.07147, simple_loss=0.1266, pruned_loss=0.008176, over 4436792.32 frames. 
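
The scaling.py:1119 WithLoss lines attach an auxiliary penalty to a module's output, here the attention weights, and print its accumulated value; loss-sum=0.000e+00 means the penalty is currently inactive for that module. A logging-only sketch of such a wrapper (assumption: the real module also feeds the penalty into the backward pass through a custom autograd function, which is omitted here):

    import torch

    class WithAuxLoss(torch.nn.Module):
        """Pass activations through unchanged while recording an auxiliary
        penalty, analogous to the logged 'WithLoss: ... loss-sum=' lines."""

        def __init__(self, module: torch.nn.Module, penalty_fn):
            super().__init__()
            self.module = module
            self.penalty_fn = penalty_fn  # hypothetical callable on the output
            self.loss_sum = 0.0

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            y = self.module(x)
            if self.training:
                self.loss_sum = float(self.penalty_fn(y.detach()))
            return y
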
], batch size: 250, lr: 1.10e-02, grad_scale: 32.0 2024-03-16 01:36:18,439 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.scale_min, batch_count=47296.666666666664, ans=0.2 2024-03-16 01:36:23,419 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-16 01:36:25,828 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-28.pt 2024-03-16 01:37:20,878 INFO [train_char.py:689] (0/2) Epoch 29, batch 0, loss[loss=0.05955, simple_loss=0.1047, pruned_loss=0.007219, over 24018.00 frames. ], tot_loss[loss=0.05955, simple_loss=0.1047, pruned_loss=0.007219, over 24018.00 frames. ], batch size: 381, lr: 1.08e-02, grad_scale: 32.0 2024-03-16 01:37:20,879 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 01:37:34,416 INFO [train_char.py:721] (0/2) Epoch 29, validation: loss=0.06048, simple_loss=0.1117, pruned_loss=0.004651, over 657665.00 frames. 2024-03-16 01:37:34,416 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 01:38:04,141 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=384, metric=9.01 vs. limit=15.0 2024-03-16 01:38:16,945 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass.scale_min, batch_count=47386.666666666664, ans=0.2 2024-03-16 01:38:21,761 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.586e+01 8.663e+01 1.045e+02 1.445e+02 2.263e+02, threshold=2.091e+02, percent-clipped=4.0 2024-03-16 01:38:47,202 INFO [train_char.py:689] (0/2) Epoch 29, batch 50, loss[loss=0.05573, simple_loss=0.09839, pruned_loss=0.006538, over 24280.00 frames. ], tot_loss[loss=0.06574, simple_loss=0.1173, pruned_loss=0.00711, over 1088520.86 frames. ], batch size: 140, lr: 1.08e-02, grad_scale: 32.0 2024-03-16 01:38:53,877 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=47486.666666666664, ans=0.0 2024-03-16 01:39:16,991 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 01:39:26,138 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.balancer1.prob, batch_count=47553.333333333336, ans=0.125 2024-03-16 01:39:27,377 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=47553.333333333336, ans=0.1 2024-03-16 01:39:27,396 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module2.balancer1.prob, batch_count=47553.333333333336, ans=0.125 2024-03-16 01:39:27,884 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=512, metric=3.47 vs. 
limit=15.0 2024-03-16 01:39:41,577 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass_mid.scale_min, batch_count=47586.666666666664, ans=0.2 2024-03-16 01:39:55,128 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.ff3_skip_rate, batch_count=47620.0, ans=0.0005173913043478272 2024-03-16 01:39:56,370 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=47620.0, ans=0.0 2024-03-16 01:40:00,178 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=47620.0, ans=0.1 2024-03-16 01:40:03,732 INFO [train_char.py:689] (0/2) Epoch 29, batch 100, loss[loss=0.05637, simple_loss=0.1043, pruned_loss=0.004203, over 24281.00 frames. ], tot_loss[loss=0.06674, simple_loss=0.1185, pruned_loss=0.007478, over 1914555.59 frames. ], batch size: 116, lr: 1.08e-02, grad_scale: 32.0 2024-03-16 01:40:15,962 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=10.93 vs. limit=15.0 2024-03-16 01:40:17,396 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=12.87 vs. limit=15.0 2024-03-16 01:40:41,335 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.ff2_skip_rate, batch_count=47753.333333333336, ans=0.0004884057971014481 2024-03-16 01:40:43,680 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.750e+01 7.958e+01 9.896e+01 1.269e+02 2.677e+02, threshold=1.979e+02, percent-clipped=3.0 2024-03-16 01:41:08,051 INFO [train_char.py:689] (0/2) Epoch 29, batch 150, loss[loss=0.059, simple_loss=0.1061, pruned_loss=0.005928, over 24344.00 frames. ], tot_loss[loss=0.06752, simple_loss=0.1201, pruned_loss=0.007457, over 2557947.24 frames. ], batch size: 152, lr: 1.08e-02, grad_scale: 32.0 2024-03-16 01:41:13,479 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=47820.0, ans=0.125 2024-03-16 01:41:32,867 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 01:41:56,080 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=47920.0, ans=0.125 2024-03-16 01:41:58,611 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.balancer2.prob, batch_count=47953.333333333336, ans=0.125 2024-03-16 01:42:12,354 INFO [train_char.py:689] (0/2) Epoch 29, batch 200, loss[loss=0.06224, simple_loss=0.1162, pruned_loss=0.004163, over 24335.00 frames. ], tot_loss[loss=0.06709, simple_loss=0.1192, pruned_loss=0.007493, over 3058116.82 frames. ], batch size: 152, lr: 1.08e-02, grad_scale: 32.0 2024-03-16 01:42:24,798 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=4.24 vs. limit=15.0 2024-03-16 01:42:46,475 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass_mid.scale_min, batch_count=48053.333333333336, ans=0.2 2024-03-16 01:42:53,595 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=192, metric=6.79 vs. 
limit=15.0 2024-03-16 01:42:56,551 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.774e+01 8.855e+01 1.142e+02 1.615e+02 2.916e+02, threshold=2.284e+02, percent-clipped=12.0 2024-03-16 01:43:10,809 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_skip_rate, batch_count=48120.0, ans=0.0 2024-03-16 01:43:24,817 INFO [train_char.py:689] (0/2) Epoch 29, batch 250, loss[loss=0.07921, simple_loss=0.1367, pruned_loss=0.01089, over 24082.00 frames. ], tot_loss[loss=0.06756, simple_loss=0.12, pruned_loss=0.007576, over 3447369.79 frames. ], batch size: 188, lr: 1.08e-02, grad_scale: 32.0 2024-03-16 01:43:32,762 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_skip_rate, batch_count=48153.333333333336, ans=0.0 2024-03-16 01:43:39,278 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.ff2_skip_rate, batch_count=48186.666666666664, ans=0.0003942028985507249 2024-03-16 01:43:50,676 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.scale_min, batch_count=48220.0, ans=0.2 2024-03-16 01:43:53,333 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=48220.0, ans=0.125 2024-03-16 01:43:59,617 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=48220.0, ans=0.1 2024-03-16 01:44:03,685 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-16 01:44:16,148 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.conv.2.prob, batch_count=48286.666666666664, ans=0.125 2024-03-16 01:44:29,022 INFO [train_char.py:689] (0/2) Epoch 29, batch 300, loss[loss=0.05759, simple_loss=0.1011, pruned_loss=0.007049, over 23930.00 frames. ], tot_loss[loss=0.06818, simple_loss=0.1211, pruned_loss=0.007643, over 3746023.83 frames. ], batch size: 407, lr: 1.07e-02, grad_scale: 32.0 2024-03-16 01:44:53,736 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass_mid.scale_min, batch_count=48353.333333333336, ans=0.2 2024-03-16 01:44:54,888 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.nonlin_attention.balancer.max_positive, batch_count=48353.333333333336, ans=0.95 2024-03-16 01:45:13,381 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.609e+01 9.105e+01 1.101e+02 1.405e+02 2.449e+02, threshold=2.201e+02, percent-clipped=1.0 2024-03-16 01:45:25,648 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.conv_module1.whiten, num_groups=1, num_channels=192, metric=12.80 vs. limit=15.0 2024-03-16 01:45:37,214 INFO [train_char.py:689] (0/2) Epoch 29, batch 350, loss[loss=0.06342, simple_loss=0.1053, pruned_loss=0.01075, over 23800.00 frames. ], tot_loss[loss=0.06885, simple_loss=0.122, pruned_loss=0.007845, over 3983703.40 frames. 
], batch size: 439, lr: 1.07e-02, grad_scale: 32.0 2024-03-16 01:45:43,604 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.skip_rate, batch_count=48486.666666666664, ans=0.07 2024-03-16 01:45:46,286 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.ff2_skip_rate, batch_count=48486.666666666664, ans=0.00032898550724637723 2024-03-16 01:45:49,849 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 01:46:08,964 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.attention_skip_rate, batch_count=48553.333333333336, ans=0.0 2024-03-16 01:46:10,164 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.bypass.scale_min, batch_count=48553.333333333336, ans=0.2 2024-03-16 01:46:11,892 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=384, metric=3.11 vs. limit=15.0 2024-03-16 01:46:15,268 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.nonlin_attention.balancer.prob, batch_count=48586.666666666664, ans=0.125 2024-03-16 01:46:22,461 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.32 vs. limit=6.0 2024-03-16 01:46:35,741 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=48620.0, ans=0.1 2024-03-16 01:46:41,486 INFO [train_char.py:689] (0/2) Epoch 29, batch 400, loss[loss=0.07534, simple_loss=0.1301, pruned_loss=0.0103, over 24204.00 frames. ], tot_loss[loss=0.0695, simple_loss=0.1234, pruned_loss=0.007823, over 4174407.62 frames. ], batch size: 311, lr: 1.07e-02, grad_scale: 32.0 2024-03-16 01:46:52,111 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.scale_min, batch_count=48653.333333333336, ans=0.2 2024-03-16 01:46:53,440 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 01:47:02,297 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer1.prob, batch_count=48686.666666666664, ans=0.125 2024-03-16 01:47:06,117 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_skip_rate, batch_count=48686.666666666664, ans=0.0 2024-03-16 01:47:12,109 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer1.prob, batch_count=48720.0, ans=0.125 2024-03-16 01:47:21,782 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.696e+01 8.309e+01 9.654e+01 1.256e+02 2.583e+02, threshold=1.931e+02, percent-clipped=1.0 2024-03-16 01:47:31,021 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.ff3_skip_rate, batch_count=48753.333333333336, ans=0.0002710144927536226 2024-03-16 01:47:31,065 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=48753.333333333336, ans=0.1 2024-03-16 01:47:47,021 INFO [train_char.py:689] (0/2) Epoch 29, batch 450, loss[loss=0.08069, simple_loss=0.1393, pruned_loss=0.01104, over 24085.00 frames. ], tot_loss[loss=0.07082, simple_loss=0.1258, pruned_loss=0.007948, over 4322484.29 frames. 
], batch size: 250, lr: 1.07e-02, grad_scale: 32.0 2024-03-16 01:48:13,146 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=48886.666666666664, ans=0.1 2024-03-16 01:48:20,717 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.prob, batch_count=48886.666666666664, ans=0.125 2024-03-16 01:48:31,950 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.ff2_skip_rate, batch_count=48920.0, ans=0.00023478260869565226 2024-03-16 01:48:45,124 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.convnext.out_balancer.prob, batch_count=48953.333333333336, ans=0.125 2024-03-16 01:48:51,897 INFO [train_char.py:689] (0/2) Epoch 29, batch 500, loss[loss=0.07645, simple_loss=0.1368, pruned_loss=0.008038, over 24097.00 frames. ], tot_loss[loss=0.07174, simple_loss=0.1275, pruned_loss=0.007969, over 4436092.86 frames. ], batch size: 199, lr: 1.07e-02, grad_scale: 32.0 2024-03-16 01:48:57,786 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.whiten, num_groups=1, num_channels=512, metric=8.14 vs. limit=12.0 2024-03-16 01:49:00,698 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-29.pt 2024-03-16 01:49:52,948 INFO [train_char.py:689] (0/2) Epoch 30, batch 0, loss[loss=0.06624, simple_loss=0.12, pruned_loss=0.006218, over 24393.00 frames. ], tot_loss[loss=0.06624, simple_loss=0.12, pruned_loss=0.006218, over 24393.00 frames. ], batch size: 180, lr: 1.05e-02, grad_scale: 32.0 2024-03-16 01:49:52,949 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 01:50:01,891 INFO [zipformer.py:1858] (0/2) name=encoder.encoders.2.encoder.layers.2.self_attn_weights, attn_weights_entropy = tensor([2.4830, 2.8947, 3.7264, 3.3743], device='cuda:0') 2024-03-16 01:50:04,558 INFO [zipformer.py:1858] (0/2) name=encoder.encoders.2.encoder.layers.2.self_attn_weights, attn_weights_entropy = tensor([2.8221, 3.0744, 3.9822, 3.7243], device='cuda:0') 2024-03-16 01:50:06,566 INFO [train_char.py:721] (0/2) Epoch 30, validation: loss=0.06041, simple_loss=0.1115, pruned_loss=0.00465, over 657665.00 frames. 2024-03-16 01:50:06,567 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 01:50:38,727 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.930e+01 8.448e+01 1.014e+02 1.293e+02 3.079e+02, threshold=2.028e+02, percent-clipped=8.0 2024-03-16 01:50:45,085 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.self_attn2.whiten, num_groups=1, num_channels=512, metric=15.98 vs. limit=22.5 2024-03-16 01:50:53,162 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.balancer.min_positive, batch_count=49110.0, ans=0.05 2024-03-16 01:51:20,468 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=49176.666666666664, ans=0.1 2024-03-16 01:51:21,499 INFO [train_char.py:689] (0/2) Epoch 30, batch 50, loss[loss=0.07206, simple_loss=0.1278, pruned_loss=0.008164, over 24204.00 frames. ], tot_loss[loss=0.06681, simple_loss=0.1189, pruned_loss=0.007356, over 1087138.80 frames. 
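
Each epoch closes at batch 500 with a checkpoint.py:75 save of epoch-NN.pt, followed immediately by the next epoch's batch-0 validation pass. A plausible minimal version of the save step (the dict keys are assumptions, not read from checkpoint.py):

    from pathlib import Path
    import torch

    def save_checkpoint(exp_dir: Path, epoch: int, model, optimizer, scheduler) -> None:
        # Corresponds to "Saving checkpoint to zipformer/exp_val/epoch-NN.pt".
        torch.save(
            {
                "epoch": epoch,
                "model": model.state_dict(),
                "optimizer": optimizer.state_dict(),
                "scheduler": scheduler.state_dict(),
            },
            exp_dir / f"epoch-{epoch}.pt",
        )
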
], batch size: 311, lr: 1.05e-02, grad_scale: 32.0 2024-03-16 01:51:35,821 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.prob, batch_count=49176.666666666664, ans=0.125 2024-03-16 01:51:47,766 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 01:52:17,840 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.attention_skip_rate, batch_count=49310.0, ans=0.0 2024-03-16 01:52:30,943 INFO [train_char.py:689] (0/2) Epoch 30, batch 100, loss[loss=0.06501, simple_loss=0.111, pruned_loss=0.009522, over 24016.00 frames. ], tot_loss[loss=0.06736, simple_loss=0.12, pruned_loss=0.007372, over 1908686.95 frames. ], batch size: 381, lr: 1.05e-02, grad_scale: 32.0 2024-03-16 01:52:36,448 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer1.max_abs, batch_count=49343.333333333336, ans=10.0 2024-03-16 01:53:01,917 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.123e+01 9.141e+01 1.242e+02 1.610e+02 3.460e+02, threshold=2.484e+02, percent-clipped=11.0 2024-03-16 01:53:14,651 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.self_attn2.whiten, num_groups=1, num_channels=192, metric=14.90 vs. limit=22.5 2024-03-16 01:53:29,923 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=49476.666666666664, ans=0.1 2024-03-16 01:53:35,867 INFO [train_char.py:689] (0/2) Epoch 30, batch 150, loss[loss=0.05731, simple_loss=0.1068, pruned_loss=0.003937, over 24344.00 frames. ], tot_loss[loss=0.06772, simple_loss=0.1206, pruned_loss=0.007401, over 2556244.29 frames. ], batch size: 129, lr: 1.05e-02, grad_scale: 32.0 2024-03-16 01:53:44,258 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.bypass.scale_min, batch_count=49510.0, ans=0.2 2024-03-16 01:54:23,722 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.conv_module1.whiten, num_groups=1, num_channels=384, metric=2.87 vs. limit=15.0 2024-03-16 01:54:46,842 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward2.hidden_balancer.prob, batch_count=49643.333333333336, ans=0.125 2024-03-16 01:54:49,502 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.bypass.skip_rate, batch_count=49676.666666666664, ans=0.07 2024-03-16 01:54:50,366 INFO [train_char.py:689] (0/2) Epoch 30, batch 200, loss[loss=0.06548, simple_loss=0.1161, pruned_loss=0.007405, over 24390.00 frames. ], tot_loss[loss=0.06789, simple_loss=0.1207, pruned_loss=0.007541, over 3049282.59 frames. 
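
The attn_weights_entropy tensors printed by zipformer.py during the epoch-30 validation pass above are a per-head diagnostic: one mean entropy value per attention head (four entries for a four-head layer), where low entropy means sharply peaked attention and entropy near log(src_len) means near-uniform attention. A sketch of the computation, assuming attention weights shaped (num_heads, batch, tgt_len, src_len):

    import torch

    def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
        """attn: (num_heads, batch, tgt_len, src_len), rows summing to 1.
        Returns one mean entropy per head, as in the logged tensors."""
        ent = -(attn * (attn + 1.0e-20).log()).sum(dim=-1)
        return ent.mean(dim=(1, 2))
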
], batch size: 152, lr: 1.04e-02, grad_scale: 32.0 2024-03-16 01:55:05,893 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.out_combiner.scale_min, batch_count=49710.0, ans=0.2 2024-03-16 01:55:08,373 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-16 01:55:20,845 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.211e+01 9.065e+01 1.195e+02 1.639e+02 4.488e+02, threshold=2.390e+02, percent-clipped=6.0 2024-03-16 01:55:49,536 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=49810.0, ans=0.1 2024-03-16 01:55:54,330 INFO [train_char.py:689] (0/2) Epoch 30, batch 250, loss[loss=0.06739, simple_loss=0.1243, pruned_loss=0.005261, over 24369.00 frames. ], tot_loss[loss=0.06815, simple_loss=0.1213, pruned_loss=0.007492, over 3444083.30 frames. ], batch size: 172, lr: 1.04e-02, grad_scale: 32.0 2024-03-16 01:55:54,527 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer1.prob, batch_count=49843.333333333336, ans=0.125 2024-03-16 01:56:37,952 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_skip_rate, batch_count=49943.333333333336, ans=0.0 2024-03-16 01:57:02,899 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.whiten, num_groups=1, num_channels=384, metric=6.93 vs. limit=12.0 2024-03-16 01:57:04,639 INFO [train_char.py:689] (0/2) Epoch 30, batch 300, loss[loss=0.0804, simple_loss=0.1448, pruned_loss=0.008024, over 24268.00 frames. ], tot_loss[loss=0.0686, simple_loss=0.1222, pruned_loss=0.007527, over 3748387.22 frames. ], batch size: 267, lr: 1.04e-02, grad_scale: 32.0 2024-03-16 01:57:14,780 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.scale_min, batch_count=50010.0, ans=0.2 2024-03-16 01:57:18,456 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer1.prob, batch_count=50043.333333333336, ans=0.125 2024-03-16 01:57:30,420 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer1.prob, batch_count=50076.666666666664, ans=0.125 2024-03-16 01:57:33,803 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.289e+01 9.764e+01 1.255e+02 1.703e+02 3.273e+02, threshold=2.510e+02, percent-clipped=8.0 2024-03-16 01:57:34,018 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 01:57:40,819 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.conv_module1.whiten, num_groups=1, num_channels=384, metric=3.10 vs. limit=15.0 2024-03-16 01:57:43,478 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.whiten, num_groups=1, num_channels=512, metric=7.34 vs. limit=12.0 2024-03-16 01:57:54,067 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module2.balancer2.prob, batch_count=50143.333333333336, ans=0.125 2024-03-16 01:57:57,979 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.bypass.skip_rate, batch_count=50143.333333333336, ans=0.07 2024-03-16 01:57:58,361 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=11.55 vs. 
limit=15.0 2024-03-16 01:58:06,485 INFO [train_char.py:689] (0/2) Epoch 30, batch 350, loss[loss=0.07495, simple_loss=0.1341, pruned_loss=0.007888, over 24112.00 frames. ], tot_loss[loss=0.06874, simple_loss=0.1224, pruned_loss=0.007546, over 3990572.58 frames. ], batch size: 199, lr: 1.04e-02, grad_scale: 32.0 2024-03-16 01:58:17,924 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.balancer2.prob, batch_count=50210.0, ans=0.125 2024-03-16 01:58:28,004 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=50210.0, ans=0.125 2024-03-16 01:58:28,313 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.whiten, num_groups=1, num_channels=384, metric=5.93 vs. limit=12.0 2024-03-16 01:58:49,104 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer1.min_positive, batch_count=50276.666666666664, ans=0.025 2024-03-16 01:58:54,072 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=50276.666666666664, ans=0.125 2024-03-16 01:59:10,123 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=50310.0, ans=0.0 2024-03-16 01:59:12,420 INFO [train_char.py:689] (0/2) Epoch 30, batch 400, loss[loss=0.07761, simple_loss=0.1387, pruned_loss=0.008276, over 24198.00 frames. ], tot_loss[loss=0.06939, simple_loss=0.1235, pruned_loss=0.007656, over 4179342.77 frames. ], batch size: 212, lr: 1.04e-02, grad_scale: 32.0 2024-03-16 01:59:18,084 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=13.45 vs. limit=15.0 2024-03-16 01:59:27,683 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module2.balancer1.prob, batch_count=50376.666666666664, ans=0.125 2024-03-16 01:59:42,541 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.655e+01 8.249e+01 9.557e+01 1.227e+02 2.032e+02, threshold=1.911e+02, percent-clipped=0.0 2024-03-16 01:59:48,826 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_skip_rate, batch_count=50443.333333333336, ans=0.0 2024-03-16 01:59:57,482 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=50443.333333333336, ans=0.125 2024-03-16 02:00:02,689 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=7.61 vs. limit=15.0 2024-03-16 02:00:17,277 INFO [train_char.py:689] (0/2) Epoch 30, batch 450, loss[loss=0.07105, simple_loss=0.1289, pruned_loss=0.00659, over 24287.00 frames. ], tot_loss[loss=0.06983, simple_loss=0.1244, pruned_loss=0.00762, over 4326365.82 frames. 
], batch size: 180, lr: 1.04e-02, grad_scale: 32.0 2024-03-16 02:00:26,275 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.2.self_attn_weights, loss-sum=0.000e+00 2024-03-16 02:00:28,794 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=50543.333333333336, ans=0.0 2024-03-16 02:00:36,070 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.attention_skip_rate, batch_count=50543.333333333336, ans=0.0 2024-03-16 02:00:44,190 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=192, metric=9.05 vs. limit=15.0 2024-03-16 02:00:52,786 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.whiten, num_groups=1, num_channels=512, metric=7.68 vs. limit=12.0 2024-03-16 02:01:03,780 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=50610.0, ans=0.125 2024-03-16 02:01:21,331 INFO [train_char.py:689] (0/2) Epoch 30, batch 500, loss[loss=0.07317, simple_loss=0.135, pruned_loss=0.005664, over 24114.00 frames. ], tot_loss[loss=0.07061, simple_loss=0.1259, pruned_loss=0.007642, over 4439574.21 frames. ], batch size: 199, lr: 1.04e-02, grad_scale: 32.0 2024-03-16 02:01:30,589 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-30.pt 2024-03-16 02:02:22,861 INFO [train_char.py:689] (0/2) Epoch 31, batch 0, loss[loss=0.06873, simple_loss=0.122, pruned_loss=0.007732, over 24369.00 frames. ], tot_loss[loss=0.06873, simple_loss=0.122, pruned_loss=0.007732, over 24369.00 frames. ], batch size: 180, lr: 1.02e-02, grad_scale: 32.0 2024-03-16 02:02:22,862 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 02:02:36,556 INFO [train_char.py:721] (0/2) Epoch 31, validation: loss=0.06049, simple_loss=0.1115, pruned_loss=0.004723, over 657665.00 frames. 2024-03-16 02:02:36,557 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 02:02:59,635 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.675e+01 8.591e+01 1.068e+02 1.376e+02 2.315e+02, threshold=2.136e+02, percent-clipped=7.0 2024-03-16 02:03:25,018 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer1.prob, batch_count=50800.0, ans=0.125 2024-03-16 02:03:26,976 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=192, metric=8.53 vs. limit=15.0 2024-03-16 02:03:43,501 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=50833.333333333336, ans=0.125 2024-03-16 02:03:47,578 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=50833.333333333336, ans=0.125 2024-03-16 02:03:51,323 INFO [train_char.py:689] (0/2) Epoch 31, batch 50, loss[loss=0.06386, simple_loss=0.1077, pruned_loss=0.01001, over 23917.00 frames. ], tot_loss[loss=0.06613, simple_loss=0.1188, pruned_loss=0.006721, over 1086530.41 frames. 
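
The batch-0 validation at the start of every epoch runs over the same fixed dev set, always 657665 frames, so the validation losses are directly comparable across epochs: 0.06174 (epoch 27), 0.06111 (28), 0.06048 (29), 0.06041 (30), 0.06049 (31), i.e. the curve has largely flattened by this point in training. A sketch of that frame-weighted validation loop (model_forward is a hypothetical helper returning a per-frame loss and the batch's frame count):

    import torch

    @torch.no_grad()
    def compute_validation_loss(model, valid_loader) -> float:
        model.eval()
        weighted, frames = 0.0, 0.0
        for batch in valid_loader:
            loss, num_frames = model_forward(model, batch)  # hypothetical helper
            weighted += loss * num_frames
            frames += num_frames                            # totals 657665 here
        model.train()
        return weighted / frames
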
], batch size: 407, lr: 1.02e-02, grad_scale: 32.0 2024-03-16 02:03:51,692 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.balancer.max_positive, batch_count=50866.666666666664, ans=0.95 2024-03-16 02:03:54,485 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.ff2_skip_rate, batch_count=50866.666666666664, ans=0.0 2024-03-16 02:04:12,003 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 02:04:15,033 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.nonlin_attention.whiten2, num_groups=1, num_channels=512, metric=5.21 vs. limit=15.0 2024-03-16 02:04:27,515 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass_mid.scale_min, batch_count=50933.333333333336, ans=0.2 2024-03-16 02:04:28,744 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=50933.333333333336, ans=0.1 2024-03-16 02:04:33,314 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_whiten.whitening_limit, batch_count=50966.666666666664, ans=15.0 2024-03-16 02:04:57,143 INFO [train_char.py:689] (0/2) Epoch 31, batch 100, loss[loss=0.06231, simple_loss=0.114, pruned_loss=0.005315, over 24397.00 frames. ], tot_loss[loss=0.06539, simple_loss=0.1179, pruned_loss=0.006442, over 1913448.93 frames. ], batch size: 158, lr: 1.02e-02, grad_scale: 32.0 2024-03-16 02:05:10,419 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.28 vs. limit=6.0 2024-03-16 02:05:22,595 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=5.29 vs. limit=15.0 2024-03-16 02:05:23,590 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.balancer2.prob, batch_count=51066.666666666664, ans=0.125 2024-03-16 02:05:24,562 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.055e+01 9.150e+01 1.136e+02 1.489e+02 2.776e+02, threshold=2.272e+02, percent-clipped=11.0 2024-03-16 02:05:42,294 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=192, metric=8.71 vs. limit=15.0 2024-03-16 02:05:55,662 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_skip_rate, batch_count=51166.666666666664, ans=0.0 2024-03-16 02:06:07,018 INFO [train_char.py:689] (0/2) Epoch 31, batch 150, loss[loss=0.05514, simple_loss=0.1029, pruned_loss=0.003711, over 24197.00 frames. ], tot_loss[loss=0.06617, simple_loss=0.1185, pruned_loss=0.006919, over 2558293.22 frames. 
], batch size: 122, lr: 1.01e-02, grad_scale: 64.0 2024-03-16 02:06:22,180 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer2.prob, batch_count=51233.333333333336, ans=0.125 2024-03-16 02:06:51,994 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward2.hidden_balancer.prob, batch_count=51300.0, ans=0.125 2024-03-16 02:06:58,478 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=51300.0, ans=0.1 2024-03-16 02:07:15,079 INFO [train_char.py:689] (0/2) Epoch 31, batch 200, loss[loss=0.07315, simple_loss=0.1325, pruned_loss=0.006897, over 24281.00 frames. ], tot_loss[loss=0.0666, simple_loss=0.1194, pruned_loss=0.006924, over 3056585.26 frames. ], batch size: 296, lr: 1.01e-02, grad_scale: 64.0 2024-03-16 02:07:15,330 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.attention_skip_rate, batch_count=51366.666666666664, ans=0.0 2024-03-16 02:07:20,349 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=51366.666666666664, ans=0.0 2024-03-16 02:07:28,066 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=51400.0, ans=0.1 2024-03-16 02:07:36,480 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.945e+01 8.396e+01 1.015e+02 1.465e+02 2.987e+02, threshold=2.030e+02, percent-clipped=3.0 2024-03-16 02:07:48,553 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=12.15 vs. limit=15.0 2024-03-16 02:08:06,475 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=11.66 vs. limit=15.0 2024-03-16 02:08:21,661 INFO [train_char.py:689] (0/2) Epoch 31, batch 250, loss[loss=0.05358, simple_loss=0.1001, pruned_loss=0.003512, over 24376.00 frames. ], tot_loss[loss=0.06685, simple_loss=0.1199, pruned_loss=0.006918, over 3449359.18 frames. ], batch size: 129, lr: 1.01e-02, grad_scale: 64.0 2024-03-16 02:08:24,391 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=51533.333333333336, ans=0.1 2024-03-16 02:08:37,268 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_skip_rate, batch_count=51566.666666666664, ans=0.0 2024-03-16 02:08:46,114 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module1.balancer1.prob, batch_count=51600.0, ans=0.125 2024-03-16 02:08:54,909 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer1.prob, batch_count=51600.0, ans=0.125 2024-03-16 02:09:05,780 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=512, metric=3.12 vs. limit=15.0 2024-03-16 02:09:28,415 INFO [train_char.py:689] (0/2) Epoch 31, batch 300, loss[loss=0.05793, simple_loss=0.1074, pruned_loss=0.00423, over 24311.00 frames. ], tot_loss[loss=0.06738, simple_loss=0.1209, pruned_loss=0.00695, over 3750612.37 frames. 
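], batch size: 146, lr: 1.01e-02, grad_scale: 64.0

The grad_scale field in the batch summaries has just doubled from 32.0 to 64.0, and it backs off to 16.0 later in this log; that pattern is characteristic of dynamic loss scaling in mixed-precision training. A minimal sketch of that mechanism with PyTorch's GradScaler follows; it is a generic illustration, not the actual update loop in train_char.py, and the growth/backoff hyperparameters shown are GradScaler's defaults rather than values read from this run.

```python
import torch

scaler = torch.cuda.amp.GradScaler(
    init_scale=32.0,       # a starting scale like the one seen in this log
    growth_factor=2.0,     # double the scale ...
    growth_interval=2000,  # ... after this many consecutive overflow-free steps
    backoff_factor=0.5,    # halve it whenever scaled gradients overflow
)

model = torch.nn.Linear(80, 10).cuda()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(3):
    opt.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(torch.randn(4, 80, device="cuda")).sum()
    scaler.scale(loss).backward()  # backward pass on the scaled loss
    scaler.step(opt)               # unscales grads; skips the step on overflow
    scaler.update()                # grows or backs off the scale
    print(step, scaler.get_scale())
```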
2024-03-16 02:09:49,878 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.345e+01 8.460e+01 1.034e+02 1.389e+02 2.588e+02, threshold=2.067e+02, percent-clipped=8.0 2024-03-16 02:10:11,991 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=15.76 vs. limit=15.0 2024-03-16 02:10:14,051 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.ff2_skip_rate, batch_count=51800.0, ans=0.0 2024-03-16 02:10:33,773 INFO [train_char.py:689] (0/2) Epoch 31, batch 350, loss[loss=0.07845, simple_loss=0.1396, pruned_loss=0.008642, over 24106.00 frames. ], tot_loss[loss=0.06723, simple_loss=0.1207, pruned_loss=0.006889, over 3987774.85 frames. ], batch size: 199, lr: 1.01e-02, grad_scale: 64.0 2024-03-16 02:10:39,007 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=51866.666666666664, ans=0.0 2024-03-16 02:10:41,568 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.nonlin_attention.balancer.max_positive, batch_count=51866.666666666664, ans=0.95 2024-03-16 02:10:41,611 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff2_skip_rate, batch_count=51866.666666666664, ans=0.0 2024-03-16 02:10:59,311 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_skip_rate, batch_count=51933.333333333336, ans=0.0 2024-03-16 02:11:18,654 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.skip_rate, batch_count=51966.666666666664, ans=0.09899494936611666 2024-03-16 02:11:19,884 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer1.prob, batch_count=51966.666666666664, ans=0.125 2024-03-16 02:11:20,233 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=12.04 vs. limit=15.0 2024-03-16 02:11:23,814 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=4.71 vs. limit=15.0 2024-03-16 02:11:29,119 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.whiten, num_groups=1, num_channels=384, metric=3.48 vs. limit=12.0 2024-03-16 02:11:38,299 INFO [train_char.py:689] (0/2) Epoch 31, batch 400, loss[loss=0.05941, simple_loss=0.1045, pruned_loss=0.007155, over 24022.00 frames. ], tot_loss[loss=0.06796, simple_loss=0.1217, pruned_loss=0.007111, over 4176593.33 frames. ], batch size: 381, lr: 1.01e-02, grad_scale: 64.0 2024-03-16 02:11:40,226 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=10.09 vs.
limit=15.0 2024-03-16 02:12:01,016 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.228e+01 9.105e+01 1.103e+02 1.407e+02 2.463e+02, threshold=2.206e+02, percent-clipped=6.0 2024-03-16 02:12:14,922 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=52100.0, ans=0.1 2024-03-16 02:12:23,542 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.conv.5.prob, batch_count=52133.333333333336, ans=0.125 2024-03-16 02:12:29,872 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-16 02:12:36,120 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer1.prob, batch_count=52166.666666666664, ans=0.125 2024-03-16 02:12:43,570 INFO [train_char.py:689] (0/2) Epoch 31, batch 450, loss[loss=0.07083, simple_loss=0.1258, pruned_loss=0.00795, over 24314.00 frames. ], tot_loss[loss=0.06938, simple_loss=0.1241, pruned_loss=0.007345, over 4323639.38 frames. ], batch size: 180, lr: 1.01e-02, grad_scale: 64.0 2024-03-16 02:12:52,699 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=52200.0, ans=0.1 2024-03-16 02:13:10,094 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.conv_module1.whiten, num_groups=1, num_channels=384, metric=2.88 vs. limit=15.0 2024-03-16 02:13:12,530 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.conv_module2.whiten, num_groups=1, num_channels=384, metric=3.32 vs. limit=15.0 2024-03-16 02:13:15,858 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.nonlin_attention.balancer.min_positive, batch_count=52266.666666666664, ans=0.05 2024-03-16 02:13:36,235 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_skip_rate, batch_count=52333.333333333336, ans=0.0 2024-03-16 02:13:36,263 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=52333.333333333336, ans=0.1 2024-03-16 02:13:46,082 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.nonlin_attention.balancer.prob, batch_count=52366.666666666664, ans=0.125 2024-03-16 02:13:47,177 INFO [train_char.py:689] (0/2) Epoch 31, batch 500, loss[loss=0.07378, simple_loss=0.1327, pruned_loss=0.007443, over 24070.00 frames. ], tot_loss[loss=0.07054, simple_loss=0.1261, pruned_loss=0.007467, over 4436872.15 frames. ], batch size: 199, lr: 1.01e-02, grad_scale: 64.0 2024-03-16 02:13:51,612 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=52366.666666666664, ans=0.1 2024-03-16 02:13:56,529 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-31.pt 2024-03-16 02:14:42,635 INFO [train_char.py:689] (0/2) Epoch 32, batch 0, loss[loss=0.04734, simple_loss=0.08084, pruned_loss=0.006917, over 22566.00 frames. ], tot_loss[loss=0.04734, simple_loss=0.08084, pruned_loss=0.006917, over 22566.00 frames. ], batch size: 483, lr: 9.89e-03, grad_scale: 64.0 2024-03-16 02:14:42,636 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 02:14:56,363 INFO [train_char.py:721] (0/2) Epoch 32, validation: loss=0.06052, simple_loss=0.1115, pruned_loss=0.004789, over 657665.00 frames. 
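A training-epoch boundary: epoch-31.pt has just been written next to epoch-30.pt under zipformer/exp_val/. A quick way to inspect such a checkpoint offline is sketched below; it assumes the common icefall layout of a torch-saved dict with a "model" state_dict among other entries, so the keys are printed first rather than trusted.

```python
import torch

# Hedged sketch: inspect an epoch checkpoint written by this run.
ckpt = torch.load("zipformer/exp_val/epoch-31.pt", map_location="cpu")
print(sorted(ckpt.keys()))  # verify the layout before assuming any keys

state_dict = ckpt["model"] if "model" in ckpt else ckpt
n_params = sum(p.numel() for p in state_dict.values())
print(f"tensors: {len(state_dict)}, parameters: {n_params}")
```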
2024-03-16 02:14:56,364 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 02:14:59,352 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=52390.0, ans=0.1 2024-03-16 02:15:04,951 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_skip_rate, batch_count=52390.0, ans=0.0 2024-03-16 02:15:09,984 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.604e+01 8.417e+01 9.526e+01 1.144e+02 3.547e+02, threshold=1.905e+02, percent-clipped=2.0 2024-03-16 02:15:29,590 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.out_combiner.scale_min, batch_count=52456.666666666664, ans=0.2 2024-03-16 02:15:30,840 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.skip_rate, batch_count=52456.666666666664, ans=0.09899494936611666 2024-03-16 02:15:33,557 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.ff3_skip_rate, batch_count=52456.666666666664, ans=0.0 2024-03-16 02:15:59,306 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.conv_module1.whiten, num_groups=1, num_channels=384, metric=3.19 vs. limit=15.0 2024-03-16 02:16:01,376 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_skip_rate, batch_count=52523.333333333336, ans=0.0 2024-03-16 02:16:06,457 INFO [train_char.py:689] (0/2) Epoch 32, batch 50, loss[loss=0.05731, simple_loss=0.1079, pruned_loss=0.00335, over 24330.00 frames. ], tot_loss[loss=0.06617, simple_loss=0.1177, pruned_loss=0.007342, over 1081787.63 frames. ], batch size: 129, lr: 9.88e-03, grad_scale: 64.0 2024-03-16 02:16:32,750 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.attention_skip_rate, batch_count=52590.0, ans=0.0 2024-03-16 02:16:46,018 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=15.96 vs. limit=15.0 2024-03-16 02:16:50,774 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.balancer1.prob, batch_count=52656.666666666664, ans=0.125 2024-03-16 02:16:55,148 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.balancer2.prob, batch_count=52656.666666666664, ans=0.125 2024-03-16 02:16:56,873 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=14.53 vs. limit=15.0 2024-03-16 02:17:04,333 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=12.05 vs. limit=15.0 2024-03-16 02:17:17,642 INFO [train_char.py:689] (0/2) Epoch 32, batch 100, loss[loss=0.05877, simple_loss=0.09957, pruned_loss=0.00899, over 23820.00 frames. ], tot_loss[loss=0.06626, simple_loss=0.1184, pruned_loss=0.007068, over 1908030.10 frames. ], batch size: 439, lr: 9.87e-03, grad_scale: 64.0 2024-03-16 02:17:27,778 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=8.82 vs. 
limit=15.0 2024-03-16 02:17:30,821 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.622e+01 8.593e+01 1.094e+02 1.492e+02 2.911e+02, threshold=2.188e+02, percent-clipped=11.0 2024-03-16 02:17:43,867 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.out_combiner.scale_min, batch_count=52790.0, ans=0.2 2024-03-16 02:17:47,512 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.bypass.scale_min, batch_count=52790.0, ans=0.2 2024-03-16 02:18:00,231 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.balancer2.prob, batch_count=52823.333333333336, ans=0.125 2024-03-16 02:18:10,350 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=52823.333333333336, ans=0.125 2024-03-16 02:18:25,681 INFO [train_char.py:689] (0/2) Epoch 32, batch 150, loss[loss=0.07647, simple_loss=0.1371, pruned_loss=0.007907, over 24090.00 frames. ], tot_loss[loss=0.06645, simple_loss=0.119, pruned_loss=0.006963, over 2555415.70 frames. ], batch size: 199, lr: 9.85e-03, grad_scale: 64.0 2024-03-16 02:18:31,168 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 02:18:38,028 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.84 vs. limit=6.0 2024-03-16 02:18:47,779 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.scale_min, batch_count=52923.333333333336, ans=0.2 2024-03-16 02:18:48,315 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=384, metric=7.59 vs. limit=15.0 2024-03-16 02:19:03,184 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_skip_rate, batch_count=52990.0, ans=0.0 2024-03-16 02:19:12,276 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.self_attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=5.51 vs. limit=6.0 2024-03-16 02:19:21,389 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.04 vs. limit=6.0 2024-03-16 02:19:25,844 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.skip_rate, batch_count=53023.333333333336, ans=0.04949747468305833 2024-03-16 02:19:29,507 INFO [train_char.py:689] (0/2) Epoch 32, batch 200, loss[loss=0.06618, simple_loss=0.1117, pruned_loss=0.01032, over 24145.00 frames. ], tot_loss[loss=0.06658, simple_loss=0.1192, pruned_loss=0.006976, over 3055803.35 frames. ], batch size: 362, lr: 9.84e-03, grad_scale: 64.0 2024-03-16 02:19:38,090 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward2.out_whiten.whitening_limit, batch_count=53056.666666666664, ans=15.0 2024-03-16 02:19:42,180 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.self_attn1.whiten, num_groups=1, num_channels=192, metric=17.88 vs. 
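limit=22.5

The recurring optim.py warnings report quartiles of recent gradient norms together with the clipping threshold derived from them and the fraction of updates clipped. icefall computes these statistics inside its own optimizer; the sketch below only illustrates the idea with stock PyTorch utilities, and the fixed max_norm threshold is an assumption made for the example, not the rule used in optim.py.

```python
import torch

model = torch.nn.Linear(80, 10)
max_norm = 1.0  # illustrative fixed threshold; optim.py derives its own
recent_norms = []

for step in range(200):
    loss = model(torch.randn(4, 80)).pow(2).mean()
    model.zero_grad()
    loss.backward()
    # clip_grad_norm_ returns the total norm computed *before* clipping
    total = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    recent_norms.append(float(total))

norms = torch.tensor(recent_norms)
quartiles = torch.quantile(norms, torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
pct = 100.0 * float((norms > max_norm).float().mean())
print(f"grad-norm quartiles {quartiles.tolist()}, "
      f"threshold={max_norm:.3e}, percent-clipped={pct:.1f}")
```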
2024-03-16 02:19:46,244 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.329e+01 8.565e+01 1.072e+02 1.480e+02 2.596e+02, threshold=2.144e+02, percent-clipped=2.0 2024-03-16 02:19:54,272 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.bypass.scale_min, batch_count=53090.0, ans=0.2 2024-03-16 02:19:55,474 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=53090.0, ans=0.125 2024-03-16 02:19:55,542 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.self_attn_weights.pos_emb_skip_rate, batch_count=53090.0, ans=0.0 2024-03-16 02:20:20,382 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.nonlin_attention.balancer.max_positive, batch_count=53156.666666666664, ans=0.95 2024-03-16 02:20:20,457 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=53156.666666666664, ans=0.0 2024-03-16 02:20:25,464 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass_mid.scale_min, batch_count=53190.0, ans=0.2 2024-03-16 02:20:25,490 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff2_skip_rate, batch_count=53190.0, ans=0.0 2024-03-16 02:20:36,584 INFO [train_char.py:689] (0/2) Epoch 32, batch 250, loss[loss=0.08616, simple_loss=0.1504, pruned_loss=0.01095, over 24082.00 frames. ], tot_loss[loss=0.06722, simple_loss=0.1203, pruned_loss=0.007064, over 3446620.10 frames. ], batch size: 223, lr: 9.83e-03, grad_scale: 64.0 2024-03-16 02:20:39,257 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.attention_skip_rate, batch_count=53223.333333333336, ans=0.0 2024-03-16 02:20:53,332 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=13.28 vs. limit=15.0 2024-03-16 02:21:12,317 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.whiten, num_groups=1, num_channels=384, metric=6.27 vs. limit=12.0 2024-03-16 02:21:20,963 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/checkpoint-16000.pt 2024-03-16 02:21:29,539 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward3.hidden_balancer.prob, batch_count=53323.333333333336, ans=0.125 2024-03-16 02:21:41,419 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=14.58 vs. limit=15.0 2024-03-16 02:21:46,750 INFO [train_char.py:689] (0/2) Epoch 32, batch 300, loss[loss=0.06291, simple_loss=0.1082, pruned_loss=0.008826, over 23995.00 frames. ], tot_loss[loss=0.06707, simple_loss=0.1204, pruned_loss=0.00687, over 3749092.98 frames. ], batch size: 381, lr: 9.82e-03, grad_scale: 64.0 2024-03-16 02:21:49,571 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=53390.0, ans=0.1 2024-03-16 02:21:59,463 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.117e+01 8.738e+01 1.014e+02 1.367e+02 3.046e+02, threshold=2.028e+02, percent-clipped=3.0 2024-03-16 02:22:17,129 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=12.04 vs.
limit=15.0 2024-03-16 02:22:33,027 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer1.prob, batch_count=53490.0, ans=0.125 2024-03-16 02:22:38,224 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.self_attn_weights.pos_emb_skip_rate, batch_count=53490.0, ans=0.0 2024-03-16 02:22:47,076 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=53523.333333333336, ans=0.125 2024-03-16 02:22:48,058 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=53523.333333333336, ans=0.1 2024-03-16 02:22:52,638 INFO [train_char.py:689] (0/2) Epoch 32, batch 350, loss[loss=0.06159, simple_loss=0.1155, pruned_loss=0.003855, over 24399.00 frames. ], tot_loss[loss=0.06773, simple_loss=0.1217, pruned_loss=0.006894, over 3990954.89 frames. ], batch size: 158, lr: 9.80e-03, grad_scale: 64.0 2024-03-16 02:23:30,671 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module2.balancer2.prob, batch_count=53623.333333333336, ans=0.125 2024-03-16 02:23:37,314 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=9.84 vs. limit=15.0 2024-03-16 02:23:57,648 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer1.min_positive, batch_count=53690.0, ans=0.025 2024-03-16 02:23:59,952 INFO [train_char.py:689] (0/2) Epoch 32, batch 400, loss[loss=0.06132, simple_loss=0.1067, pruned_loss=0.007993, over 23952.00 frames. ], tot_loss[loss=0.06856, simple_loss=0.1231, pruned_loss=0.007027, over 4178702.22 frames. ], batch size: 381, lr: 9.79e-03, grad_scale: 64.0 2024-03-16 02:24:01,921 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=384, metric=2.56 vs. limit=15.0 2024-03-16 02:24:12,623 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.176e+01 8.405e+01 1.034e+02 1.494e+02 2.729e+02, threshold=2.068e+02, percent-clipped=9.0 2024-03-16 02:24:54,335 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.scale_min, batch_count=53856.666666666664, ans=0.2 2024-03-16 02:25:04,154 INFO [train_char.py:689] (0/2) Epoch 32, batch 450, loss[loss=0.06489, simple_loss=0.1192, pruned_loss=0.005311, over 24360.00 frames. ], tot_loss[loss=0.06932, simple_loss=0.1245, pruned_loss=0.007046, over 4325807.53 frames. ], batch size: 172, lr: 9.78e-03, grad_scale: 64.0 2024-03-16 02:25:14,233 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=53890.0, ans=0.125 2024-03-16 02:25:48,474 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module1.balancer2.prob, batch_count=53990.0, ans=0.125 2024-03-16 02:26:08,678 INFO [train_char.py:689] (0/2) Epoch 32, batch 500, loss[loss=0.07203, simple_loss=0.1295, pruned_loss=0.007302, over 24133.00 frames. ], tot_loss[loss=0.06997, simple_loss=0.1256, pruned_loss=0.007144, over 4438610.77 frames. 
], batch size: 279, lr: 9.77e-03, grad_scale: 64.0 2024-03-16 02:26:10,233 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.balancer2.prob, batch_count=54056.666666666664, ans=0.125 2024-03-16 02:26:12,724 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-16 02:26:17,447 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-32.pt 2024-03-16 02:27:07,588 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=54080.0, ans=0.1 2024-03-16 02:27:08,654 INFO [train_char.py:689] (0/2) Epoch 33, batch 0, loss[loss=0.05908, simple_loss=0.09993, pruned_loss=0.009115, over 23955.00 frames. ], tot_loss[loss=0.05908, simple_loss=0.09993, pruned_loss=0.009115, over 23955.00 frames. ], batch size: 407, lr: 9.61e-03, grad_scale: 64.0 2024-03-16 02:27:08,655 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 02:27:22,228 INFO [train_char.py:721] (0/2) Epoch 33, validation: loss=0.05978, simple_loss=0.1102, pruned_loss=0.004664, over 657665.00 frames. 2024-03-16 02:27:22,229 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 02:27:26,295 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.862e+01 8.567e+01 1.007e+02 1.294e+02 2.240e+02, threshold=2.013e+02, percent-clipped=1.0 2024-03-16 02:27:30,733 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.bypass.skip_rate, batch_count=54080.0, ans=0.09899494936611666 2024-03-16 02:28:06,023 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.bypass.scale_min, batch_count=54180.0, ans=0.2 2024-03-16 02:28:09,502 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=10.54 vs. limit=15.0 2024-03-16 02:28:19,884 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer2.min_abs, batch_count=54213.333333333336, ans=0.5 2024-03-16 02:28:33,263 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass.scale_min, batch_count=54213.333333333336, ans=0.2 2024-03-16 02:28:35,657 INFO [train_char.py:689] (0/2) Epoch 33, batch 50, loss[loss=0.0703, simple_loss=0.1275, pruned_loss=0.006545, over 24055.00 frames. ], tot_loss[loss=0.06589, simple_loss=0.118, pruned_loss=0.006894, over 1083385.59 frames. ], batch size: 199, lr: 9.60e-03, grad_scale: 64.0 2024-03-16 02:28:41,255 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=54246.666666666664, ans=0.1 2024-03-16 02:28:42,931 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=18.16 vs. limit=15.0 2024-03-16 02:28:44,035 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff2_skip_rate, batch_count=54246.666666666664, ans=0.0 2024-03-16 02:28:58,799 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward3.hidden_balancer.prob, batch_count=54280.0, ans=0.125 2024-03-16 02:29:06,430 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=3.43 vs. 
limit=12.0 2024-03-16 02:29:40,746 INFO [train_char.py:689] (0/2) Epoch 33, batch 100, loss[loss=0.05578, simple_loss=0.1052, pruned_loss=0.003203, over 24279.00 frames. ], tot_loss[loss=0.06587, simple_loss=0.1182, pruned_loss=0.006756, over 1911126.26 frames. ], batch size: 134, lr: 9.59e-03, grad_scale: 64.0 2024-03-16 02:29:44,582 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.212e+01 8.692e+01 1.036e+02 1.314e+02 2.598e+02, threshold=2.072e+02, percent-clipped=6.0 2024-03-16 02:29:49,989 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.attention_skip_rate, batch_count=54413.333333333336, ans=0.0 2024-03-16 02:29:50,502 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=8.25 vs. limit=15.0 2024-03-16 02:29:55,054 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.nonlin_attention.balancer.prob, batch_count=54446.666666666664, ans=0.125 2024-03-16 02:30:13,440 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=54480.0, ans=0.1 2024-03-16 02:30:19,635 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=54480.0, ans=0.125 2024-03-16 02:30:26,274 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.scale_min, batch_count=54513.333333333336, ans=0.2 2024-03-16 02:30:49,288 INFO [train_char.py:689] (0/2) Epoch 33, batch 150, loss[loss=0.06828, simple_loss=0.1244, pruned_loss=0.006103, over 24353.00 frames. ], tot_loss[loss=0.06659, simple_loss=0.1199, pruned_loss=0.006626, over 2555681.45 frames. ], batch size: 180, lr: 9.58e-03, grad_scale: 64.0 2024-03-16 02:30:57,292 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=54580.0, ans=0.0 2024-03-16 02:31:09,199 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff3_skip_rate, batch_count=54613.333333333336, ans=0.0 2024-03-16 02:31:13,025 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_skip_rate, batch_count=54613.333333333336, ans=0.0 2024-03-16 02:31:41,472 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.nonlin_attention.balancer.prob, batch_count=54680.0, ans=0.125 2024-03-16 02:31:45,481 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=54713.333333333336, ans=0.0 2024-03-16 02:31:47,940 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=54713.333333333336, ans=0.0 2024-03-16 02:31:55,565 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer1.min_positive, batch_count=54713.333333333336, ans=0.025 2024-03-16 02:31:58,021 INFO [train_char.py:689] (0/2) Epoch 33, batch 200, loss[loss=0.06215, simple_loss=0.1143, pruned_loss=0.004978, over 23788.00 frames. ], tot_loss[loss=0.0662, simple_loss=0.1194, pruned_loss=0.006523, over 3061236.52 frames. 
], batch size: 107, lr: 9.56e-03, grad_scale: 64.0 2024-03-16 02:32:01,779 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.254e+01 8.119e+01 1.013e+02 1.326e+02 2.731e+02, threshold=2.027e+02, percent-clipped=6.0 2024-03-16 02:32:04,661 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass.skip_rate, batch_count=54746.666666666664, ans=0.07 2024-03-16 02:32:38,048 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff3_skip_rate, batch_count=54846.666666666664, ans=0.0 2024-03-16 02:32:43,185 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=54846.666666666664, ans=0.125 2024-03-16 02:32:57,138 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module2.balancer2.min_positive, batch_count=54880.0, ans=0.05 2024-03-16 02:33:01,901 INFO [train_char.py:689] (0/2) Epoch 33, batch 250, loss[loss=0.07367, simple_loss=0.1361, pruned_loss=0.005634, over 24095.00 frames. ], tot_loss[loss=0.06592, simple_loss=0.1191, pruned_loss=0.006392, over 3446183.17 frames. ], batch size: 199, lr: 9.55e-03, grad_scale: 64.0 2024-03-16 02:33:15,920 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=54946.666666666664, ans=0.125 2024-03-16 02:33:24,635 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=54946.666666666664, ans=0.0 2024-03-16 02:33:33,597 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=54980.0, ans=0.1 2024-03-16 02:33:53,548 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_skip_rate, batch_count=55013.333333333336, ans=0.0 2024-03-16 02:34:11,312 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_skip_rate, batch_count=55080.0, ans=0.0 2024-03-16 02:34:12,303 INFO [train_char.py:689] (0/2) Epoch 33, batch 300, loss[loss=0.0552, simple_loss=0.1031, pruned_loss=0.003649, over 24316.00 frames. ], tot_loss[loss=0.0661, simple_loss=0.1193, pruned_loss=0.006453, over 3753685.69 frames. ], batch size: 146, lr: 9.54e-03, grad_scale: 64.0 2024-03-16 02:34:16,095 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.114e+01 8.552e+01 1.120e+02 1.349e+02 2.763e+02, threshold=2.240e+02, percent-clipped=5.0 2024-03-16 02:34:22,710 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer2.prob, batch_count=55080.0, ans=0.125 2024-03-16 02:34:36,433 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=55146.666666666664, ans=0.1 2024-03-16 02:34:46,966 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer2.prob, batch_count=55146.666666666664, ans=0.125 2024-03-16 02:34:59,722 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer2.prob, batch_count=55180.0, ans=0.125 2024-03-16 02:35:03,869 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.self_attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=5.45 vs. 
limit=6.0 2024-03-16 02:35:13,994 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=4.14 vs. limit=15.0 2024-03-16 02:35:15,976 INFO [train_char.py:689] (0/2) Epoch 33, batch 350, loss[loss=0.06251, simple_loss=0.1126, pruned_loss=0.00621, over 24356.00 frames. ], tot_loss[loss=0.06638, simple_loss=0.1197, pruned_loss=0.006505, over 3991114.83 frames. ], batch size: 158, lr: 9.53e-03, grad_scale: 64.0 2024-03-16 02:35:31,753 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_skip_rate, batch_count=55280.0, ans=0.0 2024-03-16 02:36:02,277 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.conv.2.prob, batch_count=55346.666666666664, ans=0.125 2024-03-16 02:36:23,585 INFO [train_char.py:689] (0/2) Epoch 33, batch 400, loss[loss=0.07216, simple_loss=0.1282, pruned_loss=0.008058, over 24137.00 frames. ], tot_loss[loss=0.06668, simple_loss=0.1202, pruned_loss=0.00658, over 4179516.00 frames. ], batch size: 188, lr: 9.52e-03, grad_scale: 64.0 2024-03-16 02:36:27,446 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.306e+01 8.897e+01 1.076e+02 1.362e+02 2.416e+02, threshold=2.151e+02, percent-clipped=4.0 2024-03-16 02:36:29,052 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.attention_skip_rate, batch_count=55413.333333333336, ans=0.0 2024-03-16 02:36:49,156 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module2.balancer2.prob, batch_count=55480.0, ans=0.125 2024-03-16 02:36:57,040 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer2.prob, batch_count=55480.0, ans=0.125 2024-03-16 02:37:20,727 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-16 02:37:28,075 INFO [train_char.py:689] (0/2) Epoch 33, batch 450, loss[loss=0.07319, simple_loss=0.1291, pruned_loss=0.008621, over 24145.00 frames. ], tot_loss[loss=0.06789, simple_loss=0.1223, pruned_loss=0.006759, over 4325813.33 frames. ], batch size: 279, lr: 9.50e-03, grad_scale: 64.0 2024-03-16 02:37:29,673 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.balancer2.prob, batch_count=55580.0, ans=0.125 2024-03-16 02:37:33,479 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer2.prob, batch_count=55580.0, ans=0.125 2024-03-16 02:37:40,706 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer1.prob, batch_count=55613.333333333336, ans=0.125 2024-03-16 02:38:16,113 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.skip_rate, batch_count=55680.0, ans=0.07 2024-03-16 02:38:31,152 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer2.min_positive, batch_count=55746.666666666664, ans=0.05 2024-03-16 02:38:32,109 INFO [train_char.py:689] (0/2) Epoch 33, batch 500, loss[loss=0.07189, simple_loss=0.133, pruned_loss=0.005411, over 24142.00 frames. ], tot_loss[loss=0.06862, simple_loss=0.1238, pruned_loss=0.006727, over 4438771.84 frames. 
], batch size: 251, lr: 9.49e-03, grad_scale: 64.0 2024-03-16 02:38:35,966 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.980e+01 8.635e+01 9.935e+01 1.233e+02 2.467e+02, threshold=1.987e+02, percent-clipped=3.0 2024-03-16 02:38:41,011 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-33.pt 2024-03-16 02:39:33,648 INFO [train_char.py:689] (0/2) Epoch 34, batch 0, loss[loss=0.06501, simple_loss=0.116, pruned_loss=0.007, over 24370.00 frames. ], tot_loss[loss=0.06501, simple_loss=0.116, pruned_loss=0.007, over 24370.00 frames. ], batch size: 172, lr: 9.35e-03, grad_scale: 64.0 2024-03-16 02:39:33,649 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 02:39:46,445 INFO [zipformer.py:1858] (0/2) name=encoder.encoders.3.encoder.layers.2.self_attn_weights, attn_weights_entropy = tensor([2.5710, 2.0564, 2.6220, 2.4191, 2.3460, 1.8604, 2.0802, 2.6137], device='cuda:0') 2024-03-16 02:39:52,700 INFO [train_char.py:721] (0/2) Epoch 34, validation: loss=0.05905, simple_loss=0.1091, pruned_loss=0.004522, over 657665.00 frames. 2024-03-16 02:39:52,701 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 02:39:56,388 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.conv_module1.whiten, num_groups=1, num_channels=192, metric=7.23 vs. limit=15.0 2024-03-16 02:39:57,030 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_skip_rate, batch_count=55770.0, ans=0.0 2024-03-16 02:40:16,755 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass_mid.scale_min, batch_count=55803.333333333336, ans=0.2 2024-03-16 02:40:26,565 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=384, metric=5.46 vs. limit=15.0 2024-03-16 02:40:46,298 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.ff2_skip_rate, batch_count=55870.0, ans=0.0 2024-03-16 02:41:02,239 INFO [train_char.py:689] (0/2) Epoch 34, batch 50, loss[loss=0.05888, simple_loss=0.1046, pruned_loss=0.006601, over 24136.00 frames. ], tot_loss[loss=0.06373, simple_loss=0.1154, pruned_loss=0.006033, over 1089461.20 frames. ], batch size: 362, lr: 9.33e-03, grad_scale: 64.0 2024-03-16 02:41:11,830 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.balancer2.prob, batch_count=55936.666666666664, ans=0.125 2024-03-16 02:41:29,937 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=56003.333333333336, ans=0.1 2024-03-16 02:41:44,513 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=56003.333333333336, ans=0.1 2024-03-16 02:41:45,782 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer1.prob, batch_count=56003.333333333336, ans=0.125 2024-03-16 02:42:00,104 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_skip_rate, batch_count=56070.0, ans=0.0 2024-03-16 02:42:08,580 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.955e+01 8.419e+01 1.063e+02 1.403e+02 2.842e+02, threshold=2.126e+02, percent-clipped=6.0 2024-03-16 02:42:13,640 INFO [train_char.py:689] (0/2) Epoch 34, batch 100, loss[loss=0.05913, simple_loss=0.1078, pruned_loss=0.005248, over 24198.00 frames. 
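], tot_loss[loss=0.06463, simple_loss=0.1166, pruned_loss=0.006314, over 1912848.68 frames. ], batch size: 344, lr: 9.32e-03, grad_scale: 64.0

The attn_weights_entropy diagnostic printed during the epoch-34 validation above gives one value per attention head; it is presumably the Shannon entropy of each head's attention distribution, averaged over positions (the exact reduction used in zipformer.py is an assumption here). A self-contained sketch:

```python
import torch

def attn_weights_entropy(attn: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
    """attn: (num_heads, tgt_len, src_len), each row summing to 1."""
    ent = -(attn * (attn + eps).log()).sum(dim=-1)  # entropy per (head, position)
    return ent.mean(dim=-1)                         # average over target positions

# Eight heads, matching the eight values printed above.
attn = torch.softmax(torch.randn(8, 50, 50), dim=-1)
print(attn_weights_entropy(attn))  # larger values = more diffuse attention
```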
2024-03-16 02:42:14,333 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=384, metric=2.53 vs. limit=15.0 2024-03-16 02:42:20,455 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_skip_rate, batch_count=56103.333333333336, ans=0.0 2024-03-16 02:42:35,239 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.bypass.scale_min, batch_count=56136.666666666664, ans=0.2 2024-03-16 02:42:44,221 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=56170.0, ans=0.1 2024-03-16 02:43:00,764 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass_mid.scale_min, batch_count=56203.333333333336, ans=0.2 2024-03-16 02:43:14,860 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer1.prob, batch_count=56236.666666666664, ans=0.125 2024-03-16 02:43:16,495 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=384, metric=20.65 vs. limit=22.5 2024-03-16 02:43:22,199 INFO [train_char.py:689] (0/2) Epoch 34, batch 150, loss[loss=0.07046, simple_loss=0.1249, pruned_loss=0.008002, over 24093.00 frames. ], tot_loss[loss=0.06475, simple_loss=0.1164, pruned_loss=0.006561, over 2553111.41 frames. ], batch size: 279, lr: 9.31e-03, grad_scale: 64.0 2024-03-16 02:43:54,476 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 02:44:09,514 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.2.self_attn_weights, loss-sum=0.000e+00 2024-03-16 02:44:17,204 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module1.balancer1.prob, batch_count=56403.333333333336, ans=0.125 2024-03-16 02:44:20,864 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.260e+01 8.335e+01 1.032e+02 1.458e+02 2.258e+02, threshold=2.065e+02, percent-clipped=2.0 2024-03-16 02:44:26,203 INFO [train_char.py:689] (0/2) Epoch 34, batch 200, loss[loss=0.06066, simple_loss=0.1066, pruned_loss=0.007374, over 24020.00 frames. ], tot_loss[loss=0.06494, simple_loss=0.1171, pruned_loss=0.006404, over 3054261.87 frames.
], batch size: 381, lr: 9.30e-03, grad_scale: 64.0 2024-03-16 02:44:34,063 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module2.balancer1.prob, batch_count=56436.666666666664, ans=0.125 2024-03-16 02:44:34,154 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.balancer1.prob, batch_count=56436.666666666664, ans=0.125 2024-03-16 02:44:37,807 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=56470.0, ans=0.1 2024-03-16 02:44:44,313 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.hidden_balancer.prob, batch_count=56470.0, ans=0.125 2024-03-16 02:45:06,318 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.attention_skip_rate, batch_count=56503.333333333336, ans=0.0 2024-03-16 02:45:11,459 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_skip_rate, batch_count=56536.666666666664, ans=0.0 2024-03-16 02:45:26,787 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=56570.0, ans=0.125 2024-03-16 02:45:28,055 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.attention_skip_rate, batch_count=56570.0, ans=0.0 2024-03-16 02:45:33,111 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff2_skip_rate, batch_count=56603.333333333336, ans=0.0 2024-03-16 02:45:33,951 INFO [train_char.py:689] (0/2) Epoch 34, batch 250, loss[loss=0.05597, simple_loss=0.1028, pruned_loss=0.004555, over 24369.00 frames. ], tot_loss[loss=0.06467, simple_loss=0.1168, pruned_loss=0.006284, over 3442987.77 frames. 
], batch size: 152, lr: 9.29e-03, grad_scale: 32.0 2024-03-16 02:45:40,405 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.hidden_balancer.prob, batch_count=56603.333333333336, ans=0.125 2024-03-16 02:45:42,789 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass.skip_rate, batch_count=56603.333333333336, ans=0.04949747468305833 2024-03-16 02:45:59,432 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module2.balancer2.prob, batch_count=56636.666666666664, ans=0.125 2024-03-16 02:46:22,661 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=56703.333333333336, ans=0.1 2024-03-16 02:46:23,939 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer2.prob, batch_count=56703.333333333336, ans=0.125 2024-03-16 02:46:33,036 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.ff3_skip_rate, batch_count=56736.666666666664, ans=0.0 2024-03-16 02:46:37,815 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.626e+01 8.444e+01 1.114e+02 1.603e+02 3.194e+02, threshold=2.227e+02, percent-clipped=9.0 2024-03-16 02:46:38,185 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer1.prob, batch_count=56736.666666666664, ans=0.125 2024-03-16 02:46:40,722 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.balancer1.prob, batch_count=56770.0, ans=0.125 2024-03-16 02:46:41,678 INFO [train_char.py:689] (0/2) Epoch 34, batch 300, loss[loss=0.07584, simple_loss=0.1374, pruned_loss=0.007123, over 24121.00 frames. ], tot_loss[loss=0.0649, simple_loss=0.1172, pruned_loss=0.006284, over 3749186.02 frames. ], batch size: 199, lr: 9.28e-03, grad_scale: 32.0 2024-03-16 02:47:18,828 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=56870.0, ans=0.1 2024-03-16 02:47:47,916 INFO [train_char.py:689] (0/2) Epoch 34, batch 350, loss[loss=0.07171, simple_loss=0.132, pruned_loss=0.005711, over 24087.00 frames. ], tot_loss[loss=0.06557, simple_loss=0.1185, pruned_loss=0.006332, over 3989613.63 frames. ], batch size: 199, lr: 9.27e-03, grad_scale: 32.0 2024-03-16 02:47:52,987 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=56936.666666666664, ans=0.1 2024-03-16 02:48:01,410 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass_mid.scale_min, batch_count=56970.0, ans=0.2 2024-03-16 02:48:44,694 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=5.47 vs. limit=15.0 2024-03-16 02:48:48,653 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=512, metric=5.13 vs. limit=15.0 2024-03-16 02:48:49,082 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.754e+01 8.377e+01 1.088e+02 1.366e+02 2.811e+02, threshold=2.177e+02, percent-clipped=2.0 2024-03-16 02:48:49,808 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=512, metric=3.06 vs. 
limit=15.0 2024-03-16 02:48:53,045 INFO [train_char.py:689] (0/2) Epoch 34, batch 400, loss[loss=0.08046, simple_loss=0.1401, pruned_loss=0.01043, over 24109.00 frames. ], tot_loss[loss=0.0662, simple_loss=0.1197, pruned_loss=0.006372, over 4178749.97 frames. ], batch size: 251, lr: 9.25e-03, grad_scale: 32.0 2024-03-16 02:49:06,559 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=57136.666666666664, ans=0.0 2024-03-16 02:49:08,197 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=9.62 vs. limit=15.0 2024-03-16 02:49:11,278 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_skip_rate, batch_count=57136.666666666664, ans=0.0 2024-03-16 02:49:18,712 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=57170.0, ans=0.1 2024-03-16 02:49:30,256 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module1.balancer1.prob, batch_count=57170.0, ans=0.125 2024-03-16 02:49:33,975 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=57203.333333333336, ans=0.0 2024-03-16 02:49:46,784 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass.scale_min, batch_count=57236.666666666664, ans=0.2 2024-03-16 02:49:49,150 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=57236.666666666664, ans=0.0 2024-03-16 02:49:51,549 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer1.min_positive, batch_count=57236.666666666664, ans=0.025 2024-03-16 02:49:58,753 INFO [train_char.py:689] (0/2) Epoch 34, batch 450, loss[loss=0.07577, simple_loss=0.1365, pruned_loss=0.007502, over 24159.00 frames. ], tot_loss[loss=0.06721, simple_loss=0.1215, pruned_loss=0.006451, over 4323851.07 frames. ], batch size: 251, lr: 9.24e-03, grad_scale: 16.0 2024-03-16 02:50:03,968 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.bypass.scale_min, batch_count=57270.0, ans=0.2 2024-03-16 02:50:28,892 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer2.prob, batch_count=57336.666666666664, ans=0.125 2024-03-16 02:50:38,126 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=8.86 vs. limit=15.0 2024-03-16 02:50:42,705 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=57370.0, ans=0.125 2024-03-16 02:51:01,376 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.622e+01 8.077e+01 9.168e+01 1.186e+02 1.930e+02, threshold=1.834e+02, percent-clipped=1.0 2024-03-16 02:51:03,972 INFO [train_char.py:689] (0/2) Epoch 34, batch 500, loss[loss=0.07605, simple_loss=0.1374, pruned_loss=0.007359, over 24077.00 frames. ], tot_loss[loss=0.06814, simple_loss=0.1232, pruned_loss=0.006533, over 4436999.39 frames. 
], batch size: 199, lr: 9.23e-03, grad_scale: 16.0 2024-03-16 02:51:05,549 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module1.balancer1.prob, batch_count=57436.666666666664, ans=0.125 2024-03-16 02:51:13,020 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-34.pt 2024-03-16 02:52:05,660 INFO [train_char.py:689] (0/2) Epoch 35, batch 0, loss[loss=0.06648, simple_loss=0.1222, pruned_loss=0.005367, over 24120.00 frames. ], tot_loss[loss=0.06648, simple_loss=0.1222, pruned_loss=0.005367, over 24120.00 frames. ], batch size: 188, lr: 9.10e-03, grad_scale: 32.0 2024-03-16 02:52:05,661 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 02:52:19,447 INFO [train_char.py:721] (0/2) Epoch 35, validation: loss=0.05958, simple_loss=0.11, pruned_loss=0.004554, over 657665.00 frames. 2024-03-16 02:52:19,448 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 02:53:04,050 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer1.min_positive, batch_count=57560.0, ans=0.025 2024-03-16 02:53:09,523 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=57560.0, ans=0.1 2024-03-16 02:53:37,224 INFO [train_char.py:689] (0/2) Epoch 35, batch 50, loss[loss=0.07595, simple_loss=0.1351, pruned_loss=0.008397, over 24106.00 frames. ], tot_loss[loss=0.06407, simple_loss=0.1162, pruned_loss=0.005951, over 1083158.62 frames. ], batch size: 199, lr: 9.08e-03, grad_scale: 32.0 2024-03-16 02:54:00,609 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.hidden_balancer.prob, batch_count=57660.0, ans=0.125 2024-03-16 02:54:36,878 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.719e+01 8.040e+01 1.010e+02 1.352e+02 2.642e+02, threshold=2.021e+02, percent-clipped=9.0 2024-03-16 02:54:37,536 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=5.01 vs. limit=15.0 2024-03-16 02:54:44,499 INFO [scaling.py:1023] (0/2) Whitening: name=encoder_embed.out_whiten, num_groups=1, num_channels=192, metric=7.13 vs. limit=8.0 2024-03-16 02:54:48,503 INFO [train_char.py:689] (0/2) Epoch 35, batch 100, loss[loss=0.06699, simple_loss=0.1222, pruned_loss=0.005876, over 24403.00 frames. ], tot_loss[loss=0.06451, simple_loss=0.117, pruned_loss=0.006002, over 1904971.83 frames. ], batch size: 158, lr: 9.07e-03, grad_scale: 32.0 2024-03-16 02:55:00,050 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.attention_skip_rate, batch_count=57826.666666666664, ans=0.0 2024-03-16 02:55:00,753 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.whiten, num_groups=1, num_channels=192, metric=4.39 vs. limit=12.0 2024-03-16 02:55:52,840 INFO [train_char.py:689] (0/2) Epoch 35, batch 150, loss[loss=0.06148, simple_loss=0.1109, pruned_loss=0.006021, over 24180.00 frames. ], tot_loss[loss=0.06318, simple_loss=0.1148, pruned_loss=0.005762, over 2554087.26 frames. 
], batch size: 344, lr: 9.06e-03, grad_scale: 32.0 2024-03-16 02:56:00,868 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass_mid.scale_min, batch_count=57960.0, ans=0.2 2024-03-16 02:56:16,121 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.nonlin_attention.balancer.prob, batch_count=57993.333333333336, ans=0.125 2024-03-16 02:56:28,995 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module2.balancer1.prob, batch_count=58026.666666666664, ans=0.125 2024-03-16 02:56:29,042 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=58026.666666666664, ans=0.1 2024-03-16 02:56:32,869 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.nonlin_attention.balancer.prob, batch_count=58026.666666666664, ans=0.125 2024-03-16 02:56:41,753 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer2.prob, batch_count=58060.0, ans=0.125 2024-03-16 02:56:41,792 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.bypass_mid.scale_min, batch_count=58060.0, ans=0.2 2024-03-16 02:56:50,352 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.885e+01 8.780e+01 1.179e+02 1.482e+02 2.837e+02, threshold=2.359e+02, percent-clipped=8.0 2024-03-16 02:57:01,866 INFO [train_char.py:689] (0/2) Epoch 35, batch 200, loss[loss=0.07989, simple_loss=0.1456, pruned_loss=0.007076, over 24144.00 frames. ], tot_loss[loss=0.06318, simple_loss=0.115, pruned_loss=0.005674, over 3057216.54 frames. ], batch size: 251, lr: 9.05e-03, grad_scale: 32.0 2024-03-16 02:57:07,145 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer2.prob, batch_count=58126.666666666664, ans=0.125 2024-03-16 02:57:13,397 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=58160.0, ans=0.0 2024-03-16 02:57:13,437 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.bypass.skip_rate, batch_count=58160.0, ans=0.09899494936611666 2024-03-16 02:57:27,461 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=58193.333333333336, ans=0.0 2024-03-16 02:58:08,629 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass.skip_rate, batch_count=58293.333333333336, ans=0.04949747468305833 2024-03-16 02:58:09,633 INFO [train_char.py:689] (0/2) Epoch 35, batch 250, loss[loss=0.05554, simple_loss=0.1042, pruned_loss=0.003436, over 24282.00 frames. ], tot_loss[loss=0.06332, simple_loss=0.1151, pruned_loss=0.005776, over 3449133.91 frames. ], batch size: 140, lr: 9.04e-03, grad_scale: 32.0 2024-03-16 02:58:24,662 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=192, metric=9.05 vs. 
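
The scaling.py:214 lines each report a named ScheduledFloat: a hyper-parameter (dropout_p, skip rates, balancer probs, bypass scale_min, and so on) whose value "ans" is a function of batch_count, so regularization can be strong early in training and relax later. A minimal reimplementation of the idea, assuming piecewise-linear interpolation between (batch_count, value) breakpoints, which is consistent with how the logged values evolve:

class ScheduledFloat:
    """A float-like value given by piecewise-linear interpolation over
    (batch_count, value) breakpoints; constant outside the range."""
    def __init__(self, *points):
        self.points = sorted(points)   # e.g. (0.0, 0.2), (20000.0, 0.0)
        self.batch_count = 0.0

    def __float__(self):
        x, pts = self.batch_count, self.points
        if x <= pts[0][0]:
            return float(pts[0][1])
        if x >= pts[-1][0]:
            return float(pts[-1][1])
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return float(y0 + t * (y1 - y0))

# e.g. conv_skip_rate = ScheduledFloat((0.0, 0.2), (20000.0, 0.0))
# conv_skip_rate.batch_count = 57236.7; float(conv_skip_rate) -> 0.0
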
limit=15.0 2024-03-16 02:58:41,832 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer1.prob, batch_count=58360.0, ans=0.125 2024-03-16 02:58:43,137 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.attention_skip_rate, batch_count=58360.0, ans=0.0 2024-03-16 02:58:49,306 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=58393.333333333336, ans=0.125 2024-03-16 02:58:57,476 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=13.55 vs. limit=15.0 2024-03-16 02:59:04,959 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.348e+01 8.615e+01 1.130e+02 1.487e+02 2.732e+02, threshold=2.259e+02, percent-clipped=2.0 2024-03-16 02:59:07,946 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=58426.666666666664, ans=0.1 2024-03-16 02:59:16,495 INFO [train_char.py:689] (0/2) Epoch 35, batch 300, loss[loss=0.06132, simple_loss=0.1146, pruned_loss=0.004011, over 24383.00 frames. ], tot_loss[loss=0.06367, simple_loss=0.1158, pruned_loss=0.005777, over 3756817.44 frames. ], batch size: 152, lr: 9.03e-03, grad_scale: 32.0 2024-03-16 02:59:28,402 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=4.14 vs. limit=10.0 2024-03-16 02:59:41,137 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=12.10 vs. limit=15.0 2024-03-16 02:59:51,075 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=7.54 vs. limit=10.0 2024-03-16 03:00:11,677 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass_mid.scale_min, batch_count=58593.333333333336, ans=0.2 2024-03-16 03:00:14,178 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=58593.333333333336, ans=0.1 2024-03-16 03:00:15,497 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer2.prob, batch_count=58593.333333333336, ans=0.125 2024-03-16 03:00:21,210 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=13.69 vs. limit=15.0 2024-03-16 03:00:21,641 INFO [train_char.py:689] (0/2) Epoch 35, batch 350, loss[loss=0.07569, simple_loss=0.1384, pruned_loss=0.00648, over 24150.00 frames. ], tot_loss[loss=0.06409, simple_loss=0.1165, pruned_loss=0.005835, over 3997513.51 frames. 
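
The Whitening lines compare a per-module "metric" against a "limit" (7.5, 10.0, 12.0, 15.0, 22.5 in this run). A whitening metric of this kind is 1.0 when the covariance of the activations (within each channel group) is a multiple of the identity and grows as the covariance becomes more anisotropic, so a module is only nudged when its metric exceeds the limit. A sketch of one metric with exactly that property, offered as a plausible reading rather than scaling.py's exact formula:

import torch

def whitening_metric(x: torch.Tensor, num_groups: int = 1) -> float:
    """x: (num_frames, num_channels). Returns >= 1.0, with equality iff
    the within-group covariance is c * identity."""
    num_frames, num_channels = x.shape
    d = num_channels // num_groups
    x = x.reshape(num_frames, num_groups, d).transpose(0, 1)  # (groups, frames, d)
    x = x - x.mean(dim=1, keepdim=True)
    cov = torch.matmul(x.transpose(1, 2), x) / num_frames     # (groups, d, d)
    tr = cov.diagonal(dim1=1, dim2=2).sum(dim=1)              # trace(C)
    tr_sq = (cov * cov).sum(dim=(1, 2))                       # trace(C @ C)
    # Cauchy-Schwarz gives d * tr(C^2) / tr(C)^2 >= 1, equality iff C = c*I.
    return (d * tr_sq / tr ** 2).mean().item()
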
], batch size: 266, lr: 9.02e-03, grad_scale: 32.0 2024-03-16 03:00:27,273 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=58626.666666666664, ans=0.125 2024-03-16 03:00:33,744 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer1.prob, batch_count=58660.0, ans=0.125 2024-03-16 03:01:07,232 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=58726.666666666664, ans=0.1 2024-03-16 03:01:08,425 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=58726.666666666664, ans=0.1 2024-03-16 03:01:15,780 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.986e+01 8.331e+01 1.020e+02 1.326e+02 2.718e+02, threshold=2.039e+02, percent-clipped=3.0 2024-03-16 03:01:27,153 INFO [train_char.py:689] (0/2) Epoch 35, batch 400, loss[loss=0.06062, simple_loss=0.09998, pruned_loss=0.01063, over 23939.00 frames. ], tot_loss[loss=0.06499, simple_loss=0.118, pruned_loss=0.005977, over 4183885.15 frames. ], batch size: 407, lr: 9.01e-03, grad_scale: 32.0 2024-03-16 03:01:31,093 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer2.prob, batch_count=58793.333333333336, ans=0.125 2024-03-16 03:01:34,134 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.self_attn2.whiten, num_groups=1, num_channels=512, metric=12.50 vs. limit=22.5 2024-03-16 03:01:34,877 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer1.prob, batch_count=58793.333333333336, ans=0.125 2024-03-16 03:01:48,156 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=12.49 vs. limit=15.0 2024-03-16 03:01:51,444 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=58826.666666666664, ans=0.125 2024-03-16 03:01:55,028 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer1.min_positive, batch_count=58860.0, ans=0.025 2024-03-16 03:02:33,079 INFO [train_char.py:689] (0/2) Epoch 35, batch 450, loss[loss=0.07534, simple_loss=0.1381, pruned_loss=0.006279, over 24133.00 frames. ], tot_loss[loss=0.06615, simple_loss=0.1203, pruned_loss=0.006011, over 4329871.96 frames. ], batch size: 223, lr: 9.00e-03, grad_scale: 32.0 2024-03-16 03:02:35,853 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=58960.0, ans=0.1 2024-03-16 03:02:48,274 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.scale_min, batch_count=58993.333333333336, ans=0.2 2024-03-16 03:03:25,661 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=2.29 vs. 
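
In each train_char.py:689 line, loss[... over N frames] is the current batch alone, while tot_loss[... over M frames] aggregates many batches, weighted by their frame counts. The core bookkeeping amounts to the sketch below (the repo's actual tracker may additionally decay older batches, which this does not attempt):

class FrameWeightedAverage:
    """Running average of a loss, weighted by frames per batch."""
    def __init__(self):
        self.weighted_sum = 0.0
        self.frames = 0.0

    def update(self, loss: float, num_frames: float) -> None:
        self.weighted_sum += loss * num_frames
        self.frames += num_frames

    @property
    def value(self) -> float:
        return self.weighted_sum / max(self.frames, 1.0)

# e.g. folding Epoch 35 batch 400 (loss=0.06062 over 23939 frames) into a
# tracker already holding ~4.18M frames barely moves the average, which is
# why tot_loss changes so slowly late in an epoch.
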
limit=12.0 2024-03-16 03:03:26,320 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.321e+01 8.212e+01 9.902e+01 1.368e+02 2.400e+02, threshold=1.980e+02, percent-clipped=4.0 2024-03-16 03:03:36,659 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module2.balancer1.max_abs, batch_count=59126.666666666664, ans=10.0 2024-03-16 03:03:37,703 INFO [train_char.py:689] (0/2) Epoch 35, batch 500, loss[loss=0.07131, simple_loss=0.1305, pruned_loss=0.006032, over 24065.00 frames. ], tot_loss[loss=0.06706, simple_loss=0.1219, pruned_loss=0.006092, over 4439759.13 frames. ], batch size: 251, lr: 8.99e-03, grad_scale: 32.0 2024-03-16 03:03:47,216 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-35.pt 2024-03-16 03:04:38,877 INFO [train_char.py:689] (0/2) Epoch 36, batch 0, loss[loss=0.06619, simple_loss=0.123, pruned_loss=0.004687, over 24363.00 frames. ], tot_loss[loss=0.06619, simple_loss=0.123, pruned_loss=0.004687, over 24363.00 frames. ], batch size: 180, lr: 8.86e-03, grad_scale: 32.0 2024-03-16 03:04:38,877 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 03:04:46,598 INFO [zipformer.py:1858] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([4.2830, 3.6826, 3.8557, 3.7978], device='cuda:0') 2024-03-16 03:04:52,703 INFO [train_char.py:721] (0/2) Epoch 36, validation: loss=0.05972, simple_loss=0.1105, pruned_loss=0.004482, over 657665.00 frames. 2024-03-16 03:04:52,704 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 03:04:58,540 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward3.hidden_balancer.prob, batch_count=59150.0, ans=0.125 2024-03-16 03:05:06,788 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.ff2_skip_rate, batch_count=59183.333333333336, ans=0.0 2024-03-16 03:05:13,392 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=59183.333333333336, ans=0.125 2024-03-16 03:06:02,824 INFO [train_char.py:689] (0/2) Epoch 36, batch 50, loss[loss=0.0578, simple_loss=0.1083, pruned_loss=0.003649, over 24239.00 frames. ], tot_loss[loss=0.06331, simple_loss=0.1147, pruned_loss=0.005953, over 1081236.03 frames. ], batch size: 134, lr: 8.85e-03, grad_scale: 32.0 2024-03-16 03:06:19,727 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.balancer1.prob, batch_count=59350.0, ans=0.125 2024-03-16 03:06:40,168 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass.skip_rate, batch_count=59383.333333333336, ans=0.04949747468305833 2024-03-16 03:06:43,316 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=12.43 vs. limit=15.0 2024-03-16 03:06:53,622 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=12.59 vs. 
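
During a validation pass, the zipformer.py:1858 lines dump attn_weights_entropy for selected self-attention modules: the average entropy of each head's attention distribution. Values near zero would indicate heads collapsing onto a single key; the 3.7 to 4.3 nats printed above correspond to attention spread over dozens of frames. A hedged sketch of the statistic (the exact reduction in zipformer.py may differ):

import torch

def attn_weights_entropy(attn: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
    """attn: (num_heads, query_len, key_len) with rows summing to 1.
    Returns the mean entropy in nats per head."""
    ent = -(attn * (attn + eps).log()).sum(dim=-1)  # (num_heads, query_len)
    return ent.mean(dim=-1)                         # (num_heads,)
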
limit=15.0 2024-03-16 03:06:54,085 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.032e+01 8.670e+01 1.104e+02 1.415e+02 2.881e+02, threshold=2.207e+02, percent-clipped=7.0 2024-03-16 03:07:08,427 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 03:07:14,601 INFO [train_char.py:689] (0/2) Epoch 36, batch 100, loss[loss=0.0713, simple_loss=0.1288, pruned_loss=0.00692, over 24326.00 frames. ], tot_loss[loss=0.06356, simple_loss=0.1153, pruned_loss=0.005911, over 1909705.85 frames. ], batch size: 180, lr: 8.84e-03, grad_scale: 32.0 2024-03-16 03:07:39,001 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.prob, batch_count=59516.666666666664, ans=0.125 2024-03-16 03:07:40,237 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module2.balancer2.prob, batch_count=59516.666666666664, ans=0.125 2024-03-16 03:07:51,778 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=59550.0, ans=0.125 2024-03-16 03:07:55,625 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=59550.0, ans=0.125 2024-03-16 03:08:17,587 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer1.prob, batch_count=59616.666666666664, ans=0.125 2024-03-16 03:08:19,535 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.whiten, num_groups=1, num_channels=384, metric=7.07 vs. limit=12.0 2024-03-16 03:08:23,810 INFO [train_char.py:689] (0/2) Epoch 36, batch 150, loss[loss=0.05313, simple_loss=0.1023, pruned_loss=0.001984, over 24261.00 frames. ], tot_loss[loss=0.06438, simple_loss=0.1168, pruned_loss=0.005965, over 2551963.10 frames. ], batch size: 140, lr: 8.83e-03, grad_scale: 32.0 2024-03-16 03:08:31,873 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.prob, batch_count=59650.0, ans=0.125 2024-03-16 03:08:31,945 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=59650.0, ans=0.1 2024-03-16 03:08:32,515 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=256, metric=11.04 vs. limit=15.0 2024-03-16 03:08:34,476 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass_mid.scale_min, batch_count=59650.0, ans=0.2 2024-03-16 03:08:35,945 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=59683.333333333336, ans=0.1 2024-03-16 03:08:52,781 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.scale_min, batch_count=59716.666666666664, ans=0.2 2024-03-16 03:08:54,430 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=10.86 vs. limit=15.0 2024-03-16 03:09:00,780 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=384, metric=6.90 vs. 
limit=15.0 2024-03-16 03:09:07,548 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.412e+01 8.156e+01 1.033e+02 1.452e+02 2.863e+02, threshold=2.065e+02, percent-clipped=1.0 2024-03-16 03:09:24,606 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer1.prob, batch_count=59783.333333333336, ans=0.125 2024-03-16 03:09:28,276 INFO [train_char.py:689] (0/2) Epoch 36, batch 200, loss[loss=0.0502, simple_loss=0.0952, pruned_loss=0.0026, over 24130.00 frames. ], tot_loss[loss=0.06405, simple_loss=0.1162, pruned_loss=0.005966, over 3054082.38 frames. ], batch size: 122, lr: 8.81e-03, grad_scale: 32.0 2024-03-16 03:09:50,923 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer2.min_abs, batch_count=59850.0, ans=0.5 2024-03-16 03:10:23,155 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=59950.0, ans=0.125 2024-03-16 03:10:36,402 INFO [train_char.py:689] (0/2) Epoch 36, batch 250, loss[loss=0.06561, simple_loss=0.1165, pruned_loss=0.007342, over 24225.00 frames. ], tot_loss[loss=0.06383, simple_loss=0.1158, pruned_loss=0.005912, over 3444251.57 frames. ], batch size: 328, lr: 8.80e-03, grad_scale: 32.0 2024-03-16 03:10:43,168 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=59983.333333333336, ans=0.0 2024-03-16 03:11:03,682 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.attention_skip_rate, batch_count=60016.666666666664, ans=0.0 2024-03-16 03:11:23,805 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.765e+01 9.277e+01 1.100e+02 1.418e+02 3.207e+02, threshold=2.200e+02, percent-clipped=6.0 2024-03-16 03:11:27,146 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=10.29 vs. limit=15.0 2024-03-16 03:11:44,087 INFO [train_char.py:689] (0/2) Epoch 36, batch 300, loss[loss=0.05698, simple_loss=0.1043, pruned_loss=0.004825, over 24294.00 frames. ], tot_loss[loss=0.06449, simple_loss=0.1172, pruned_loss=0.005905, over 3753641.84 frames. ], batch size: 140, lr: 8.79e-03, grad_scale: 32.0 2024-03-16 03:12:17,454 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.prob, batch_count=60216.666666666664, ans=0.125 2024-03-16 03:12:28,927 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.balancer2.prob, batch_count=60250.0, ans=0.125 2024-03-16 03:12:33,937 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 03:12:45,179 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 03:12:53,266 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=12.25 vs. limit=15.0 2024-03-16 03:12:53,583 INFO [train_char.py:689] (0/2) Epoch 36, batch 350, loss[loss=0.06796, simple_loss=0.1186, pruned_loss=0.00866, over 24240.00 frames. ], tot_loss[loss=0.06476, simple_loss=0.1177, pruned_loss=0.005931, over 3995919.29 frames. 
], batch size: 328, lr: 8.78e-03, grad_scale: 32.0 2024-03-16 03:13:03,118 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward3.out_whiten.whitening_limit, batch_count=60316.666666666664, ans=15.0 2024-03-16 03:13:36,739 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.888e+01 8.417e+01 1.067e+02 1.450e+02 2.294e+02, threshold=2.134e+02, percent-clipped=4.0 2024-03-16 03:13:46,971 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=60450.0, ans=0.125 2024-03-16 03:13:56,786 INFO [train_char.py:689] (0/2) Epoch 36, batch 400, loss[loss=0.06152, simple_loss=0.113, pruned_loss=0.005019, over 24456.00 frames. ], tot_loss[loss=0.0651, simple_loss=0.1182, pruned_loss=0.005982, over 4184823.99 frames. ], batch size: 165, lr: 8.77e-03, grad_scale: 32.0 2024-03-16 03:13:59,644 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.ff3_skip_rate, batch_count=60483.333333333336, ans=0.0 2024-03-16 03:14:00,899 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer2.prob, batch_count=60483.333333333336, ans=0.125 2024-03-16 03:14:05,362 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.conv_module1.whiten, num_groups=1, num_channels=512, metric=3.62 vs. limit=15.0 2024-03-16 03:14:17,543 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_skip_rate, batch_count=60516.666666666664, ans=0.0 2024-03-16 03:14:23,989 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_skip_rate, batch_count=60516.666666666664, ans=0.0 2024-03-16 03:14:35,267 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=60550.0, ans=0.0 2024-03-16 03:15:02,670 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=18.85 vs. limit=22.5 2024-03-16 03:15:04,157 INFO [train_char.py:689] (0/2) Epoch 36, batch 450, loss[loss=0.07518, simple_loss=0.1362, pruned_loss=0.00708, over 24015.00 frames. ], tot_loss[loss=0.06636, simple_loss=0.1206, pruned_loss=0.006083, over 4328806.01 frames. 
], batch size: 236, lr: 8.76e-03, grad_scale: 32.0 2024-03-16 03:15:31,677 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=60716.666666666664, ans=0.125 2024-03-16 03:15:40,566 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=60716.666666666664, ans=0.1 2024-03-16 03:15:44,336 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=60750.0, ans=0.125 2024-03-16 03:15:49,399 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.balancer.max_positive, batch_count=60750.0, ans=0.95 2024-03-16 03:15:50,393 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.868e+01 8.279e+01 1.038e+02 1.350e+02 2.212e+02, threshold=2.075e+02, percent-clipped=2.0 2024-03-16 03:15:53,045 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 03:15:56,802 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.prob, batch_count=60783.333333333336, ans=0.125 2024-03-16 03:16:11,421 INFO [train_char.py:689] (0/2) Epoch 36, batch 500, loss[loss=0.07418, simple_loss=0.1343, pruned_loss=0.007021, over 24238.00 frames. ], tot_loss[loss=0.06757, simple_loss=0.1229, pruned_loss=0.006134, over 4440484.14 frames. ], batch size: 212, lr: 8.75e-03, grad_scale: 32.0 2024-03-16 03:16:20,427 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-36.pt 2024-03-16 03:17:06,749 INFO [train_char.py:689] (0/2) Epoch 37, batch 0, loss[loss=0.067, simple_loss=0.123, pruned_loss=0.005504, over 24301.00 frames. ], tot_loss[loss=0.067, simple_loss=0.123, pruned_loss=0.005504, over 24301.00 frames. ], batch size: 267, lr: 8.63e-03, grad_scale: 32.0 2024-03-16 03:17:06,750 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 03:17:20,473 INFO [train_char.py:721] (0/2) Epoch 37, validation: loss=0.05978, simple_loss=0.1109, pruned_loss=0.004319, over 657665.00 frames. 2024-03-16 03:17:20,473 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 03:17:29,433 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=384, metric=10.62 vs. limit=15.0 2024-03-16 03:17:39,567 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=60873.333333333336, ans=0.125 2024-03-16 03:17:42,299 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.scale_min, batch_count=60873.333333333336, ans=0.2 2024-03-16 03:17:52,145 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass.scale_min, batch_count=60906.666666666664, ans=0.2 2024-03-16 03:18:19,906 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.balancer1.prob, batch_count=60973.333333333336, ans=0.125 2024-03-16 03:18:27,636 INFO [train_char.py:689] (0/2) Epoch 37, batch 50, loss[loss=0.05706, simple_loss=0.103, pruned_loss=0.00558, over 24135.00 frames. ], tot_loss[loss=0.06295, simple_loss=0.114, pruned_loss=0.005945, over 1078789.29 frames. 
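
Every "batch 0" entry pauses training for a full validation pass: the model is evaluated on a fixed dev set (the same 657665.00 frames every epoch), and the resulting loss is the number that tracks generalization across epochs, here staying close to 0.058-0.060 from epoch 35 onward. The generic shape of such a pass, with assumed field names and a stand-in forward call:

import torch

def compute_validation_loss(model, dev_loader, device) -> float:
    model.eval()
    weighted_sum, frames = 0.0, 0.0
    with torch.no_grad():
        for batch in dev_loader:
            feats = batch["features"].to(device)    # field name is an assumption
            loss, num_frames = model(feats, batch)  # stand-in for the real call
            weighted_sum += loss.item() * num_frames
            frames += num_frames
    model.train()
    return weighted_sum / frames
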
], batch size: 362, lr: 8.62e-03, grad_scale: 32.0 2024-03-16 03:18:28,791 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=144, metric=8.92 vs. limit=10.0 2024-03-16 03:18:43,671 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass.scale_min, batch_count=61006.666666666664, ans=0.2 2024-03-16 03:18:50,044 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module2.balancer1.prob, batch_count=61040.0, ans=0.125 2024-03-16 03:19:02,007 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.self_attn2.whiten, num_groups=1, num_channels=384, metric=16.35 vs. limit=22.5 2024-03-16 03:19:07,805 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.880e+01 8.751e+01 1.106e+02 1.447e+02 2.400e+02, threshold=2.212e+02, percent-clipped=3.0 2024-03-16 03:19:18,503 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=61106.666666666664, ans=0.1 2024-03-16 03:19:32,722 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer2.prob, batch_count=61140.0, ans=0.125 2024-03-16 03:19:37,788 INFO [train_char.py:689] (0/2) Epoch 37, batch 100, loss[loss=0.06771, simple_loss=0.1243, pruned_loss=0.005586, over 24214.00 frames. ], tot_loss[loss=0.06406, simple_loss=0.1167, pruned_loss=0.005689, over 1901774.12 frames. ], batch size: 311, lr: 8.61e-03, grad_scale: 32.0 2024-03-16 03:19:57,920 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.conv_module1.whiten, num_groups=1, num_channels=512, metric=3.44 vs. limit=15.0 2024-03-16 03:19:58,082 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=11.28 vs. limit=15.0 2024-03-16 03:20:03,964 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.balancer1.prob, batch_count=61240.0, ans=0.125 2024-03-16 03:20:15,087 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=61273.333333333336, ans=0.0 2024-03-16 03:20:42,523 INFO [train_char.py:689] (0/2) Epoch 37, batch 150, loss[loss=0.07484, simple_loss=0.1338, pruned_loss=0.007926, over 24109.00 frames. ], tot_loss[loss=0.06371, simple_loss=0.1165, pruned_loss=0.005452, over 2551220.12 frames. ], batch size: 279, lr: 8.60e-03, grad_scale: 32.0 2024-03-16 03:21:16,985 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.525e+01 8.338e+01 1.063e+02 1.477e+02 3.765e+02, threshold=2.125e+02, percent-clipped=4.0 2024-03-16 03:21:26,153 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=61440.0, ans=0.125 2024-03-16 03:21:41,555 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 03:21:50,333 INFO [train_char.py:689] (0/2) Epoch 37, batch 200, loss[loss=0.05629, simple_loss=0.09642, pruned_loss=0.008074, over 23797.00 frames. ], tot_loss[loss=0.06347, simple_loss=0.1163, pruned_loss=0.005307, over 3051462.93 frames. 
], batch size: 439, lr: 8.59e-03, grad_scale: 32.0 2024-03-16 03:21:56,700 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 03:22:57,629 INFO [train_char.py:689] (0/2) Epoch 37, batch 250, loss[loss=0.06686, simple_loss=0.1247, pruned_loss=0.004535, over 24105.00 frames. ], tot_loss[loss=0.06382, simple_loss=0.1167, pruned_loss=0.005487, over 3439139.38 frames. ], batch size: 188, lr: 8.58e-03, grad_scale: 32.0 2024-03-16 03:23:06,701 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer2.prob, batch_count=61673.333333333336, ans=0.125 2024-03-16 03:23:15,564 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass_mid.scale_min, batch_count=61706.666666666664, ans=0.2 2024-03-16 03:23:22,228 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=384, metric=21.04 vs. limit=22.5 2024-03-16 03:23:31,473 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.916e+01 8.382e+01 1.027e+02 1.348e+02 2.463e+02, threshold=2.054e+02, percent-clipped=8.0 2024-03-16 03:23:43,293 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.scale_min, batch_count=61773.333333333336, ans=0.2 2024-03-16 03:23:45,906 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=61773.333333333336, ans=0.0 2024-03-16 03:23:52,242 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass.scale_min, batch_count=61806.666666666664, ans=0.2 2024-03-16 03:24:00,724 INFO [train_char.py:689] (0/2) Epoch 37, batch 300, loss[loss=0.06826, simple_loss=0.1234, pruned_loss=0.006547, over 24261.00 frames. ], tot_loss[loss=0.06406, simple_loss=0.1171, pruned_loss=0.005529, over 3745535.57 frames. ], batch size: 296, lr: 8.57e-03, grad_scale: 32.0 2024-03-16 03:24:07,290 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-16 03:24:28,883 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=512, metric=4.28 vs. limit=15.0 2024-03-16 03:24:36,349 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=61906.666666666664, ans=0.1 2024-03-16 03:24:46,318 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=61940.0, ans=0.125 2024-03-16 03:24:47,653 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.balancer2.prob, batch_count=61940.0, ans=0.125 2024-03-16 03:25:07,185 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=61973.333333333336, ans=0.125 2024-03-16 03:25:09,639 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 03:25:10,696 INFO [train_char.py:689] (0/2) Epoch 37, batch 350, loss[loss=0.05891, simple_loss=0.1062, pruned_loss=0.00583, over 24156.00 frames. ], tot_loss[loss=0.06403, simple_loss=0.117, pruned_loss=0.005524, over 3985850.53 frames. 
], batch size: 344, lr: 8.56e-03, grad_scale: 32.0 2024-03-16 03:25:17,168 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_skip_rate, batch_count=62006.666666666664, ans=0.0 2024-03-16 03:25:43,816 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.924e+01 8.426e+01 1.161e+02 1.562e+02 3.148e+02, threshold=2.322e+02, percent-clipped=6.0 2024-03-16 03:25:47,912 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.hidden_balancer.prob, batch_count=62106.666666666664, ans=0.125 2024-03-16 03:26:16,506 INFO [train_char.py:689] (0/2) Epoch 37, batch 400, loss[loss=0.06041, simple_loss=0.1069, pruned_loss=0.006938, over 24089.00 frames. ], tot_loss[loss=0.06468, simple_loss=0.118, pruned_loss=0.005705, over 4173677.83 frames. ], batch size: 361, lr: 8.55e-03, grad_scale: 32.0 2024-03-16 03:26:36,808 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=62206.666666666664, ans=0.1 2024-03-16 03:26:41,799 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff3_skip_rate, batch_count=62240.0, ans=0.0 2024-03-16 03:26:55,673 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-16 03:26:56,932 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=62273.333333333336, ans=0.0 2024-03-16 03:27:21,618 INFO [train_char.py:689] (0/2) Epoch 37, batch 450, loss[loss=0.07128, simple_loss=0.1296, pruned_loss=0.006458, over 24072.00 frames. ], tot_loss[loss=0.06539, simple_loss=0.1191, pruned_loss=0.005815, over 4320056.75 frames. ], batch size: 199, lr: 8.54e-03, grad_scale: 32.0 2024-03-16 03:27:29,093 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=62340.0, ans=0.1 2024-03-16 03:27:40,238 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.balancer2.prob, batch_count=62373.333333333336, ans=0.125 2024-03-16 03:27:41,571 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=62373.333333333336, ans=0.1 2024-03-16 03:27:54,854 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.436e+01 9.101e+01 1.096e+02 1.426e+02 2.997e+02, threshold=2.191e+02, percent-clipped=2.0 2024-03-16 03:28:24,502 INFO [train_char.py:689] (0/2) Epoch 37, batch 500, loss[loss=0.07397, simple_loss=0.1324, pruned_loss=0.007766, over 24033.00 frames. ], tot_loss[loss=0.06631, simple_loss=0.121, pruned_loss=0.005808, over 4433939.57 frames. ], batch size: 250, lr: 8.53e-03, grad_scale: 32.0 2024-03-16 03:28:27,279 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer1.prob, batch_count=62506.666666666664, ans=0.125 2024-03-16 03:28:33,253 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-37.pt 2024-03-16 03:29:25,839 INFO [train_char.py:689] (0/2) Epoch 38, batch 0, loss[loss=0.06203, simple_loss=0.1167, pruned_loss=0.003674, over 24402.00 frames. ], tot_loss[loss=0.06203, simple_loss=0.1167, pruned_loss=0.003674, over 24402.00 frames. 
], batch size: 152, lr: 8.41e-03, grad_scale: 32.0 2024-03-16 03:29:25,840 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 03:29:39,844 INFO [train_char.py:721] (0/2) Epoch 38, validation: loss=0.05841, simple_loss=0.1082, pruned_loss=0.004321, over 657665.00 frames. 2024-03-16 03:29:39,845 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 03:29:45,493 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=62530.0, ans=0.1 2024-03-16 03:30:00,458 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.self_attn_weights.pos_emb_skip_rate, batch_count=62563.333333333336, ans=0.0 2024-03-16 03:30:09,735 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=62596.666666666664, ans=0.1 2024-03-16 03:30:50,499 INFO [train_char.py:689] (0/2) Epoch 38, batch 50, loss[loss=0.066, simple_loss=0.1208, pruned_loss=0.005586, over 24317.00 frames. ], tot_loss[loss=0.06264, simple_loss=0.1152, pruned_loss=0.005062, over 1079725.24 frames. ], batch size: 297, lr: 8.40e-03, grad_scale: 32.0 2024-03-16 03:31:11,049 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module1.balancer1.prob, batch_count=62730.0, ans=0.125 2024-03-16 03:31:12,840 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=6.84 vs. limit=10.0 2024-03-16 03:31:13,718 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=62730.0, ans=0.125 2024-03-16 03:31:20,020 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.727e+01 8.596e+01 1.170e+02 1.433e+02 3.372e+02, threshold=2.341e+02, percent-clipped=1.0 2024-03-16 03:31:36,107 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.nonlin_attention.whiten1.whitening_limit, batch_count=62796.666666666664, ans=10.0 2024-03-16 03:31:45,291 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=192, metric=11.02 vs. limit=15.0 2024-03-16 03:31:49,584 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward2.hidden_balancer.prob, batch_count=62830.0, ans=0.125 2024-03-16 03:31:53,313 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.balancer.min_positive, batch_count=62830.0, ans=0.05 2024-03-16 03:31:58,222 INFO [train_char.py:689] (0/2) Epoch 38, batch 100, loss[loss=0.06021, simple_loss=0.1083, pruned_loss=0.006056, over 24096.00 frames. ], tot_loss[loss=0.06178, simple_loss=0.1132, pruned_loss=0.005204, over 1908522.45 frames. ], batch size: 361, lr: 8.39e-03, grad_scale: 32.0 2024-03-16 03:32:12,988 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.ff3_skip_rate, batch_count=62896.666666666664, ans=0.0 2024-03-16 03:32:43,719 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.bypass.skip_rate, batch_count=62963.333333333336, ans=0.09899494936611666 2024-03-16 03:32:44,286 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=17.54 vs. 
limit=22.5 2024-03-16 03:33:02,750 INFO [train_char.py:689] (0/2) Epoch 38, batch 150, loss[loss=0.05377, simple_loss=0.1022, pruned_loss=0.002651, over 24286.00 frames. ], tot_loss[loss=0.06133, simple_loss=0.1125, pruned_loss=0.005068, over 2551771.20 frames. ], batch size: 146, lr: 8.38e-03, grad_scale: 32.0 2024-03-16 03:33:16,608 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer1.min_positive, batch_count=63030.0, ans=0.025 2024-03-16 03:33:32,248 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.556e+01 8.114e+01 1.022e+02 1.417e+02 2.954e+02, threshold=2.044e+02, percent-clipped=2.0 2024-03-16 03:33:36,374 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer1.prob, batch_count=63096.666666666664, ans=0.125 2024-03-16 03:33:40,334 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.balancer.min_positive, batch_count=63096.666666666664, ans=0.05 2024-03-16 03:34:07,865 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=63163.333333333336, ans=0.0 2024-03-16 03:34:10,832 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=11.49 vs. limit=15.0 2024-03-16 03:34:13,863 INFO [train_char.py:689] (0/2) Epoch 38, batch 200, loss[loss=0.06437, simple_loss=0.1189, pruned_loss=0.004921, over 24356.00 frames. ], tot_loss[loss=0.06218, simple_loss=0.114, pruned_loss=0.005163, over 3055163.14 frames. ], batch size: 158, lr: 8.38e-03, grad_scale: 32.0 2024-03-16 03:34:20,948 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.self_attn1.whiten, num_groups=1, num_channels=384, metric=15.33 vs. limit=22.5 2024-03-16 03:34:34,582 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=63230.0, ans=0.0 2024-03-16 03:34:38,464 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-16 03:34:53,683 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer2.prob, batch_count=63296.666666666664, ans=0.125 2024-03-16 03:34:57,522 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=63296.666666666664, ans=0.1 2024-03-16 03:34:59,925 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=63296.666666666664, ans=0.0 2024-03-16 03:34:59,946 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=63296.666666666664, ans=0.1 2024-03-16 03:35:13,512 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=7.61 vs. limit=10.0 2024-03-16 03:35:17,836 INFO [train_char.py:689] (0/2) Epoch 38, batch 250, loss[loss=0.07038, simple_loss=0.1278, pruned_loss=0.006459, over 24160.00 frames. ], tot_loss[loss=0.06278, simple_loss=0.1148, pruned_loss=0.00538, over 3444460.35 frames. 
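
The learning rate in these lines falls smoothly both within an epoch (8.41e-03 down through 8.38e-03 over the first 200 batches of epoch 38) and at each epoch boundary, the signature of a scheduler that discounts on batch count and epoch count jointly. Icefall's Eden scheduler is one published rule of that shape; the sketch below gives its commonly cited form, with the caveat that the variant and normalization used for this run may differ, so it is not expected to reproduce the logged values digit for digit:

def eden_lr(base_lr: float, batch: int, epoch: float,
            lr_batches: float, lr_epochs: float) -> float:
    """Eden-style decay: smooth power-law discounts in batch and epoch.
    lr_batches / lr_epochs set where each discount starts to bite."""
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor
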
], batch size: 188, lr: 8.37e-03, grad_scale: 32.0 2024-03-16 03:35:43,254 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.430e+01 9.189e+01 1.105e+02 1.475e+02 2.826e+02, threshold=2.209e+02, percent-clipped=4.0 2024-03-16 03:35:44,792 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-16 03:35:49,957 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=63430.0, ans=0.1 2024-03-16 03:36:03,767 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_skip_rate, batch_count=63463.333333333336, ans=0.0 2024-03-16 03:36:14,052 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=63496.666666666664, ans=0.125 2024-03-16 03:36:25,293 INFO [train_char.py:689] (0/2) Epoch 38, batch 300, loss[loss=0.05992, simple_loss=0.1028, pruned_loss=0.008538, over 23951.00 frames. ], tot_loss[loss=0.06307, simple_loss=0.1153, pruned_loss=0.00541, over 3753562.55 frames. ], batch size: 407, lr: 8.36e-03, grad_scale: 32.0 2024-03-16 03:36:31,806 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=9.57 vs. limit=15.0 2024-03-16 03:36:53,503 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=63596.666666666664, ans=0.0 2024-03-16 03:37:27,142 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.out_combiner.scale_min, batch_count=63663.333333333336, ans=0.2 2024-03-16 03:37:30,832 INFO [train_char.py:689] (0/2) Epoch 38, batch 350, loss[loss=0.07459, simple_loss=0.1379, pruned_loss=0.005664, over 24102.00 frames. ], tot_loss[loss=0.06393, simple_loss=0.117, pruned_loss=0.005448, over 3995338.47 frames. ], batch size: 236, lr: 8.35e-03, grad_scale: 32.0 2024-03-16 03:37:54,793 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass.skip_rate, batch_count=63763.333333333336, ans=0.07 2024-03-16 03:37:55,636 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 4.675e+01 7.628e+01 9.692e+01 1.330e+02 2.100e+02, threshold=1.938e+02, percent-clipped=0.0 2024-03-16 03:38:20,072 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=5.69 vs. limit=6.0 2024-03-16 03:38:23,995 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=7.64 vs. limit=15.0 2024-03-16 03:38:30,307 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=8.93 vs. limit=15.0 2024-03-16 03:38:36,816 INFO [train_char.py:689] (0/2) Epoch 38, batch 400, loss[loss=0.06415, simple_loss=0.1183, pruned_loss=0.005017, over 24219.00 frames. ], tot_loss[loss=0.06413, simple_loss=0.1173, pruned_loss=0.00546, over 4183619.79 frames. 
], batch size: 311, lr: 8.34e-03, grad_scale: 32.0 2024-03-16 03:38:43,565 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=63863.333333333336, ans=0.125 2024-03-16 03:38:47,246 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.out_combiner.scale_min, batch_count=63863.333333333336, ans=0.2 2024-03-16 03:38:57,322 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.ff2_skip_rate, batch_count=63896.666666666664, ans=0.0 2024-03-16 03:39:22,297 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=63963.333333333336, ans=0.1 2024-03-16 03:39:24,839 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer1.prob, batch_count=63963.333333333336, ans=0.125 2024-03-16 03:39:30,051 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.out_combiner.scale_min, batch_count=63996.666666666664, ans=0.2 2024-03-16 03:39:31,113 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_skip_rate, batch_count=63996.666666666664, ans=0.0 2024-03-16 03:39:37,717 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=12.73 vs. limit=22.5 2024-03-16 03:39:40,189 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=2.39 vs. limit=12.0 2024-03-16 03:39:42,193 INFO [train_char.py:689] (0/2) Epoch 38, batch 450, loss[loss=0.05773, simple_loss=0.1052, pruned_loss=0.005114, over 24128.00 frames. ], tot_loss[loss=0.06461, simple_loss=0.1181, pruned_loss=0.005543, over 4327489.12 frames. ], batch size: 362, lr: 8.33e-03, grad_scale: 64.0 2024-03-16 03:39:54,325 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.whiten, num_groups=1, num_channels=512, metric=7.26 vs. 
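
grad_scale in the batch lines is the mixed-precision loss scale: it is halved whenever scaled gradients overflow and creeps back up after a run of finite steps, which is why it moves through 16.0, 32.0, and 64.0 over these epochs rather than staying fixed. The standard torch.cuda.amp pattern behind such a value (icefall wraps its own scaler around training, so this shows only the generic mechanics):

import torch

scaler = torch.cuda.amp.GradScaler(init_scale=16.0, growth_interval=2000)

def train_step(model, optimizer, batch, device):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(batch.to(device))        # stand-in forward returning a scalar
    scaler.scale(loss).backward()             # backward on the scaled loss
    scaler.step(optimizer)                    # unscales; skips the step on inf/nan
    scaler.update()                           # shrink on overflow, grow when stable
    return loss.detach(), scaler.get_scale()  # get_scale() is the logged grad_scale
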
limit=12.0 2024-03-16 03:40:03,263 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.self_attn1.whiten.whitening_limit, batch_count=64063.333333333336, ans=22.5 2024-03-16 03:40:05,254 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.hidden_balancer.prob, batch_count=64063.333333333336, ans=0.125 2024-03-16 03:40:07,421 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.187e+01 8.210e+01 9.441e+01 1.316e+02 3.422e+02, threshold=1.888e+02, percent-clipped=6.0 2024-03-16 03:40:07,669 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.skip_rate, batch_count=64096.666666666664, ans=0.07 2024-03-16 03:40:23,212 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_skip_rate, batch_count=64130.0, ans=0.0 2024-03-16 03:40:28,900 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass_mid.scale_min, batch_count=64130.0, ans=0.2 2024-03-16 03:40:31,465 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.bypass.scale_min, batch_count=64130.0, ans=0.2 2024-03-16 03:40:33,979 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.balancer1.prob, batch_count=64163.333333333336, ans=0.125 2024-03-16 03:40:40,245 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.scale_min, batch_count=64163.333333333336, ans=0.2 2024-03-16 03:40:41,501 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.attention_skip_rate, batch_count=64163.333333333336, ans=0.0 2024-03-16 03:40:46,344 INFO [train_char.py:689] (0/2) Epoch 38, batch 500, loss[loss=0.06951, simple_loss=0.1296, pruned_loss=0.0047, over 24232.00 frames. ], tot_loss[loss=0.06597, simple_loss=0.1208, pruned_loss=0.005587, over 4438610.11 frames. ], batch size: 224, lr: 8.32e-03, grad_scale: 64.0 2024-03-16 03:40:55,304 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-38.pt 2024-03-16 03:41:47,680 INFO [train_char.py:689] (0/2) Epoch 39, batch 0, loss[loss=0.06453, simple_loss=0.1174, pruned_loss=0.005856, over 24372.00 frames. ], tot_loss[loss=0.06453, simple_loss=0.1174, pruned_loss=0.005856, over 24372.00 frames. ], batch size: 158, lr: 8.21e-03, grad_scale: 64.0 2024-03-16 03:41:47,681 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 03:41:57,104 INFO [zipformer.py:1858] (0/2) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([5.2793, 5.4520, 5.3936, 5.0813], device='cuda:0') 2024-03-16 03:42:00,373 INFO [zipformer.py:1858] (0/2) name=encoder.encoders.2.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([4.5419, 4.5081, 3.8840, 3.7179], device='cuda:0') 2024-03-16 03:42:01,449 INFO [train_char.py:721] (0/2) Epoch 39, validation: loss=0.05877, simple_loss=0.1092, pruned_loss=0.004156, over 657665.00 frames. 2024-03-16 03:42:01,450 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 03:42:06,515 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=13.53 vs. 
limit=15.0 2024-03-16 03:42:18,184 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.nonlin_attention.balancer.prob, batch_count=64253.333333333336, ans=0.125 2024-03-16 03:42:28,872 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.2.self_attn_weights, loss-sum=0.000e+00 2024-03-16 03:42:43,396 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.scale_min, batch_count=64320.0, ans=0.2 2024-03-16 03:42:44,883 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.balancer1.prob, batch_count=64320.0, ans=0.125 2024-03-16 03:42:49,017 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.out_combiner.scale_min, batch_count=64320.0, ans=0.2 2024-03-16 03:42:58,125 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=64320.0, ans=0.0 2024-03-16 03:43:11,296 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=18.87 vs. limit=22.5 2024-03-16 03:43:17,016 INFO [train_char.py:689] (0/2) Epoch 39, batch 50, loss[loss=0.07104, simple_loss=0.1324, pruned_loss=0.004827, over 24197.00 frames. ], tot_loss[loss=0.0631, simple_loss=0.1162, pruned_loss=0.005011, over 1081595.81 frames. ], batch size: 212, lr: 8.20e-03, grad_scale: 64.0 2024-03-16 03:43:27,889 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.ff2_skip_rate, batch_count=64386.666666666664, ans=0.0 2024-03-16 03:43:33,969 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.158e+01 8.215e+01 9.996e+01 1.469e+02 2.861e+02, threshold=1.999e+02, percent-clipped=8.0 2024-03-16 03:43:48,859 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer2.prob, batch_count=64453.333333333336, ans=0.125 2024-03-16 03:44:18,818 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=384, metric=15.19 vs. limit=22.5 2024-03-16 03:44:22,053 INFO [train_char.py:689] (0/2) Epoch 39, batch 100, loss[loss=0.0732, simple_loss=0.1335, pruned_loss=0.006469, over 24008.00 frames. ], tot_loss[loss=0.06302, simple_loss=0.116, pruned_loss=0.005, over 1907429.52 frames. ], batch size: 250, lr: 8.19e-03, grad_scale: 64.0 2024-03-16 03:45:00,956 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module1.balancer2.prob, batch_count=64620.0, ans=0.125 2024-03-16 03:45:28,077 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=64686.666666666664, ans=0.1 2024-03-16 03:45:31,652 INFO [train_char.py:689] (0/2) Epoch 39, batch 150, loss[loss=0.04536, simple_loss=0.07738, pruned_loss=0.006666, over 22713.00 frames. ], tot_loss[loss=0.06237, simple_loss=0.1145, pruned_loss=0.005109, over 2539587.24 frames. 
], batch size: 483, lr: 8.18e-03, grad_scale: 64.0 2024-03-16 03:45:43,312 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=64720.0, ans=0.0 2024-03-16 03:45:44,706 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.bypass_mid.scale_min, batch_count=64720.0, ans=0.2 2024-03-16 03:45:52,895 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.482e+01 7.912e+01 1.019e+02 1.370e+02 2.594e+02, threshold=2.039e+02, percent-clipped=7.0 2024-03-16 03:46:06,057 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.balancer1.prob, batch_count=64786.666666666664, ans=0.125 2024-03-16 03:46:16,346 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.ff3_skip_rate, batch_count=64820.0, ans=0.0 2024-03-16 03:46:40,251 INFO [train_char.py:689] (0/2) Epoch 39, batch 200, loss[loss=0.05702, simple_loss=0.1031, pruned_loss=0.005451, over 24297.00 frames. ], tot_loss[loss=0.06291, simple_loss=0.1157, pruned_loss=0.005065, over 3048109.68 frames. ], batch size: 140, lr: 8.17e-03, grad_scale: 64.0 2024-03-16 03:46:56,339 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 03:47:25,511 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass.skip_rate, batch_count=64986.666666666664, ans=0.04949747468305833 2024-03-16 03:47:48,572 INFO [train_char.py:689] (0/2) Epoch 39, batch 250, loss[loss=0.06291, simple_loss=0.1164, pruned_loss=0.004697, over 24414.00 frames. ], tot_loss[loss=0.06299, simple_loss=0.1156, pruned_loss=0.005185, over 3437375.72 frames. ], batch size: 172, lr: 8.16e-03, grad_scale: 64.0 2024-03-16 03:48:05,231 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.308e+01 8.296e+01 1.001e+02 1.321e+02 2.719e+02, threshold=2.002e+02, percent-clipped=3.0 2024-03-16 03:48:13,335 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.balancer2.prob, batch_count=65120.0, ans=0.125 2024-03-16 03:48:23,901 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer1.prob, batch_count=65120.0, ans=0.125 2024-03-16 03:48:30,061 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer2.min_abs, batch_count=65153.333333333336, ans=0.5 2024-03-16 03:48:34,168 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.conv_module2.whiten, num_groups=1, num_channels=384, metric=3.79 vs. limit=15.0 2024-03-16 03:48:54,310 INFO [train_char.py:689] (0/2) Epoch 39, batch 300, loss[loss=0.06466, simple_loss=0.1233, pruned_loss=0.002997, over 24194.00 frames. ], tot_loss[loss=0.06297, simple_loss=0.1155, pruned_loss=0.005244, over 3747746.68 frames. ], batch size: 189, lr: 8.15e-03, grad_scale: 64.0 2024-03-16 03:48:54,589 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer2.min_abs, batch_count=65220.0, ans=0.5 2024-03-16 03:48:59,420 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=65220.0, ans=0.125 2024-03-16 03:49:01,388 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=5.70 vs. 
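
The balancer fields logged throughout (min_positive=0.025 or 0.05, max_positive=0.95, min_abs=0.5, max_abs=10.0, prob) define per-channel constraints on activation statistics: the fraction of positive values and the typical magnitude should stay inside a band, and with some probability the balancer perturbs gradients when they do not. Reproducing the gradient trick is out of scope here, but the statistics being policed are simple to compute; the thresholds below are examples taken from nearby log entries, and the rest is a sketch:

import torch

def balancer_stats(x: torch.Tensor, channel_dim: int = -1):
    """Per-channel fraction of positive values and mean absolute value,
    the two quantities the balancer constrains."""
    num_channels = x.shape[channel_dim]
    x = x.transpose(channel_dim, -1).reshape(-1, num_channels)
    frac_positive = (x > 0.0).float().mean(dim=0)
    mean_abs = x.abs().mean(dim=0)
    return frac_positive, mean_abs

def out_of_band(frac_positive, mean_abs, min_positive=0.05, max_positive=0.95,
                min_abs=0.5, max_abs=10.0):
    return ((frac_positive < min_positive) | (frac_positive > max_positive)
            | (mean_abs < min_abs) | (mean_abs > max_abs))
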
2024-03-16 03:49:03,118 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer2.prob, batch_count=65220.0, ans=0.125
2024-03-16 03:49:37,660 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.attention_skip_rate, batch_count=65320.0, ans=0.0
2024-03-16 03:49:37,744 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer2.prob, batch_count=65320.0, ans=0.125
2024-03-16 03:49:39,140 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.bypass.scale_min, batch_count=65320.0, ans=0.2
2024-03-16 03:49:41,396 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.attention_skip_rate, batch_count=65320.0, ans=0.0
2024-03-16 03:49:49,860 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_skip_rate, batch_count=65353.333333333336, ans=0.0
2024-03-16 03:49:59,806 INFO [train_char.py:689] (0/2) Epoch 39, batch 350, loss[loss=0.05882, simple_loss=0.1069, pruned_loss=0.005344, over 24164.00 frames. ], tot_loss[loss=0.06316, simple_loss=0.1159, pruned_loss=0.005228, over 3988688.54 frames. ], batch size: 344, lr: 8.14e-03, grad_scale: 32.0
2024-03-16 03:50:03,089 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=512, metric=13.13 vs. limit=22.5
2024-03-16 03:50:03,844 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_skip_rate, batch_count=65386.666666666664, ans=0.0
2024-03-16 03:50:06,488 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module2.balancer2.prob, batch_count=65386.666666666664, ans=0.125
2024-03-16 03:50:14,992 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=65420.0, ans=0.1
2024-03-16 03:50:19,221 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=4.56 vs. limit=12.0
2024-03-16 03:50:19,590 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.214e+01 8.670e+01 1.064e+02 1.435e+02 2.767e+02, threshold=2.127e+02, percent-clipped=6.0
2024-03-16 03:50:19,891 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_skip_rate, batch_count=65420.0, ans=0.0
2024-03-16 03:50:33,483 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.balancer1.prob, batch_count=65453.333333333336, ans=0.125
2024-03-16 03:50:44,440 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.2.self_attn_weights, loss-sum=0.000e+00
2024-03-16 03:50:55,486 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=65520.0, ans=0.125
2024-03-16 03:51:04,144 INFO [train_char.py:689] (0/2) Epoch 39, batch 400, loss[loss=0.06359, simple_loss=0.1169, pruned_loss=0.005168, over 24434.00 frames. ], tot_loss[loss=0.06366, simple_loss=0.1169, pruned_loss=0.005219, over 4178815.06 frames. ], batch size: 165, lr: 8.13e-03, grad_scale: 32.0
2024-03-16 03:51:21,499 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.23 vs. limit=6.0
2024-03-16 03:51:30,997 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=65620.0, ans=0.1
2024-03-16 03:51:52,166 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff3_skip_rate, batch_count=65653.33333333333, ans=0.0
2024-03-16 03:51:59,622 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer2.prob, batch_count=65686.66666666667, ans=0.125
2024-03-16 03:52:06,744 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.33 vs. limit=6.0
2024-03-16 03:52:09,729 INFO [train_char.py:689] (0/2) Epoch 39, batch 450, loss[loss=0.06724, simple_loss=0.126, pruned_loss=0.004238, over 24118.00 frames. ], tot_loss[loss=0.06465, simple_loss=0.1188, pruned_loss=0.005229, over 4326251.19 frames. ], batch size: 251, lr: 8.13e-03, grad_scale: 32.0
2024-03-16 03:52:09,998 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer2.min_abs, batch_count=65720.0, ans=0.5
2024-03-16 03:52:13,746 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.ff3_skip_rate, batch_count=65720.0, ans=0.0
2024-03-16 03:52:28,201 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.763e+01 7.580e+01 9.089e+01 1.133e+02 2.105e+02, threshold=1.818e+02, percent-clipped=0.0
2024-03-16 03:52:28,884 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.whiten_keys.whitening_limit, batch_count=65753.33333333333, ans=6.0
2024-03-16 03:52:29,674 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer2.prob, batch_count=65753.33333333333, ans=0.125
2024-03-16 03:52:41,157 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=4.60 vs. limit=10.0
2024-03-16 03:52:52,765 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=65820.0, ans=0.1
2024-03-16 03:52:55,201 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=65820.0, ans=0.1
2024-03-16 03:53:13,559 INFO [train_char.py:689] (0/2) Epoch 39, batch 500, loss[loss=0.06983, simple_loss=0.1265, pruned_loss=0.00656, over 24125.00 frames. ], tot_loss[loss=0.06552, simple_loss=0.1204, pruned_loss=0.005294, over 4439594.94 frames. ], batch size: 188, lr: 8.12e-03, grad_scale: 32.0
2024-03-16 03:53:22,394 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-39.pt
2024-03-16 03:54:14,410 INFO [train_char.py:689] (0/2) Epoch 40, batch 0, loss[loss=0.05782, simple_loss=0.106, pruned_loss=0.004823, over 24286.00 frames. ], tot_loss[loss=0.05782, simple_loss=0.106, pruned_loss=0.004823, over 24286.00 frames. ], batch size: 328, lr: 8.01e-03, grad_scale: 32.0
2024-03-16 03:54:14,411 INFO [train_char.py:712] (0/2) Computing validation loss
2024-03-16 03:54:28,174 INFO [train_char.py:721] (0/2) Epoch 40, validation: loss=0.0578, simple_loss=0.1073, pruned_loss=0.004123, over 657665.00 frames.
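[Editor's note] Most of the scaling.py:214 entries are ScheduledFloat values: module hyperparameters (skip rates, balancer probabilities, dropout rates, bypass scale floors) that are functions of batch_count, with ans= the value in force at that point. A minimal sketch of a piecewise-linear schedule of this kind follows; the breakpoints in the example are invented, and this is an illustration rather than the actual scaling.ScheduledFloat implementation.

class PiecewiseLinear:
    """Interpolate linearly between (batch_count, value) breakpoints."""
    def __init__(self, *points):
        self.points = sorted(points)  # e.g. (0.0, 0.5), (20000.0, 0.0)

    def __call__(self, batch_count):
        pts = self.points
        if batch_count <= pts[0][0]:
            return pts[0][1]
        if batch_count >= pts[-1][0]:
            return pts[-1][1]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= batch_count <= x1:
                t = (batch_count - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)

# Invented breakpoints: a skip rate that has decayed to zero by batch 20000,
# matching the shape (not the actual schedule) of the ans=0.0 entries above.
conv_skip_rate = PiecewiseLinear((0.0, 0.5), (20000.0, 0.0))
print(conv_skip_rate(64320.0))  # -> 0.0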
2024-03-16 03:54:28,175 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB
2024-03-16 03:54:55,176 INFO [scaling.py:1023] (0/2) Whitening: name=encoder_embed.convnext.out_whiten, num_groups=1, num_channels=128, metric=4.21 vs. limit=5.0
2024-03-16 03:55:13,535 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_skip_rate, batch_count=66010.0, ans=0.0
2024-03-16 03:55:22,996 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=66010.0, ans=0.1
2024-03-16 03:55:25,715 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass_mid.scale_min, batch_count=66043.33333333333, ans=0.2
2024-03-16 03:55:38,197 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=8.03 vs. limit=15.0
2024-03-16 03:55:38,841 INFO [train_char.py:689] (0/2) Epoch 40, batch 50, loss[loss=0.07318, simple_loss=0.1347, pruned_loss=0.005815, over 24133.00 frames. ], tot_loss[loss=0.06329, simple_loss=0.1163, pruned_loss=0.005132, over 1085634.94 frames. ], batch size: 251, lr: 8.00e-03, grad_scale: 32.0
2024-03-16 03:55:44,452 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.skip_rate, batch_count=66076.66666666667, ans=0.035
2024-03-16 03:55:48,243 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.621e+01 7.695e+01 9.241e+01 1.128e+02 2.685e+02, threshold=1.848e+02, percent-clipped=6.0
2024-03-16 03:56:20,147 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=66143.33333333333, ans=0.1
2024-03-16 03:56:28,212 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=5.96 vs. limit=15.0
2024-03-16 03:56:49,936 INFO [train_char.py:689] (0/2) Epoch 40, batch 100, loss[loss=0.06609, simple_loss=0.1208, pruned_loss=0.005686, over 24382.00 frames. ], tot_loss[loss=0.06274, simple_loss=0.1152, pruned_loss=0.005125, over 1910853.54 frames. ], batch size: 172, lr: 7.99e-03, grad_scale: 32.0
2024-03-16 03:56:59,116 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=66243.33333333333, ans=0.0
2024-03-16 03:57:17,976 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=66310.0, ans=0.125
2024-03-16 03:57:56,194 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=66376.66666666667, ans=0.1
2024-03-16 03:57:58,487 INFO [train_char.py:689] (0/2) Epoch 40, batch 150, loss[loss=0.0638, simple_loss=0.1189, pruned_loss=0.00437, over 24338.00 frames. ], tot_loss[loss=0.06271, simple_loss=0.1148, pruned_loss=0.005302, over 2552731.90 frames. ], batch size: 180, lr: 7.99e-03, grad_scale: 32.0
2024-03-16 03:57:58,832 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00
2024-03-16 03:58:07,371 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.073e+01 8.361e+01 1.025e+02 1.396e+02 3.471e+02, threshold=2.051e+02, percent-clipped=12.0
2024-03-16 03:59:00,754 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=384, metric=19.01 vs. limit=22.5
2024-03-16 03:59:02,555 INFO [train_char.py:689] (0/2) Epoch 40, batch 200, loss[loss=0.06905, simple_loss=0.1283, pruned_loss=0.00491, over 24081.00 frames. ], tot_loss[loss=0.06235, simple_loss=0.1141, pruned_loss=0.00528, over 3054745.04 frames. ], batch size: 199, lr: 7.98e-03, grad_scale: 32.0
2024-03-16 03:59:09,240 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer2.min_abs, batch_count=66576.66666666667, ans=0.5
2024-03-16 03:59:18,905 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=66610.0, ans=0.1
2024-03-16 03:59:22,638 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=66610.0, ans=0.125
2024-03-16 03:59:30,472 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.nonlin_attention.balancer.min_positive, batch_count=66610.0, ans=0.05
2024-03-16 03:59:40,641 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/checkpoint-20000.pt
2024-03-16 03:59:43,922 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=66643.33333333333, ans=0.0
2024-03-16 03:59:49,056 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.convnext.out_balancer.prob, batch_count=66676.66666666667, ans=0.125
2024-03-16 04:00:11,989 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer1.prob, batch_count=66710.0, ans=0.125
2024-03-16 04:00:12,044 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module2.balancer2.prob, batch_count=66710.0, ans=0.125
2024-03-16 04:00:14,282 INFO [train_char.py:689] (0/2) Epoch 40, batch 250, loss[loss=0.0661, simple_loss=0.1205, pruned_loss=0.005866, over 24263.00 frames. ], tot_loss[loss=0.06252, simple_loss=0.1148, pruned_loss=0.005128, over 3446839.12 frames. ], batch size: 296, lr: 7.97e-03, grad_scale: 32.0
2024-03-16 04:00:15,899 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module1.balancer1.prob, batch_count=66743.33333333333, ans=0.125
2024-03-16 04:00:23,209 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 4.667e+01 8.395e+01 9.941e+01 1.251e+02 2.533e+02, threshold=1.988e+02, percent-clipped=4.0
2024-03-16 04:00:37,443 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer1.prob, batch_count=66776.66666666667, ans=0.125
2024-03-16 04:00:48,214 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.self_attn1.whiten, num_groups=1, num_channels=192, metric=12.59 vs. limit=22.5
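[Editor's note] The scaling.py:1023 Whitening entries report how far a module's output covariance is from white, printing the measured metric against that module's limit. A hedged sketch of one plausible such metric is below: the ratio of the mean squared covariance eigenvalue to the squared mean eigenvalue, which is 1.0 for perfectly white features and grows as variance concentrates in few directions. This is an assumption about the metric's form; the real scaling.py computation may differ.

import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    # x: (num_frames, num_channels); channels split into num_groups groups.
    n, c = x.shape
    d = c // num_groups
    x = x.reshape(n, num_groups, d).transpose(0, 1)   # (groups, frames, d)
    x = x - x.mean(dim=1, keepdim=True)
    cov = torch.matmul(x.transpose(1, 2), x) / n       # per-group covariance
    mean_eig = torch.diagonal(cov, dim1=1, dim2=2).mean(dim=1)   # tr(C)/d
    mean_sq_eig = (cov ** 2).sum(dim=(1, 2)) / d                  # tr(C^2)/d
    # 1.0 for a perfectly white covariance, larger otherwise.
    return (mean_sq_eig / mean_eig ** 2).mean().item()

metric = whitening_metric(torch.randn(1000, 384), num_groups=1)
print(f"metric={metric:.2f} vs. limit=22.5")  # close to 1.0 for white noise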
2024-03-16 04:01:00,236 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.self_attn_weights.pos_emb_skip_rate, batch_count=66843.33333333333, ans=0.0
2024-03-16 04:01:21,887 INFO [train_char.py:689] (0/2) Epoch 40, batch 300, loss[loss=0.05289, simple_loss=0.1009, pruned_loss=0.002433, over 24227.00 frames. ], tot_loss[loss=0.06286, simple_loss=0.1154, pruned_loss=0.005146, over 3754901.05 frames. ], batch size: 122, lr: 7.96e-03, grad_scale: 32.0
2024-03-16 04:01:34,627 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_skip_rate, batch_count=66943.33333333333, ans=0.0
2024-03-16 04:01:35,056 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=8.34 vs. limit=15.0
2024-03-16 04:02:10,354 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.conv_module1.whiten, num_groups=1, num_channels=512, metric=3.78 vs. limit=15.0
2024-03-16 04:02:21,569 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.conv_module1.whiten, num_groups=1, num_channels=512, metric=3.35 vs. limit=15.0
2024-03-16 04:02:27,347 INFO [train_char.py:689] (0/2) Epoch 40, batch 350, loss[loss=0.0695, simple_loss=0.1228, pruned_loss=0.008101, over 24372.00 frames. ], tot_loss[loss=0.06297, simple_loss=0.1157, pruned_loss=0.005106, over 3995933.51 frames. ], batch size: 158, lr: 7.95e-03, grad_scale: 32.0
2024-03-16 04:02:28,846 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.ff2_skip_rate, batch_count=67076.66666666667, ans=0.0
2024-03-16 04:02:36,129 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.888e+01 8.606e+01 1.082e+02 1.293e+02 2.282e+02, threshold=2.165e+02, percent-clipped=4.0
2024-03-16 04:03:01,194 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=67143.33333333333, ans=0.125
2024-03-16 04:03:08,675 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=67176.66666666667, ans=0.125
2024-03-16 04:03:08,675 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.balancer.min_positive, batch_count=67176.66666666667, ans=0.05
2024-03-16 04:03:09,766 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.balancer2.prob, batch_count=67176.66666666667, ans=0.125
2024-03-16 04:03:34,231 INFO [train_char.py:689] (0/2) Epoch 40, batch 400, loss[loss=0.05873, simple_loss=0.108, pruned_loss=0.004747, over 24174.00 frames. ], tot_loss[loss=0.06385, simple_loss=0.1173, pruned_loss=0.005186, over 4183789.74 frames. ], batch size: 344, lr: 7.94e-03, grad_scale: 32.0
2024-03-16 04:03:35,648 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=67243.33333333333, ans=0.0
2024-03-16 04:03:40,508 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.attention_skip_rate, batch_count=67243.33333333333, ans=0.0
2024-03-16 04:03:52,656 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.out_combiner.scale_min, batch_count=67276.66666666667, ans=0.2
2024-03-16 04:04:25,441 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff3_skip_rate, batch_count=67376.66666666667, ans=0.0
2024-03-16 04:04:37,881 INFO [train_char.py:689] (0/2) Epoch 40, batch 450, loss[loss=0.07322, simple_loss=0.1336, pruned_loss=0.006444, over 24177.00 frames. ], tot_loss[loss=0.06413, simple_loss=0.1176, pruned_loss=0.005318, over 4329111.29 frames. ], batch size: 251, lr: 7.93e-03, grad_scale: 32.0
2024-03-16 04:04:47,923 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.029e+01 8.375e+01 1.018e+02 1.239e+02 2.132e+02, threshold=2.036e+02, percent-clipped=0.0
2024-03-16 04:04:48,692 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=7.91 vs. limit=15.0
2024-03-16 04:05:13,840 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff2_skip_rate, batch_count=67476.66666666667, ans=0.0
2024-03-16 04:05:23,991 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.out_combiner.scale_min, batch_count=67510.0, ans=0.2
2024-03-16 04:05:42,052 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer1.prob, batch_count=67576.66666666667, ans=0.125
2024-03-16 04:05:43,139 INFO [train_char.py:689] (0/2) Epoch 40, batch 500, loss[loss=0.07113, simple_loss=0.1314, pruned_loss=0.005446, over 24098.00 frames. ], tot_loss[loss=0.06534, simple_loss=0.12, pruned_loss=0.005324, over 4441080.01 frames. ], batch size: 223, lr: 7.92e-03, grad_scale: 32.0
2024-03-16 04:05:52,142 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-40.pt
2024-03-16 04:06:41,617 INFO [train_char.py:689] (0/2) Epoch 41, batch 0, loss[loss=0.06842, simple_loss=0.1271, pruned_loss=0.004863, over 24244.00 frames. ], tot_loss[loss=0.06842, simple_loss=0.1271, pruned_loss=0.004863, over 24244.00 frames. ], batch size: 212, lr: 7.82e-03, grad_scale: 32.0
2024-03-16 04:06:41,618 INFO [train_char.py:712] (0/2) Computing validation loss
2024-03-16 04:06:55,495 INFO [train_char.py:721] (0/2) Epoch 41, validation: loss=0.05831, simple_loss=0.1085, pruned_loss=0.004049, over 657665.00 frames.
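[Editor's note] At each epoch boundary the log switches to "Computing validation loss" and prints a per-frame loss over the same fixed dev set every time (hence the constant "over 657665.00 frames"). A minimal sketch of such a pass is below; model, dev_loader and compute_loss are placeholder names for illustration, not the actual train_char.py API.

import torch

@torch.no_grad()
def validate(model, dev_loader, compute_loss):
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    for batch in dev_loader:
        loss, num_frames = compute_loss(model, batch)  # summed loss, frame count
        tot_loss += loss.item()
        tot_frames += num_frames
    model.train()
    return tot_loss / tot_frames  # per-frame loss, as printed in the log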
2024-03-16 04:06:55,496 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB
2024-03-16 04:06:55,864 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass_mid.scale_min, batch_count=67600.0, ans=0.2
2024-03-16 04:07:10,620 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=67600.0, ans=0.1
2024-03-16 04:07:14,834 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.attention_skip_rate, batch_count=67633.33333333333, ans=0.0
2024-03-16 04:07:32,100 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass.skip_rate, batch_count=67666.66666666667, ans=0.04949747468305833
2024-03-16 04:07:37,287 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff2_skip_rate, batch_count=67666.66666666667, ans=0.0
2024-03-16 04:07:54,917 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.attention_skip_rate, batch_count=67733.33333333333, ans=0.0
2024-03-16 04:08:12,010 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.601e+01 7.689e+01 9.251e+01 1.241e+02 2.488e+02, threshold=1.850e+02, percent-clipped=3.0
2024-03-16 04:08:12,038 INFO [train_char.py:689] (0/2) Epoch 41, batch 50, loss[loss=0.05511, simple_loss=0.1033, pruned_loss=0.003468, over 24406.00 frames. ], tot_loss[loss=0.06162, simple_loss=0.1132, pruned_loss=0.005039, over 1089439.99 frames. ], batch size: 152, lr: 7.82e-03, grad_scale: 32.0
2024-03-16 04:08:28,887 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=67800.0, ans=0.0
2024-03-16 04:08:46,900 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=67833.33333333333, ans=0.125
2024-03-16 04:08:53,107 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module2.balancer1.prob, batch_count=67866.66666666667, ans=0.125
2024-03-16 04:09:07,850 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=67866.66666666667, ans=0.125
2024-03-16 04:09:10,446 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=67900.0, ans=0.0
2024-03-16 04:09:22,846 INFO [train_char.py:689] (0/2) Epoch 41, batch 100, loss[loss=0.06573, simple_loss=0.1185, pruned_loss=0.006457, over 24213.00 frames. ], tot_loss[loss=0.06173, simple_loss=0.1131, pruned_loss=0.00516, over 1916936.39 frames. ], batch size: 311, lr: 7.81e-03, grad_scale: 32.0
2024-03-16 04:09:36,126 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.balancer1.prob, batch_count=67966.66666666667, ans=0.125
2024-03-16 04:09:37,354 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.balancer1.prob, batch_count=67966.66666666667, ans=0.125
2024-03-16 04:09:44,885 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.ff3_skip_rate, batch_count=67966.66666666667, ans=0.0
2024-03-16 04:09:46,224 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module1.balancer1.prob, batch_count=67966.66666666667, ans=0.125
2024-03-16 04:09:50,437 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.prob, batch_count=68000.0, ans=0.125
2024-03-16 04:09:51,699 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=68000.0, ans=0.1
2024-03-16 04:09:57,145 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff2_skip_rate, batch_count=68000.0, ans=0.0
2024-03-16 04:09:59,635 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=68000.0, ans=0.125
2024-03-16 04:10:08,997 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=11.27 vs. limit=15.0
2024-03-16 04:10:22,663 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=68066.66666666667, ans=0.0
2024-03-16 04:10:27,702 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.008e+01 8.767e+01 1.112e+02 1.458e+02 2.855e+02, threshold=2.223e+02, percent-clipped=15.0
2024-03-16 04:10:27,740 INFO [train_char.py:689] (0/2) Epoch 41, batch 150, loss[loss=0.05688, simple_loss=0.1066, pruned_loss=0.00359, over 24215.00 frames. ], tot_loss[loss=0.06314, simple_loss=0.1161, pruned_loss=0.005115, over 2559356.68 frames. ], batch size: 328, lr: 7.80e-03, grad_scale: 32.0
2024-03-16 04:10:28,105 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=68100.0, ans=0.1
2024-03-16 04:10:31,244 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=8.17 vs. limit=15.0
2024-03-16 04:11:10,456 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=12.84 vs. limit=15.0
2024-03-16 04:11:11,274 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff3_skip_rate, batch_count=68200.0, ans=0.0
2024-03-16 04:11:12,678 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer1.min_positive, batch_count=68200.0, ans=0.025
2024-03-16 04:11:26,140 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.whiten, num_groups=1, num_channels=384, metric=4.31 vs. limit=12.0
2024-03-16 04:11:29,795 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=384, metric=22.67 vs. limit=22.5
2024-03-16 04:11:38,183 INFO [train_char.py:689] (0/2) Epoch 41, batch 200, loss[loss=0.0636, simple_loss=0.1162, pruned_loss=0.005504, over 24241.00 frames. ], tot_loss[loss=0.06202, simple_loss=0.1143, pruned_loss=0.004867, over 3064007.25 frames. ], batch size: 328, lr: 7.79e-03, grad_scale: 32.0
2024-03-16 04:11:57,579 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=12.28 vs. limit=15.0
2024-03-16 04:12:07,390 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.ff2_skip_rate, batch_count=68333.33333333333, ans=0.0
2024-03-16 04:12:09,956 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module1.balancer2.prob, batch_count=68333.33333333333, ans=0.125
2024-03-16 04:12:17,609 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer1.prob, batch_count=68366.66666666667, ans=0.125
2024-03-16 04:12:31,249 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.conv.8.prob, batch_count=68366.66666666667, ans=0.125
2024-03-16 04:12:44,195 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward2.hidden_balancer.prob, batch_count=68400.0, ans=0.125
2024-03-16 04:12:46,459 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 4.687e+01 8.116e+01 1.013e+02 1.265e+02 2.717e+02, threshold=2.026e+02, percent-clipped=5.0
2024-03-16 04:12:46,487 INFO [train_char.py:689] (0/2) Epoch 41, batch 250, loss[loss=0.07289, simple_loss=0.1355, pruned_loss=0.005149, over 24048.00 frames. ], tot_loss[loss=0.06262, simple_loss=0.1153, pruned_loss=0.004991, over 3450821.70 frames. ], batch size: 199, lr: 7.78e-03, grad_scale: 32.0
2024-03-16 04:13:08,670 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.2.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00
2024-03-16 04:13:21,392 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.11 vs. limit=6.0
2024-03-16 04:13:30,927 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.ff3_skip_rate, batch_count=68533.33333333333, ans=0.0
2024-03-16 04:13:55,239 INFO [train_char.py:689] (0/2) Epoch 41, batch 300, loss[loss=0.07393, simple_loss=0.1377, pruned_loss=0.005108, over 24116.00 frames. ], tot_loss[loss=0.06284, simple_loss=0.1158, pruned_loss=0.004936, over 3751880.69 frames. ], batch size: 236, lr: 7.77e-03, grad_scale: 32.0
2024-03-16 04:13:58,046 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.bypass_mid.scale_min, batch_count=68600.0, ans=0.2
2024-03-16 04:14:24,619 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=512, metric=15.93 vs. limit=22.5
2024-03-16 04:14:36,849 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=68700.0, ans=0.125
2024-03-16 04:14:59,196 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=68733.33333333333, ans=0.125
2024-03-16 04:15:01,134 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 4.706e+01 8.610e+01 1.058e+02 1.394e+02 2.614e+02, threshold=2.115e+02, percent-clipped=7.0
2024-03-16 04:15:01,163 INFO [train_char.py:689] (0/2) Epoch 41, batch 350, loss[loss=0.05703, simple_loss=0.1084, pruned_loss=0.002829, over 24363.00 frames. ], tot_loss[loss=0.06331, simple_loss=0.1167, pruned_loss=0.004979, over 3992096.07 frames. ], batch size: 152, lr: 7.77e-03, grad_scale: 32.0
2024-03-16 04:15:26,900 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=68833.33333333333, ans=0.125
2024-03-16 04:15:28,514 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.self_attn2.whiten, num_groups=1, num_channels=384, metric=16.73 vs. limit=22.5
2024-03-16 04:15:39,010 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass_mid.scale_min, batch_count=68833.33333333333, ans=0.2
2024-03-16 04:15:42,766 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=68866.66666666667, ans=0.125
2024-03-16 04:15:53,853 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=68900.0, ans=0.1
2024-03-16 04:16:06,586 INFO [train_char.py:689] (0/2) Epoch 41, batch 400, loss[loss=0.07415, simple_loss=0.134, pruned_loss=0.007149, over 24106.00 frames. ], tot_loss[loss=0.06382, simple_loss=0.1175, pruned_loss=0.005084, over 4181198.45 frames. ], batch size: 236, lr: 7.76e-03, grad_scale: 32.0
2024-03-16 04:16:06,881 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module2.balancer1.prob, batch_count=68933.33333333333, ans=0.125
2024-03-16 04:16:31,907 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=12.75 vs. limit=15.0
2024-03-16 04:16:36,327 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer1.prob, batch_count=69000.0, ans=0.125
2024-03-16 04:17:00,377 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.balancer2.prob, batch_count=69066.66666666667, ans=0.125
2024-03-16 04:17:12,742 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.353e+01 8.046e+01 9.625e+01 1.198e+02 2.490e+02, threshold=1.925e+02, percent-clipped=3.0
2024-03-16 04:17:12,769 INFO [train_char.py:689] (0/2) Epoch 41, batch 450, loss[loss=0.06057, simple_loss=0.1117, pruned_loss=0.00472, over 24308.00 frames. ], tot_loss[loss=0.06445, simple_loss=0.1186, pruned_loss=0.005138, over 4326615.85 frames. ], batch size: 297, lr: 7.75e-03, grad_scale: 32.0
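[Editor's note] In the per-batch lines, loss[...] describes the current batch (weighted by its frame count) while tot_loss[...] aggregates recent batches, which is why its frame count climbs into the millions across an epoch. A hedged sketch of a frame-weighted running aggregate of this kind follows; the exponential decay constant is an assumption, not the exact train_char.py bookkeeping.

class RunningLoss:
    """Frame-weighted, slowly decaying aggregate of recent batch losses."""
    def __init__(self, decay=0.999):  # assumed smoothing constant
        self.decay = decay
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss, num_frames):
        self.loss_sum = self.decay * self.loss_sum + batch_loss * num_frames
        self.frames = self.decay * self.frames + num_frames

    @property
    def value(self):
        return self.loss_sum / self.frames  # printed as tot_loss[loss=...]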
2024-03-16 04:17:36,657 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_skip_rate, batch_count=69133.33333333333, ans=0.0
2024-03-16 04:17:42,688 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_skip_rate, batch_count=69166.66666666667, ans=0.0
2024-03-16 04:18:08,221 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=69233.33333333333, ans=0.1
2024-03-16 04:18:16,746 INFO [train_char.py:689] (0/2) Epoch 41, batch 500, loss[loss=0.06756, simple_loss=0.1209, pruned_loss=0.007118, over 24188.00 frames. ], tot_loss[loss=0.06515, simple_loss=0.12, pruned_loss=0.005145, over 4440560.45 frames. ], batch size: 311, lr: 7.74e-03, grad_scale: 32.0
2024-03-16 04:18:21,281 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=69266.66666666667, ans=0.0
2024-03-16 04:18:26,132 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-41.pt
2024-03-16 04:19:15,589 INFO [train_char.py:689] (0/2) Epoch 42, batch 0, loss[loss=0.06803, simple_loss=0.1272, pruned_loss=0.004412, over 24080.00 frames. ], tot_loss[loss=0.06803, simple_loss=0.1272, pruned_loss=0.004412, over 24080.00 frames. ], batch size: 236, lr: 7.65e-03, grad_scale: 32.0
2024-03-16 04:19:15,590 INFO [train_char.py:712] (0/2) Computing validation loss
2024-03-16 04:19:29,720 INFO [train_char.py:721] (0/2) Epoch 42, validation: loss=0.05734, simple_loss=0.1066, pruned_loss=0.004037, over 657665.00 frames.
2024-03-16 04:19:29,721 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB
2024-03-16 04:19:32,628 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.nonlin_attention.balancer.prob, batch_count=69290.0, ans=0.125
2024-03-16 04:19:37,955 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.convnext.out_balancer.prob, batch_count=69290.0, ans=0.125
2024-03-16 04:19:54,236 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=69323.33333333333, ans=0.1
2024-03-16 04:19:54,392 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.balancer1.prob, batch_count=69323.33333333333, ans=0.125
2024-03-16 04:20:11,209 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.ff3_skip_rate, batch_count=69356.66666666667, ans=0.0
2024-03-16 04:20:15,802 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=8.70 vs. limit=10.0
2024-03-16 04:20:23,402 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.1.self_attn_weights, loss-sum=0.000e+00
2024-03-16 04:20:32,427 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.178e+01 8.024e+01 9.663e+01 1.246e+02 2.316e+02, threshold=1.933e+02, percent-clipped=3.0
2024-03-16 04:20:41,963 INFO [train_char.py:689] (0/2) Epoch 42, batch 50, loss[loss=0.07113, simple_loss=0.1321, pruned_loss=0.005094, over 24103.00 frames. ], tot_loss[loss=0.06193, simple_loss=0.1145, pruned_loss=0.004673, over 1087895.20 frames. ], batch size: 223, lr: 7.64e-03, grad_scale: 32.0
2024-03-16 04:21:07,298 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00
2024-03-16 04:21:58,381 INFO [train_char.py:689] (0/2) Epoch 42, batch 100, loss[loss=0.07334, simple_loss=0.1367, pruned_loss=0.004971, over 24140.00 frames. ], tot_loss[loss=0.06087, simple_loss=0.1126, pruned_loss=0.004577, over 1916511.20 frames. ], batch size: 251, lr: 7.63e-03, grad_scale: 32.0
2024-03-16 04:22:13,538 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=384, metric=7.26 vs. limit=15.0
2024-03-16 04:22:50,965 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.out_combiner.scale_min, batch_count=69756.66666666667, ans=0.2
2024-03-16 04:22:53,187 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 4.851e+01 9.031e+01 1.137e+02 1.503e+02 3.377e+02, threshold=2.274e+02, percent-clipped=9.0
2024-03-16 04:23:02,319 INFO [train_char.py:689] (0/2) Epoch 42, batch 150, loss[loss=0.06409, simple_loss=0.1197, pruned_loss=0.004245, over 21562.00 frames. ], tot_loss[loss=0.06176, simple_loss=0.1141, pruned_loss=0.004709, over 2556437.93 frames. ], batch size: 86, lr: 7.62e-03, grad_scale: 32.0
2024-03-16 04:23:03,876 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module2.balancer2.prob, batch_count=69790.0, ans=0.125
2024-03-16 04:23:06,293 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=69790.0, ans=0.125
2024-03-16 04:23:24,200 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=69823.33333333333, ans=0.125
2024-03-16 04:23:42,035 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.0.layers.1.self_attn_weights, loss-sum=0.000e+00
2024-03-16 04:23:56,450 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.00 vs. limit=6.0
2024-03-16 04:24:06,287 INFO [train_char.py:689] (0/2) Epoch 42, batch 200, loss[loss=0.06841, simple_loss=0.128, pruned_loss=0.004392, over 24074.00 frames. ], tot_loss[loss=0.06243, simple_loss=0.1153, pruned_loss=0.004782, over 3051816.37 frames. ], batch size: 199, lr: 7.61e-03, grad_scale: 32.0
2024-03-16 04:24:23,260 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=69990.0, ans=0.125
2024-03-16 04:24:51,567 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=70056.66666666667, ans=0.1
2024-03-16 04:25:06,999 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=7.62 vs. limit=10.0
2024-03-16 04:25:08,628 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.710e+01 8.104e+01 9.775e+01 1.256e+02 2.245e+02, threshold=1.955e+02, percent-clipped=0.0
2024-03-16 04:25:17,556 INFO [train_char.py:689] (0/2) Epoch 42, batch 250, loss[loss=0.06888, simple_loss=0.1285, pruned_loss=0.00462, over 24081.00 frames. ], tot_loss[loss=0.06246, simple_loss=0.1155, pruned_loss=0.004735, over 3444463.37 frames. ], batch size: 188, lr: 7.61e-03, grad_scale: 32.0
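[Editor's note] The checkpoint.py:75 entries show two kinds of saves: an epoch-NN.pt written at the end of each 500-batch epoch and batch-count-based files such as checkpoint-20000.pt. A minimal sketch of the end-of-epoch save is below; the real checkpoint.py also stores state (sampler, grad scaler, averaged model) that is not shown here, and the function name is hypothetical.

import torch
from pathlib import Path

def save_epoch_checkpoint(exp_dir, epoch, model, optimizer, scheduler):
    state = {
        "epoch": epoch,
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
        "scheduler": scheduler.state_dict(),
    }
    torch.save(state, Path(exp_dir) / f"epoch-{epoch}.pt")
    # e.g. save_epoch_checkpoint("zipformer/exp_val", 42, ...) would write
    # zipformer/exp_val/epoch-42.pt, as in the log above.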
2024-03-16 04:25:30,184 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=70156.66666666667, ans=0.0
2024-03-16 04:25:36,740 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.2.self_attn_weights, loss-sum=0.000e+00
2024-03-16 04:25:40,626 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=70156.66666666667, ans=0.1
2024-03-16 04:25:51,874 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff3_skip_rate, batch_count=70190.0, ans=0.0
2024-03-16 04:26:06,018 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=70223.33333333333, ans=0.0
2024-03-16 04:26:07,301 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.attention_skip_rate, batch_count=70256.66666666667, ans=0.0
2024-03-16 04:26:15,566 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.self_attn1.whiten, num_groups=1, num_channels=192, metric=13.34 vs. limit=22.5
2024-03-16 04:26:20,993 INFO [train_char.py:689] (0/2) Epoch 42, batch 300, loss[loss=0.05684, simple_loss=0.1068, pruned_loss=0.003465, over 24338.00 frames. ], tot_loss[loss=0.06314, simple_loss=0.1167, pruned_loss=0.004785, over 3750225.82 frames. ], batch size: 129, lr: 7.60e-03, grad_scale: 32.0
2024-03-16 04:26:38,047 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.conv_module1.whiten, num_groups=1, num_channels=384, metric=3.11 vs. limit=15.0
2024-03-16 04:26:42,652 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.attention_skip_rate, batch_count=70323.33333333333, ans=0.0
2024-03-16 04:26:49,057 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=70356.66666666667, ans=0.1
2024-03-16 04:27:20,483 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.974e+01 7.732e+01 9.408e+01 1.206e+02 2.073e+02, threshold=1.882e+02, percent-clipped=1.0
2024-03-16 04:27:29,481 INFO [train_char.py:689] (0/2) Epoch 42, batch 350, loss[loss=0.07149, simple_loss=0.1315, pruned_loss=0.005741, over 24147.00 frames. ], tot_loss[loss=0.06367, simple_loss=0.1177, pruned_loss=0.004799, over 3988570.23 frames. ], batch size: 223, lr: 7.59e-03, grad_scale: 32.0
2024-03-16 04:27:34,912 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=70456.66666666667, ans=0.125
2024-03-16 04:27:54,876 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=70523.33333333333, ans=0.1
2024-03-16 04:27:56,188 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=70523.33333333333, ans=0.125
2024-03-16 04:27:56,294 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.ff3_skip_rate, batch_count=70523.33333333333, ans=0.0
2024-03-16 04:28:19,396 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.48 vs. limit=6.0
2024-03-16 04:28:30,742 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.conv_module2.whiten, num_groups=1, num_channels=256, metric=5.33 vs. limit=15.0
2024-03-16 04:28:32,410 INFO [train_char.py:689] (0/2) Epoch 42, batch 400, loss[loss=0.07283, simple_loss=0.1367, pruned_loss=0.004475, over 24068.00 frames. ], tot_loss[loss=0.06406, simple_loss=0.1183, pruned_loss=0.004908, over 4177705.94 frames. ], batch size: 199, lr: 7.58e-03, grad_scale: 32.0
2024-03-16 04:28:54,172 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward2.hidden_balancer.prob, batch_count=70656.66666666667, ans=0.125
2024-03-16 04:28:55,327 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass_mid.scale_min, batch_count=70656.66666666667, ans=0.2
2024-03-16 04:29:00,509 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=70690.0, ans=0.1
2024-03-16 04:29:11,643 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.min_positive, batch_count=70690.0, ans=0.05
2024-03-16 04:29:29,639 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.709e+01 7.851e+01 1.003e+02 1.420e+02 2.377e+02, threshold=2.006e+02, percent-clipped=9.0
2024-03-16 04:29:34,710 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.bypass.skip_rate, batch_count=70756.66666666667, ans=0.07
2024-03-16 04:29:38,325 INFO [train_char.py:689] (0/2) Epoch 42, batch 450, loss[loss=0.07249, simple_loss=0.1335, pruned_loss=0.005718, over 24095.00 frames. ], tot_loss[loss=0.06425, simple_loss=0.1187, pruned_loss=0.004928, over 4323244.17 frames. ], batch size: 199, lr: 7.57e-03, grad_scale: 32.0
2024-03-16 04:29:53,014 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_skip_rate, batch_count=70823.33333333333, ans=0.0
2024-03-16 04:29:57,984 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.ff3_skip_rate, batch_count=70823.33333333333, ans=0.0
2024-03-16 04:30:01,769 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=70823.33333333333, ans=0.1
2024-03-16 04:30:05,326 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=70856.66666666667, ans=0.125
2024-03-16 04:30:09,218 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=70856.66666666667, ans=0.125
2024-03-16 04:30:16,581 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_module2.balancer1.min_positive, batch_count=70890.0, ans=0.025
2024-03-16 04:30:39,465 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.25 vs. limit=6.0
2024-03-16 04:30:43,384 INFO [train_char.py:689] (0/2) Epoch 42, batch 500, loss[loss=0.07113, simple_loss=0.1308, pruned_loss=0.005735, over 24036.00 frames. ], tot_loss[loss=0.0648, simple_loss=0.1198, pruned_loss=0.004907, over 4437094.51 frames. ], batch size: 250, lr: 7.57e-03, grad_scale: 32.0
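[Editor's note] Each batch line also prints grad_scale, the loss-scaling factor of mixed-precision training; it moves in powers of two (64.0 earlier in the log, 32.0 here, 16.0 later in epoch 43) as the scaler backs off after gradient overflows and grows again after long stable stretches. Below is a minimal sketch of the standard dynamic loss-scaling loop with torch.cuda.amp; model, optimizer, dataloader and criterion are placeholders, and this is the stock PyTorch pattern rather than train_char.py itself.

import torch

def train_steps(model, optimizer, dataloader, criterion, device="cuda"):
    scaler = torch.cuda.amp.GradScaler()  # starts at a large power of two
    for batch, target in dataloader:
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = criterion(model(batch.to(device)), target.to(device))
        scaler.scale(loss).backward()
        scaler.step(optimizer)  # skipped if the scaled gradients overflowed
        scaler.update()         # halves the scale on overflow, else slowly grows
        print("grad_scale:", scaler.get_scale())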
2024-03-16 04:30:47,317 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=70956.66666666667, ans=0.0
2024-03-16 04:30:52,211 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-42.pt
2024-03-16 04:31:44,759 INFO [train_char.py:689] (0/2) Epoch 43, batch 0, loss[loss=0.06179, simple_loss=0.113, pruned_loss=0.005292, over 24415.00 frames. ], tot_loss[loss=0.06179, simple_loss=0.113, pruned_loss=0.005292, over 24415.00 frames. ], batch size: 158, lr: 7.47e-03, grad_scale: 32.0
2024-03-16 04:31:44,760 INFO [train_char.py:712] (0/2) Computing validation loss
2024-03-16 04:31:57,010 INFO [zipformer.py:1858] (0/2) name=encoder.encoders.5.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([4.1669, 3.8079, 3.8150, 3.7275], device='cuda:0')
2024-03-16 04:31:58,278 INFO [train_char.py:721] (0/2) Epoch 43, validation: loss=0.05703, simple_loss=0.1061, pruned_loss=0.00396, over 657665.00 frames.
2024-03-16 04:31:58,279 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB
2024-03-16 04:31:58,745 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.balancer1.prob, batch_count=70980.0, ans=0.125
2024-03-16 04:32:10,843 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=71013.33333333333, ans=0.1
2024-03-16 04:32:12,653 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=14.08 vs. limit=15.0
2024-03-16 04:32:44,792 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.conv_module1.whiten, num_groups=1, num_channels=512, metric=4.30 vs. limit=15.0
2024-03-16 04:32:46,887 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.456e+01 8.096e+01 1.078e+02 1.458e+02 3.187e+02, threshold=2.155e+02, percent-clipped=5.0
2024-03-16 04:33:05,937 INFO [train_char.py:689] (0/2) Epoch 43, batch 50, loss[loss=0.0594, simple_loss=0.1083, pruned_loss=0.005235, over 24231.00 frames. ], tot_loss[loss=0.0605, simple_loss=0.1119, pruned_loss=0.00456, over 1084495.56 frames. ], batch size: 328, lr: 7.47e-03, grad_scale: 32.0
2024-03-16 04:33:44,778 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.bypass.scale_min, batch_count=71213.33333333333, ans=0.2
2024-03-16 04:33:50,086 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.attention_skip_rate, batch_count=71246.66666666667, ans=0.0
2024-03-16 04:33:53,869 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=71246.66666666667, ans=0.125
2024-03-16 04:33:56,764 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.conv_module1.whiten, num_groups=1, num_channels=384, metric=2.80 vs. limit=15.0
2024-03-16 04:34:08,901 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=71280.0, ans=0.1
2024-03-16 04:34:12,810 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.ff3_skip_rate, batch_count=71280.0, ans=0.0
2024-03-16 04:34:16,482 INFO [train_char.py:689] (0/2) Epoch 43, batch 100, loss[loss=0.0456, simple_loss=0.07765, pruned_loss=0.006779, over 22822.00 frames. ], tot_loss[loss=0.06039, simple_loss=0.1114, pruned_loss=0.004707, over 1909909.66 frames. ], batch size: 483, lr: 7.46e-03, grad_scale: 32.0
2024-03-16 04:34:24,100 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=15.69 vs. limit=15.0
2024-03-16 04:34:45,911 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward3.hidden_balancer.prob, batch_count=71380.0, ans=0.125
2024-03-16 04:34:55,578 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=384, metric=4.46 vs. limit=15.0
2024-03-16 04:34:57,641 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_skip_rate, batch_count=71413.33333333333, ans=0.0
2024-03-16 04:35:00,199 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass.scale_min, batch_count=71413.33333333333, ans=0.2
2024-03-16 04:35:03,833 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.462e+01 8.199e+01 1.186e+02 1.417e+02 2.618e+02, threshold=2.372e+02, percent-clipped=5.0
2024-03-16 04:35:21,871 INFO [train_char.py:689] (0/2) Epoch 43, batch 150, loss[loss=0.06887, simple_loss=0.1253, pruned_loss=0.006229, over 24127.00 frames. ], tot_loss[loss=0.06093, simple_loss=0.1126, pruned_loss=0.00461, over 2551417.85 frames. ], batch size: 279, lr: 7.45e-03, grad_scale: 32.0
2024-03-16 04:36:33,701 INFO [train_char.py:689] (0/2) Epoch 43, batch 200, loss[loss=0.07295, simple_loss=0.1352, pruned_loss=0.005345, over 24034.00 frames. ], tot_loss[loss=0.06144, simple_loss=0.1134, pruned_loss=0.004716, over 3051592.78 frames. ], batch size: 236, lr: 7.44e-03, grad_scale: 16.0
2024-03-16 04:36:38,072 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.ff2_skip_rate, batch_count=71646.66666666667, ans=0.0
2024-03-16 04:36:39,806 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=384, metric=4.66 vs. limit=15.0
2024-03-16 04:36:44,523 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer1.prob, batch_count=71646.66666666667, ans=0.125
2024-03-16 04:36:45,775 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=71680.0, ans=0.1
2024-03-16 04:36:47,145 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_skip_rate, batch_count=71680.0, ans=0.0
2024-03-16 04:36:59,033 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=13.82 vs. limit=15.0
2024-03-16 04:37:11,422 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=10.82 vs. limit=22.5
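[Editor's note] During the epoch-43 validation above, zipformer.py:1858 also prints an attn_weights_entropy tensor for one self-attention module, plausibly one entropy value per attention head. Values near log(sequence length) mean nearly uniform attention; values collapsing toward zero mean heads locked onto single positions. A minimal sketch of such a diagnostic follows; the exact reduction used in zipformer.py may differ.

import torch

def attn_weights_entropy(attn):
    # attn: (num_heads, query_len, key_len); each row is a softmax distribution.
    eps = 1.0e-20
    ent = -(attn * (attn + eps).log()).sum(dim=-1)  # entropy per query position
    return ent.mean(dim=-1)                          # averaged per head

attn = torch.softmax(torch.randn(4, 50, 50), dim=-1)
print(attn_weights_entropy(attn))  # roughly log(50)~3.9 per head, cf. tensor([4.1669, ...]) above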
2024-03-16 04:37:16,118 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module2.balancer2.prob, batch_count=71746.66666666667, ans=0.125
2024-03-16 04:37:20,809 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.506e+01 7.657e+01 9.834e+01 1.161e+02 2.413e+02, threshold=1.967e+02, percent-clipped=1.0
2024-03-16 04:37:37,738 INFO [train_char.py:689] (0/2) Epoch 43, batch 250, loss[loss=0.05146, simple_loss=0.09388, pruned_loss=0.004514, over 24031.00 frames. ], tot_loss[loss=0.0616, simple_loss=0.1135, pruned_loss=0.004865, over 3442320.67 frames. ], batch size: 381, lr: 7.44e-03, grad_scale: 16.0
2024-03-16 04:37:40,905 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=8.79 vs. limit=15.0
2024-03-16 04:37:43,051 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.ff2_skip_rate, batch_count=71813.33333333333, ans=0.0
2024-03-16 04:37:46,847 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer1.prob, batch_count=71813.33333333333, ans=0.125
2024-03-16 04:38:04,985 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=71880.0, ans=0.125
2024-03-16 04:38:15,376 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_skip_rate, batch_count=71913.33333333333, ans=0.0
2024-03-16 04:38:31,815 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=71946.66666666667, ans=0.1
2024-03-16 04:38:31,916 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.out_combiner.scale_min, batch_count=71946.66666666667, ans=0.2
2024-03-16 04:38:44,785 INFO [train_char.py:689] (0/2) Epoch 43, batch 300, loss[loss=0.06159, simple_loss=0.113, pruned_loss=0.005112, over 24386.00 frames. ], tot_loss[loss=0.06257, simple_loss=0.1154, pruned_loss=0.004851, over 3745762.36 frames. ], batch size: 158, lr: 7.43e-03, grad_scale: 16.0
2024-03-16 04:38:53,486 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer1.prob, batch_count=71980.0, ans=0.125
2024-03-16 04:39:00,332 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=72013.33333333333, ans=0.1
2024-03-16 04:39:21,581 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=72046.66666666667, ans=0.1
2024-03-16 04:39:35,324 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.018e+01 8.350e+01 1.044e+02 1.466e+02 2.517e+02, threshold=2.089e+02, percent-clipped=4.0
2024-03-16 04:39:41,014 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=384, metric=14.24 vs. limit=22.5
2024-03-16 04:39:48,264 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=72113.33333333333, ans=0.0
2024-03-16 04:39:49,509 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.ff2_skip_rate, batch_count=72113.33333333333, ans=0.0
2024-03-16 04:39:51,836 INFO [train_char.py:689] (0/2) Epoch 43, batch 350, loss[loss=0.07182, simple_loss=0.1294, pruned_loss=0.007126, over 24118.00 frames. ], tot_loss[loss=0.06274, simple_loss=0.1157, pruned_loss=0.004888, over 3986285.61 frames. ], batch size: 279, lr: 7.42e-03, grad_scale: 16.0
2024-03-16 04:40:34,097 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=72246.66666666667, ans=0.1
2024-03-16 04:40:58,217 INFO [train_char.py:689] (0/2) Epoch 43, batch 400, loss[loss=0.0648, simple_loss=0.1177, pruned_loss=0.005949, over 24247.00 frames. ], tot_loss[loss=0.06337, simple_loss=0.1171, pruned_loss=0.004848, over 4176643.48 frames. ], batch size: 296, lr: 7.41e-03, grad_scale: 32.0
2024-03-16 04:41:00,402 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.94 vs. limit=6.0
2024-03-16 04:41:04,066 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=11.99 vs. limit=15.0
2024-03-16 04:41:04,732 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=72313.33333333333, ans=0.1
2024-03-16 04:41:25,125 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.21 vs. limit=6.0
2024-03-16 04:41:44,155 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=384, metric=20.41 vs. limit=22.5
2024-03-16 04:41:44,470 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.552e+01 7.784e+01 9.416e+01 1.205e+02 2.528e+02, threshold=1.883e+02, percent-clipped=1.0
2024-03-16 04:41:47,309 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer2.prob, batch_count=72446.66666666667, ans=0.125
2024-03-16 04:41:53,008 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=11.98 vs. limit=15.0
2024-03-16 04:42:03,527 INFO [train_char.py:689] (0/2) Epoch 43, batch 450, loss[loss=0.0671, simple_loss=0.1204, pruned_loss=0.006914, over 24393.00 frames. ], tot_loss[loss=0.06383, simple_loss=0.118, pruned_loss=0.004836, over 4324672.46 frames. ], batch size: 172, lr: 7.40e-03, grad_scale: 32.0
2024-03-16 04:42:07,405 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff2_skip_rate, batch_count=72480.0, ans=0.0
2024-03-16 04:42:13,059 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.conv_module1.whiten, num_groups=1, num_channels=512, metric=3.47 vs. limit=15.0
limit=15.0 2024-03-16 04:42:23,456 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff2_skip_rate, batch_count=72513.33333333333, ans=0.0 2024-03-16 04:42:25,456 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=5.55 vs. limit=10.0 2024-03-16 04:42:32,211 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer_ff2.min_abs, batch_count=72546.66666666667, ans=0.1 2024-03-16 04:42:36,369 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=5.89 vs. limit=15.0 2024-03-16 04:42:59,570 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.conv.8.prob, batch_count=72613.33333333333, ans=0.125 2024-03-16 04:43:06,826 INFO [train_char.py:689] (0/2) Epoch 43, batch 500, loss[loss=0.06216, simple_loss=0.117, pruned_loss=0.003684, over 24334.00 frames. ], tot_loss[loss=0.06462, simple_loss=0.1196, pruned_loss=0.004822, over 4436989.72 frames. ], batch size: 172, lr: 7.40e-03, grad_scale: 32.0 2024-03-16 04:43:16,195 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-43.pt 2024-03-16 04:44:09,209 INFO [train_char.py:689] (0/2) Epoch 44, batch 0, loss[loss=0.05369, simple_loss=0.094, pruned_loss=0.006695, over 23761.00 frames. ], tot_loss[loss=0.05369, simple_loss=0.094, pruned_loss=0.006695, over 23761.00 frames. ], batch size: 439, lr: 7.31e-03, grad_scale: 32.0 2024-03-16 04:44:09,211 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 04:44:22,968 INFO [train_char.py:721] (0/2) Epoch 44, validation: loss=0.05726, simple_loss=0.1064, pruned_loss=0.004049, over 657665.00 frames. 2024-03-16 04:44:22,969 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 04:45:00,189 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=10.22 vs. limit=15.0 2024-03-16 04:45:03,367 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.514e+01 7.866e+01 9.379e+01 1.188e+02 2.924e+02, threshold=1.876e+02, percent-clipped=7.0 2024-03-16 04:45:15,166 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=72770.0, ans=0.1 2024-03-16 04:45:34,493 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer_ff3.min_abs, batch_count=72836.66666666667, ans=0.2 2024-03-16 04:45:35,505 INFO [train_char.py:689] (0/2) Epoch 44, batch 50, loss[loss=0.06405, simple_loss=0.1155, pruned_loss=0.006303, over 24210.00 frames. ], tot_loss[loss=0.06076, simple_loss=0.1123, pruned_loss=0.004612, over 1084705.53 frames. ], batch size: 311, lr: 7.30e-03, grad_scale: 32.0 2024-03-16 04:45:37,164 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=72836.66666666667, ans=0.1 2024-03-16 04:45:44,007 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.bypass_mid.scale_min, batch_count=72836.66666666667, ans=0.2 2024-03-16 04:45:48,327 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=23.57 vs. 
limit=15.0 2024-03-16 04:46:01,514 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=4.97 vs. limit=12.0 2024-03-16 04:46:12,643 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=72903.33333333333, ans=0.125 2024-03-16 04:46:19,170 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=72936.66666666667, ans=0.1 2024-03-16 04:46:21,053 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=512, metric=13.80 vs. limit=22.5 2024-03-16 04:46:29,663 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer2.prob, batch_count=72970.0, ans=0.125 2024-03-16 04:46:31,014 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=72970.0, ans=0.1 2024-03-16 04:46:31,415 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=12.97 vs. limit=22.5 2024-03-16 04:46:41,174 INFO [train_char.py:689] (0/2) Epoch 44, batch 100, loss[loss=0.06033, simple_loss=0.1118, pruned_loss=0.004444, over 24347.00 frames. ], tot_loss[loss=0.06081, simple_loss=0.112, pruned_loss=0.004812, over 1909521.30 frames. ], batch size: 158, lr: 7.30e-03, grad_scale: 32.0 2024-03-16 04:46:45,482 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=73003.33333333333, ans=0.1 2024-03-16 04:46:49,172 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=73003.33333333333, ans=0.125 2024-03-16 04:46:56,789 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.nonlin_attention.balancer.prob, batch_count=73036.66666666667, ans=0.125 2024-03-16 04:47:19,288 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.038e+01 8.357e+01 1.042e+02 1.354e+02 2.440e+02, threshold=2.085e+02, percent-clipped=11.0 2024-03-16 04:47:27,223 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.balancer1.prob, batch_count=73103.33333333333, ans=0.125 2024-03-16 04:47:28,410 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=73103.33333333333, ans=0.125 2024-03-16 04:47:29,566 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer1.prob, batch_count=73103.33333333333, ans=0.125 2024-03-16 04:47:53,100 INFO [train_char.py:689] (0/2) Epoch 44, batch 150, loss[loss=0.05897, simple_loss=0.1106, pruned_loss=0.00365, over 24431.00 frames. ], tot_loss[loss=0.06089, simple_loss=0.1125, pruned_loss=0.004668, over 2557466.02 frames. ], batch size: 165, lr: 7.29e-03, grad_scale: 32.0 2024-03-16 04:47:57,106 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.balancer1.prob, batch_count=73170.0, ans=0.125 2024-03-16 04:48:14,389 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=192, metric=13.93 vs. 
limit=15.0 2024-03-16 04:48:28,306 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=5.66 vs. limit=10.0 2024-03-16 04:48:32,890 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=73270.0, ans=0.1 2024-03-16 04:48:57,170 INFO [train_char.py:689] (0/2) Epoch 44, batch 200, loss[loss=0.05539, simple_loss=0.1028, pruned_loss=0.00399, over 23811.00 frames. ], tot_loss[loss=0.06162, simple_loss=0.1138, pruned_loss=0.004714, over 3058695.65 frames. ], batch size: 107, lr: 7.28e-03, grad_scale: 32.0 2024-03-16 04:49:02,397 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.attention_skip_rate, batch_count=73336.66666666667, ans=0.0 2024-03-16 04:49:30,213 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=73403.33333333333, ans=0.125 2024-03-16 04:49:35,049 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.444e+01 8.244e+01 1.074e+02 1.495e+02 2.535e+02, threshold=2.148e+02, percent-clipped=6.0 2024-03-16 04:49:52,418 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=192, metric=8.84 vs. limit=15.0 2024-03-16 04:50:00,031 INFO [train_char.py:689] (0/2) Epoch 44, batch 250, loss[loss=0.05771, simple_loss=0.1072, pruned_loss=0.004106, over 24415.00 frames. ], tot_loss[loss=0.06093, simple_loss=0.1128, pruned_loss=0.004548, over 3447617.32 frames. ], batch size: 158, lr: 7.27e-03, grad_scale: 32.0 2024-03-16 04:50:01,551 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer2.prob, batch_count=73503.33333333333, ans=0.125 2024-03-16 04:50:11,585 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=73536.66666666667, ans=0.1 2024-03-16 04:51:07,803 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.ff3_skip_rate, batch_count=73636.66666666667, ans=0.0 2024-03-16 04:51:10,166 INFO [train_char.py:689] (0/2) Epoch 44, batch 300, loss[loss=0.06253, simple_loss=0.1163, pruned_loss=0.004389, over 24331.00 frames. ], tot_loss[loss=0.06132, simple_loss=0.1136, pruned_loss=0.004491, over 3754545.79 frames. ], batch size: 297, lr: 7.27e-03, grad_scale: 32.0 2024-03-16 04:51:33,773 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.self_attn1.whiten.whitening_limit, batch_count=73703.33333333333, ans=22.5 2024-03-16 04:51:45,593 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.scale_min, batch_count=73736.66666666667, ans=0.2 2024-03-16 04:51:47,940 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.439e+01 7.602e+01 9.912e+01 1.258e+02 2.134e+02, threshold=1.982e+02, percent-clipped=0.0 2024-03-16 04:51:54,826 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=73770.0, ans=0.125 2024-03-16 04:52:13,578 INFO [train_char.py:689] (0/2) Epoch 44, batch 350, loss[loss=0.06314, simple_loss=0.1147, pruned_loss=0.005774, over 21287.00 frames. ], tot_loss[loss=0.0618, simple_loss=0.1147, pruned_loss=0.004471, over 3988288.36 frames. 
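
Every training record above follows one fixed shape: "Epoch N, batch B, loss[...], tot_loss[loss=..., ...], batch size: ..., lr: ..., grad_scale: ...". A minimal sketch for pulling the tot_loss curve back out of a log like this one, assuming the raw text has been saved to a file (the name train-log.txt is hypothetical):

    import re

    # Matches training records like the ones above, e.g.
    # "Epoch 43, batch 350, loss[...], tot_loss[loss=0.06274, ...".
    RECORD = re.compile(
        r"Epoch (\d+), batch (\d+),.*?tot_loss\[loss=([0-9.]+)",
        re.DOTALL,
    )

    def tot_loss_curve(text):
        """Yield (epoch, batch, tot_loss) for every training record found."""
        for epoch, batch, loss in RECORD.findall(text):
            yield int(epoch), int(batch), float(loss)

    with open("train-log.txt") as f:  # hypothetical file holding this log
        for epoch, batch, loss in tot_loss_curve(f.read()):
            print(f"epoch {epoch:3d} batch {batch:4d} tot_loss {loss:.5f}")

The non-greedy match skips the per-batch loss[...] field and keys only on the smoothed tot_loss, so validation records (which carry no "batch B," field) are ignored automatically.
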
], batch size: 85, lr: 7.26e-03, grad_scale: 32.0 2024-03-16 04:52:13,784 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer2.prob, batch_count=73836.66666666667, ans=0.125 2024-03-16 04:52:51,076 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_skip_rate, batch_count=73903.33333333333, ans=0.0 2024-03-16 04:52:53,592 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=73903.33333333333, ans=0.1 2024-03-16 04:52:58,672 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module1.balancer2.prob, batch_count=73936.66666666667, ans=0.125 2024-03-16 04:53:21,176 INFO [train_char.py:689] (0/2) Epoch 44, batch 400, loss[loss=0.05863, simple_loss=0.1078, pruned_loss=0.004756, over 24189.00 frames. ], tot_loss[loss=0.06219, simple_loss=0.1154, pruned_loss=0.004512, over 4174528.47 frames. ], batch size: 344, lr: 7.25e-03, grad_scale: 32.0 2024-03-16 04:53:25,770 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=7.44 vs. limit=10.0 2024-03-16 04:53:34,714 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=4.98 vs. limit=12.0 2024-03-16 04:54:00,568 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.435e+01 7.694e+01 1.010e+02 1.300e+02 2.560e+02, threshold=2.020e+02, percent-clipped=2.0 2024-03-16 04:54:12,314 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.balancer2.prob, batch_count=74103.33333333333, ans=0.125 2024-03-16 04:54:21,142 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=74136.66666666667, ans=0.0 2024-03-16 04:54:26,917 INFO [train_char.py:689] (0/2) Epoch 44, batch 450, loss[loss=0.0645, simple_loss=0.1163, pruned_loss=0.006325, over 24206.00 frames. ], tot_loss[loss=0.06301, simple_loss=0.1168, pruned_loss=0.004625, over 4320603.82 frames. ], batch size: 311, lr: 7.24e-03, grad_scale: 32.0 2024-03-16 04:54:47,051 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass.scale_min, batch_count=74203.33333333333, ans=0.2 2024-03-16 04:54:48,359 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff2_skip_rate, batch_count=74203.33333333333, ans=0.0 2024-03-16 04:54:49,100 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.conv_module2.whiten, num_groups=1, num_channels=192, metric=10.11 vs. limit=15.0 2024-03-16 04:54:51,053 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.attention_skip_rate, batch_count=74236.66666666667, ans=0.0 2024-03-16 04:55:00,347 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer2.prob, batch_count=74236.66666666667, ans=0.125 2024-03-16 04:55:25,728 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass_mid.scale_min, batch_count=74303.33333333333, ans=0.2 2024-03-16 04:55:30,538 INFO [train_char.py:689] (0/2) Epoch 44, batch 500, loss[loss=0.06614, simple_loss=0.1239, pruned_loss=0.004204, over 24098.00 frames. ], tot_loss[loss=0.06405, simple_loss=0.1188, pruned_loss=0.004664, over 4434193.44 frames. 
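
Each scaling.py:214 record above reports the current value (ans=) of one scheduled hyperparameter at the current batch_count: by this stage of training most skip rates read 0.0, the dropout_p values 0.1, and the balancer probabilities 0.125. A minimal stand-in for that idea, a float interpolated piecewise-linearly in batch_count; the schedule points below are invented for illustration, and this is not icefall's scaling.py implementation:

    class PiecewiseLinearFloat:
        """A float whose value is scheduled piecewise-linearly on batch_count."""

        def __init__(self, *points):
            # points: (batch_count, value) pairs
            self.points = sorted(points)

        def __call__(self, batch_count):
            pts = self.points
            if batch_count <= pts[0][0]:
                return pts[0][1]
            if batch_count >= pts[-1][0]:
                return pts[-1][1]
            for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
                if x0 <= batch_count <= x1:
                    t = (batch_count - x0) / (x1 - x0)
                    return y0 + t * (y1 - y0)

    # e.g. a skip rate that decays to zero over the first 20k batches would
    # read ans=0.0 at the batch_counts above (~72k):
    skip_rate = PiecewiseLinearFloat((0.0, 0.5), (20000.0, 0.0))
    assert skip_rate(72313.33) == 0.0
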
], batch size: 251, lr: 7.24e-03, grad_scale: 32.0 2024-03-16 04:55:34,637 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.bypass_mid.scale_min, batch_count=74336.66666666667, ans=0.2 2024-03-16 04:55:39,238 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-44.pt 2024-03-16 04:56:29,222 INFO [train_char.py:689] (0/2) Epoch 45, batch 0, loss[loss=0.06586, simple_loss=0.121, pruned_loss=0.005341, over 24094.00 frames. ], tot_loss[loss=0.06586, simple_loss=0.121, pruned_loss=0.005341, over 24094.00 frames. ], batch size: 223, lr: 7.15e-03, grad_scale: 32.0 2024-03-16 04:56:29,223 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 04:56:43,110 INFO [train_char.py:721] (0/2) Epoch 45, validation: loss=0.05746, simple_loss=0.1068, pruned_loss=0.004041, over 657665.00 frames. 2024-03-16 04:56:43,111 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 04:56:54,144 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=74360.0, ans=0.1 2024-03-16 04:57:18,632 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.conv_module2.whiten, num_groups=1, num_channels=512, metric=4.57 vs. limit=15.0 2024-03-16 04:57:23,343 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.118e+01 7.188e+01 8.594e+01 1.151e+02 1.943e+02, threshold=1.719e+02, percent-clipped=0.0 2024-03-16 04:57:29,212 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.conv_module2.balancer1.prob, batch_count=74426.66666666667, ans=0.125 2024-03-16 04:57:29,773 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=9.88 vs. limit=15.0 2024-03-16 04:57:32,777 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=74460.0, ans=0.1 2024-03-16 04:57:44,137 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=15.29 vs. limit=22.5 2024-03-16 04:57:59,181 INFO [train_char.py:689] (0/2) Epoch 45, batch 50, loss[loss=0.05777, simple_loss=0.1081, pruned_loss=0.003702, over 24174.00 frames. ], tot_loss[loss=0.05986, simple_loss=0.1114, pruned_loss=0.004148, over 1085967.55 frames. ], batch size: 344, lr: 7.15e-03, grad_scale: 32.0 2024-03-16 04:58:05,923 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff3_skip_rate, batch_count=74526.66666666667, ans=0.0 2024-03-16 04:58:25,577 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer1.min_positive, batch_count=74593.33333333333, ans=0.025 2024-03-16 04:58:32,017 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module2.balancer1.prob, batch_count=74593.33333333333, ans=0.125 2024-03-16 04:58:38,803 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff2_skip_rate, batch_count=74626.66666666667, ans=0.0 2024-03-16 04:59:10,223 INFO [train_char.py:689] (0/2) Epoch 45, batch 100, loss[loss=0.06601, simple_loss=0.1221, pruned_loss=0.004951, over 24209.00 frames. ], tot_loss[loss=0.06001, simple_loss=0.1117, pruned_loss=0.00415, over 1907908.13 frames. 
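
The recurring optim.py:487 warnings summarize the distribution of recent gradient norms as five quantiles (apparently min, the three quartiles, and max) together with the clipping threshold in force; in each record the threshold equals Clipping_scale times the printed median (e.g. 2.0 * 8.594e+01 = 1.719e+02 above), and percent-clipped is presumably the share of recent batches whose norm exceeded it. A schematic sketch of that bookkeeping under those assumptions; this is not icefall's optimizer code:

    from collections import deque
    import torch

    class MedianGradClipper:
        """Clip the global grad norm at clipping_scale * median recent norm.

        Schematic only: it reproduces the relationship visible in the
        warnings above (threshold = 2.0 * median), not icefall's ScaledAdam.
        """

        def __init__(self, clipping_scale=2.0, history=128):
            self.scale = clipping_scale
            self.norms = deque(maxlen=history)

        def clip_(self, parameters):
            grads = [p.grad for p in parameters if p.grad is not None]
            # Global L2 norm across all parameters.
            norm = torch.norm(torch.stack([g.norm() for g in grads])).item()
            self.norms.append(norm)
            ordered = sorted(self.norms)
            threshold = self.scale * ordered[len(ordered) // 2]
            if norm > threshold:
                for g in grads:
                    g.mul_(threshold / norm)
            return norm, threshold
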
], batch size: 311, lr: 7.14e-03, grad_scale: 32.0 2024-03-16 04:59:10,505 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass.skip_rate, batch_count=74693.33333333333, ans=0.07 2024-03-16 04:59:31,042 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=74726.66666666667, ans=0.1 2024-03-16 04:59:45,003 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.680e+01 7.803e+01 9.889e+01 1.283e+02 2.570e+02, threshold=1.978e+02, percent-clipped=8.0 2024-03-16 04:59:47,859 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.2.self_attn_weights, loss-sum=0.000e+00 2024-03-16 04:59:54,429 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module1.balancer2.prob, batch_count=74793.33333333333, ans=0.125 2024-03-16 04:59:57,084 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_skip_rate, batch_count=74793.33333333333, ans=0.0 2024-03-16 04:59:57,085 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.ff3_skip_rate, batch_count=74793.33333333333, ans=0.0 2024-03-16 04:59:59,689 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.balancer.prob, batch_count=74793.33333333333, ans=0.125 2024-03-16 05:00:19,658 INFO [train_char.py:689] (0/2) Epoch 45, batch 150, loss[loss=0.05976, simple_loss=0.1145, pruned_loss=0.0025, over 22018.00 frames. ], tot_loss[loss=0.05987, simple_loss=0.1113, pruned_loss=0.004212, over 2549738.99 frames. ], batch size: 88, lr: 7.13e-03, grad_scale: 32.0 2024-03-16 05:00:30,446 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass.scale_min, batch_count=74860.0, ans=0.2 2024-03-16 05:01:09,006 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer2.min_positive, batch_count=74960.0, ans=0.05 2024-03-16 05:01:10,231 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer1.prob, batch_count=74993.33333333333, ans=0.125 2024-03-16 05:01:16,502 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff3_skip_rate, batch_count=74993.33333333333, ans=0.0 2024-03-16 05:01:23,902 INFO [train_char.py:689] (0/2) Epoch 45, batch 200, loss[loss=0.07502, simple_loss=0.1373, pruned_loss=0.006365, over 24034.00 frames. ], tot_loss[loss=0.06039, simple_loss=0.1123, pruned_loss=0.00425, over 3051513.45 frames. ], batch size: 250, lr: 7.12e-03, grad_scale: 32.0 2024-03-16 05:01:36,168 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=13.23 vs. 
limit=15.0 2024-03-16 05:01:37,103 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff2_skip_rate, batch_count=75060.0, ans=0.0 2024-03-16 05:01:53,220 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 4.313e+01 8.035e+01 1.015e+02 1.442e+02 2.562e+02, threshold=2.030e+02, percent-clipped=7.0 2024-03-16 05:02:02,564 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=75126.66666666667, ans=0.1 2024-03-16 05:02:18,157 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer1.prob, batch_count=75160.0, ans=0.125 2024-03-16 05:02:31,954 INFO [train_char.py:689] (0/2) Epoch 45, batch 250, loss[loss=0.0551, simple_loss=0.1055, pruned_loss=0.002351, over 23877.00 frames. ], tot_loss[loss=0.06126, simple_loss=0.1138, pruned_loss=0.00434, over 3442073.81 frames. ], batch size: 107, lr: 7.12e-03, grad_scale: 32.0 2024-03-16 05:02:41,581 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=75193.33333333333, ans=0.0 2024-03-16 05:02:46,923 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.1.self_attn_weights, loss-sum=0.000e+00 2024-03-16 05:02:54,549 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.self_attn_weights.pos_emb_skip_rate, batch_count=75226.66666666667, ans=0.0 2024-03-16 05:03:12,445 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.bypass.scale_min, batch_count=75293.33333333333, ans=0.2 2024-03-16 05:03:29,737 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=11.08 vs. limit=15.0 2024-03-16 05:03:39,169 INFO [train_char.py:689] (0/2) Epoch 45, batch 300, loss[loss=0.06042, simple_loss=0.1089, pruned_loss=0.005978, over 24097.00 frames. ], tot_loss[loss=0.06093, simple_loss=0.1133, pruned_loss=0.00427, over 3751881.30 frames. ], batch size: 343, lr: 7.11e-03, grad_scale: 32.0 2024-03-16 05:03:41,915 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.nonlin_attention.balancer.prob, batch_count=75360.0, ans=0.125 2024-03-16 05:03:46,449 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.conv_module1.whiten, num_groups=1, num_channels=192, metric=5.02 vs. limit=15.0 2024-03-16 05:03:54,737 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer1.min_positive, batch_count=75393.33333333333, ans=0.025 2024-03-16 05:03:55,101 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=384, metric=11.79 vs. 
limit=22.5 2024-03-16 05:04:00,038 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.balancer2.prob, batch_count=75393.33333333333, ans=0.125 2024-03-16 05:04:06,154 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.bypass.scale_min, batch_count=75426.66666666667, ans=0.2 2024-03-16 05:04:08,437 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.116e+01 8.004e+01 1.023e+02 1.414e+02 2.411e+02, threshold=2.046e+02, percent-clipped=6.0 2024-03-16 05:04:23,941 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer1.min_positive, batch_count=75460.0, ans=0.025 2024-03-16 05:04:47,033 INFO [train_char.py:689] (0/2) Epoch 45, batch 350, loss[loss=0.05784, simple_loss=0.1117, pruned_loss=0.001995, over 24435.00 frames. ], tot_loss[loss=0.0614, simple_loss=0.114, pruned_loss=0.004385, over 3991469.00 frames. ], batch size: 158, lr: 7.10e-03, grad_scale: 32.0 2024-03-16 05:04:53,534 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer2.prob, batch_count=75526.66666666667, ans=0.125 2024-03-16 05:05:08,460 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_skip_rate, batch_count=75560.0, ans=0.0 2024-03-16 05:05:27,114 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=75626.66666666667, ans=0.0 2024-03-16 05:05:32,091 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_skip_rate, batch_count=75626.66666666667, ans=0.0 2024-03-16 05:05:39,457 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=75660.0, ans=0.0 2024-03-16 05:05:42,070 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer1.prob, batch_count=75660.0, ans=0.125 2024-03-16 05:05:49,523 INFO [train_char.py:689] (0/2) Epoch 45, batch 400, loss[loss=0.07111, simple_loss=0.131, pruned_loss=0.005586, over 24072.00 frames. ], tot_loss[loss=0.06221, simple_loss=0.1155, pruned_loss=0.004479, over 4180977.40 frames. ], batch size: 251, lr: 7.10e-03, grad_scale: 32.0 2024-03-16 05:05:51,022 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.balancer2.prob, batch_count=75693.33333333333, ans=0.125 2024-03-16 05:06:21,138 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.459e+01 7.661e+01 9.898e+01 1.194e+02 6.153e+02, threshold=1.980e+02, percent-clipped=3.0 2024-03-16 05:06:52,385 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module2.balancer1.prob, batch_count=75826.66666666667, ans=0.125 2024-03-16 05:06:54,709 INFO [train_char.py:689] (0/2) Epoch 45, batch 450, loss[loss=0.05586, simple_loss=0.1008, pruned_loss=0.005447, over 23973.00 frames. ], tot_loss[loss=0.0629, simple_loss=0.1165, pruned_loss=0.004631, over 4325805.55 frames. ], batch size: 381, lr: 7.09e-03, grad_scale: 32.0 2024-03-16 05:06:55,533 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=9.71 vs. 
limit=15.0 2024-03-16 05:06:59,892 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer1.prob, batch_count=75860.0, ans=0.125 2024-03-16 05:07:02,428 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass_mid.scale_min, batch_count=75860.0, ans=0.2 2024-03-16 05:07:26,948 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module1.balancer1.prob, batch_count=75926.66666666667, ans=0.125 2024-03-16 05:07:28,235 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass.scale_min, batch_count=75926.66666666667, ans=0.2 2024-03-16 05:07:59,724 INFO [train_char.py:689] (0/2) Epoch 45, batch 500, loss[loss=0.07001, simple_loss=0.1294, pruned_loss=0.005289, over 24095.00 frames. ], tot_loss[loss=0.06391, simple_loss=0.1185, pruned_loss=0.004641, over 4439519.11 frames. ], batch size: 199, lr: 7.08e-03, grad_scale: 32.0 2024-03-16 05:08:04,893 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=76026.66666666667, ans=0.1 2024-03-16 05:08:08,572 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-45.pt 2024-03-16 05:08:56,909 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=76050.0, ans=0.0 2024-03-16 05:08:57,862 INFO [train_char.py:689] (0/2) Epoch 46, batch 0, loss[loss=0.06081, simple_loss=0.1135, pruned_loss=0.004079, over 24281.00 frames. ], tot_loss[loss=0.06081, simple_loss=0.1135, pruned_loss=0.004079, over 24281.00 frames. ], batch size: 297, lr: 7.00e-03, grad_scale: 32.0 2024-03-16 05:08:57,862 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 05:09:10,156 INFO [zipformer.py:1858] (0/2) name=encoder.encoders.4.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([3.4707, 3.3092, 3.3287, 3.5514], device='cuda:0') 2024-03-16 05:09:11,750 INFO [train_char.py:721] (0/2) Epoch 46, validation: loss=0.05697, simple_loss=0.106, pruned_loss=0.003992, over 657665.00 frames. 2024-03-16 05:09:11,751 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 05:09:20,629 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer1.min_positive, batch_count=76050.0, ans=0.025 2024-03-16 05:09:35,240 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass.scale_min, batch_count=76083.33333333333, ans=0.2 2024-03-16 05:09:37,489 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.078e+01 7.476e+01 1.000e+02 1.292e+02 3.318e+02, threshold=2.000e+02, percent-clipped=4.0 2024-03-16 05:09:47,174 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_skip_rate, batch_count=76116.66666666667, ans=0.0 2024-03-16 05:09:55,450 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module1.balancer1.prob, batch_count=76150.0, ans=0.125 2024-03-16 05:10:10,414 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=9.59 vs. limit=15.0 2024-03-16 05:10:23,224 INFO [train_char.py:689] (0/2) Epoch 46, batch 50, loss[loss=0.05903, simple_loss=0.1106, pruned_loss=0.003722, over 23868.00 frames. 
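
Validation is computed at batch 0 of every epoch over the same fixed dev set, which is why each validation record above reports exactly 657665.00 frames; that constancy is what makes the per-epoch validation losses comparable. A schematic frame-weighted validation loop; the model call signature here is hypothetical, not train_char.py's:

    import torch

    @torch.no_grad()
    def compute_validation_loss(model, dev_loader, device="cuda:0"):
        """Frame-weighted average loss over a fixed dev set (schematic)."""
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        for batch in dev_loader:
            feats = batch["inputs"].to(device)
            # Hypothetical signature: returns (summed loss, frames in batch).
            loss_sum, num_frames = model(feats, batch["supervisions"])
            tot_loss += loss_sum.item()
            tot_frames += num_frames
        model.train()
        return tot_loss / tot_frames
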
], tot_loss[loss=0.06085, simple_loss=0.1134, pruned_loss=0.004159, over 1087110.69 frames. ], batch size: 107, lr: 7.00e-03, grad_scale: 32.0 2024-03-16 05:10:37,245 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=192, metric=11.58 vs. limit=15.0 2024-03-16 05:11:22,626 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=5.76 vs. limit=6.0 2024-03-16 05:11:32,437 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer1.prob, batch_count=76350.0, ans=0.125 2024-03-16 05:11:36,008 INFO [train_char.py:689] (0/2) Epoch 46, batch 100, loss[loss=0.06353, simple_loss=0.1193, pruned_loss=0.003857, over 24263.00 frames. ], tot_loss[loss=0.05955, simple_loss=0.1108, pruned_loss=0.004156, over 1914374.25 frames. ], batch size: 296, lr: 6.99e-03, grad_scale: 32.0 2024-03-16 05:11:53,057 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_module2.balancer2.prob, batch_count=76416.66666666667, ans=0.125 2024-03-16 05:11:56,610 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.653e+01 7.572e+01 9.774e+01 1.307e+02 2.580e+02, threshold=1.955e+02, percent-clipped=6.0 2024-03-16 05:12:14,691 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.bypass.skip_rate, batch_count=76483.33333333333, ans=0.09899494936611666 2024-03-16 05:12:26,151 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.balancer2.prob, batch_count=76516.66666666667, ans=0.125 2024-03-16 05:12:31,420 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.attention_skip_rate, batch_count=76516.66666666667, ans=0.0 2024-03-16 05:12:36,489 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=76516.66666666667, ans=0.125 2024-03-16 05:12:39,933 INFO [train_char.py:689] (0/2) Epoch 46, batch 150, loss[loss=0.07218, simple_loss=0.1361, pruned_loss=0.004127, over 24171.00 frames. ], tot_loss[loss=0.05968, simple_loss=0.1107, pruned_loss=0.004322, over 2557948.02 frames. ], batch size: 251, lr: 6.98e-03, grad_scale: 32.0 2024-03-16 05:12:50,523 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=76550.0, ans=0.1 2024-03-16 05:12:58,371 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=76583.33333333333, ans=0.125 2024-03-16 05:13:01,660 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.whiten, num_groups=1, num_channels=256, metric=5.66 vs. 
limit=12.0 2024-03-16 05:13:11,123 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=76616.66666666667, ans=0.1 2024-03-16 05:13:19,814 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer1.prob, batch_count=76650.0, ans=0.125 2024-03-16 05:13:26,445 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.hidden_balancer.prob, batch_count=76650.0, ans=0.125 2024-03-16 05:13:29,769 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=18.53 vs. limit=22.5 2024-03-16 05:13:37,712 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.conv_module2.whiten, num_groups=1, num_channels=192, metric=3.28 vs. limit=15.0 2024-03-16 05:13:40,980 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=4.46 vs. limit=15.0 2024-03-16 05:13:45,430 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass.scale_min, batch_count=76683.33333333333, ans=0.2 2024-03-16 05:13:47,784 INFO [train_char.py:689] (0/2) Epoch 46, batch 200, loss[loss=0.06035, simple_loss=0.1096, pruned_loss=0.00554, over 24386.00 frames. ], tot_loss[loss=0.06008, simple_loss=0.1115, pruned_loss=0.004306, over 3059538.54 frames. ], batch size: 165, lr: 6.98e-03, grad_scale: 32.0 2024-03-16 05:13:55,877 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=76716.66666666667, ans=0.1 2024-03-16 05:14:03,521 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=76750.0, ans=0.1 2024-03-16 05:14:09,634 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 6.091e+01 8.061e+01 9.831e+01 1.300e+02 2.584e+02, threshold=1.966e+02, percent-clipped=3.0 2024-03-16 05:14:27,287 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer1.prob, batch_count=76783.33333333333, ans=0.125 2024-03-16 05:14:31,245 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.bypass.scale_min, batch_count=76816.66666666667, ans=0.2 2024-03-16 05:14:54,039 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=76883.33333333333, ans=0.1 2024-03-16 05:14:55,018 INFO [train_char.py:689] (0/2) Epoch 46, batch 250, loss[loss=0.05385, simple_loss=0.09341, pruned_loss=0.007143, over 23819.00 frames. ], tot_loss[loss=0.06082, simple_loss=0.113, pruned_loss=0.004306, over 3446226.93 frames. 
], batch size: 439, lr: 6.97e-03, grad_scale: 16.0 2024-03-16 05:15:01,871 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_skip_rate, batch_count=76883.33333333333, ans=0.0 2024-03-16 05:15:15,784 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.scale_min, batch_count=76916.66666666667, ans=0.2 2024-03-16 05:15:17,279 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.bypass_mid.scale_min, batch_count=76916.66666666667, ans=0.2 2024-03-16 05:15:18,518 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=76916.66666666667, ans=0.1 2024-03-16 05:15:33,644 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module1.balancer2.prob, batch_count=76983.33333333333, ans=0.125 2024-03-16 05:15:37,685 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=384, metric=4.30 vs. limit=15.0 2024-03-16 05:15:55,277 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=77016.66666666667, ans=0.125 2024-03-16 05:15:58,632 INFO [train_char.py:689] (0/2) Epoch 46, batch 300, loss[loss=0.05701, simple_loss=0.1064, pruned_loss=0.00379, over 24388.00 frames. ], tot_loss[loss=0.06065, simple_loss=0.1128, pruned_loss=0.004224, over 3753356.03 frames. ], batch size: 158, lr: 6.96e-03, grad_scale: 16.0 2024-03-16 05:16:07,753 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=77050.0, ans=0.0 2024-03-16 05:16:13,901 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=77083.33333333333, ans=0.1 2024-03-16 05:16:19,142 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module2.balancer2.prob, batch_count=77083.33333333333, ans=0.125 2024-03-16 05:16:23,743 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 4.270e+01 7.466e+01 9.405e+01 1.135e+02 2.803e+02, threshold=1.881e+02, percent-clipped=4.0 2024-03-16 05:16:24,117 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer1.prob, batch_count=77083.33333333333, ans=0.125 2024-03-16 05:16:29,548 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.conv_module1.whiten, num_groups=1, num_channels=384, metric=4.25 vs. limit=15.0 2024-03-16 05:16:37,753 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=77116.66666666667, ans=0.125 2024-03-16 05:17:07,719 INFO [train_char.py:689] (0/2) Epoch 46, batch 350, loss[loss=0.06963, simple_loss=0.1329, pruned_loss=0.003176, over 24157.00 frames. ], tot_loss[loss=0.06126, simple_loss=0.1141, pruned_loss=0.004183, over 3990704.22 frames. ], batch size: 199, lr: 6.95e-03, grad_scale: 16.0 2024-03-16 05:17:10,918 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.self_attn2.whiten, num_groups=1, num_channels=384, metric=13.38 vs. 
limit=22.5 2024-03-16 05:17:14,406 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module2.balancer2.prob, batch_count=77216.66666666667, ans=0.125 2024-03-16 05:17:18,206 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.balancer2.prob, batch_count=77216.66666666667, ans=0.125 2024-03-16 05:17:19,345 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.attention_skip_rate, batch_count=77250.0, ans=0.0 2024-03-16 05:17:50,993 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer1.max_abs, batch_count=77316.66666666667, ans=10.0 2024-03-16 05:17:57,803 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=77316.66666666667, ans=0.125 2024-03-16 05:18:06,603 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=77350.0, ans=0.0 2024-03-16 05:18:13,738 INFO [train_char.py:689] (0/2) Epoch 46, batch 400, loss[loss=0.05892, simple_loss=0.1092, pruned_loss=0.004343, over 24213.00 frames. ], tot_loss[loss=0.06135, simple_loss=0.1142, pruned_loss=0.00423, over 4178953.80 frames. ], batch size: 328, lr: 6.95e-03, grad_scale: 32.0 2024-03-16 05:18:36,547 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.580e+01 8.290e+01 9.923e+01 1.317e+02 2.697e+02, threshold=1.985e+02, percent-clipped=6.0 2024-03-16 05:19:02,743 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_skip_rate, batch_count=77483.33333333333, ans=0.0 2024-03-16 05:19:15,089 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=77516.66666666667, ans=0.125 2024-03-16 05:19:16,905 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=512, metric=4.53 vs. limit=15.0 2024-03-16 05:19:18,496 INFO [train_char.py:689] (0/2) Epoch 46, batch 450, loss[loss=0.07089, simple_loss=0.1302, pruned_loss=0.005796, over 24124.00 frames. ], tot_loss[loss=0.06206, simple_loss=0.1153, pruned_loss=0.004409, over 4324142.37 frames. ], batch size: 236, lr: 6.94e-03, grad_scale: 32.0 2024-03-16 05:19:40,319 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=17.31 vs. limit=22.5 2024-03-16 05:19:40,348 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=10.48 vs. limit=15.0 2024-03-16 05:19:54,676 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.hidden_balancer.prob, batch_count=77616.66666666667, ans=0.125 2024-03-16 05:20:11,339 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.conv_module1.balancer2.prob, batch_count=77683.33333333333, ans=0.125 2024-03-16 05:20:12,445 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.nonlin_attention.balancer.prob, batch_count=77683.33333333333, ans=0.125 2024-03-16 05:20:21,998 INFO [train_char.py:689] (0/2) Epoch 46, batch 500, loss[loss=0.06988, simple_loss=0.1316, pruned_loss=0.004074, over 24035.00 frames. ], tot_loss[loss=0.06303, simple_loss=0.1173, pruned_loss=0.00437, over 4437450.27 frames. 
], batch size: 199, lr: 6.93e-03, grad_scale: 16.0 2024-03-16 05:20:31,197 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-46.pt 2024-03-16 05:21:21,561 INFO [train_char.py:689] (0/2) Epoch 47, batch 0, loss[loss=0.06409, simple_loss=0.1206, pruned_loss=0.00379, over 23751.00 frames. ], tot_loss[loss=0.06409, simple_loss=0.1206, pruned_loss=0.00379, over 23751.00 frames. ], batch size: 107, lr: 6.86e-03, grad_scale: 32.0 2024-03-16 05:21:21,562 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 05:21:35,117 INFO [train_char.py:721] (0/2) Epoch 47, validation: loss=0.05658, simple_loss=0.1057, pruned_loss=0.003729, over 657665.00 frames. 2024-03-16 05:21:35,118 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 05:21:39,542 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.scale_min, batch_count=77740.0, ans=0.2 2024-03-16 05:21:39,568 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module1.balancer2.prob, batch_count=77740.0, ans=0.125 2024-03-16 05:21:48,410 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.18 vs. limit=6.0 2024-03-16 05:21:50,206 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.425e+01 7.279e+01 8.678e+01 1.182e+02 2.496e+02, threshold=1.736e+02, percent-clipped=5.0 2024-03-16 05:22:16,322 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=192, metric=7.24 vs. limit=15.0 2024-03-16 05:22:23,463 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.ff2_skip_rate, batch_count=77840.0, ans=0.0 2024-03-16 05:22:23,502 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module2.balancer1.prob, batch_count=77840.0, ans=0.125 2024-03-16 05:22:34,362 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.self_attn_weights.pos_emb_skip_rate, batch_count=77873.33333333333, ans=0.0 2024-03-16 05:22:44,372 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=5.82 vs. limit=10.0 2024-03-16 05:22:44,904 INFO [train_char.py:689] (0/2) Epoch 47, batch 50, loss[loss=0.0563, simple_loss=0.1048, pruned_loss=0.00391, over 24377.00 frames. ], tot_loss[loss=0.06033, simple_loss=0.1123, pruned_loss=0.004192, over 1079653.58 frames. ], batch size: 129, lr: 6.85e-03, grad_scale: 32.0 2024-03-16 05:22:49,416 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module1.balancer2.prob, batch_count=77906.66666666667, ans=0.125 2024-03-16 05:23:09,959 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=384, metric=4.20 vs. limit=15.0 2024-03-16 05:23:20,143 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer1.prob, batch_count=77973.33333333333, ans=0.125 2024-03-16 05:23:35,878 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.ff3_skip_rate, batch_count=78006.66666666667, ans=0.0 2024-03-16 05:23:46,617 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.nonlin_attention.whiten2, num_groups=1, num_channels=512, metric=4.29 vs. 
limit=15.0 2024-03-16 05:23:56,049 INFO [train_char.py:689] (0/2) Epoch 47, batch 100, loss[loss=0.0669, simple_loss=0.1253, pruned_loss=0.004258, over 24114.00 frames. ], tot_loss[loss=0.06079, simple_loss=0.1133, pruned_loss=0.004167, over 1909177.49 frames. ], batch size: 199, lr: 6.84e-03, grad_scale: 32.0 2024-03-16 05:24:10,147 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 4.889e+01 7.269e+01 9.818e+01 1.349e+02 2.367e+02, threshold=1.964e+02, percent-clipped=9.0 2024-03-16 05:24:31,131 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=78140.0, ans=0.0 2024-03-16 05:24:34,814 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=78173.33333333333, ans=0.125 2024-03-16 05:24:36,019 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.convnext.layerdrop_rate, batch_count=78173.33333333333, ans=0.015 2024-03-16 05:24:36,130 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.nonlin_attention.balancer.prob, batch_count=78173.33333333333, ans=0.125 2024-03-16 05:24:52,471 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_skip_rate, batch_count=78206.66666666667, ans=0.0 2024-03-16 05:24:53,695 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.1.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00 2024-03-16 05:25:04,863 INFO [train_char.py:689] (0/2) Epoch 47, batch 150, loss[loss=0.05473, simple_loss=0.1052, pruned_loss=0.002134, over 24225.00 frames. ], tot_loss[loss=0.06042, simple_loss=0.1127, pruned_loss=0.004063, over 2554160.83 frames. ], batch size: 134, lr: 6.84e-03, grad_scale: 32.0 2024-03-16 05:25:06,405 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.balancer2.prob, batch_count=78240.0, ans=0.125 2024-03-16 05:25:08,956 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass.skip_rate, batch_count=78240.0, ans=0.07 2024-03-16 05:25:18,025 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass.skip_rate, batch_count=78273.33333333333, ans=0.04949747468305833 2024-03-16 05:25:37,408 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=78306.66666666667, ans=0.125 2024-03-16 05:25:47,530 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass_mid.scale_min, batch_count=78340.0, ans=0.2 2024-03-16 05:25:49,979 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=78340.0, ans=0.0 2024-03-16 05:26:08,649 INFO [train_char.py:689] (0/2) Epoch 47, batch 200, loss[loss=0.04721, simple_loss=0.08108, pruned_loss=0.006667, over 22937.00 frames. ], tot_loss[loss=0.06003, simple_loss=0.1118, pruned_loss=0.004154, over 3053792.06 frames. 
], batch size: 483, lr: 6.83e-03, grad_scale: 32.0 2024-03-16 05:26:22,774 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.616e+01 8.335e+01 9.932e+01 1.326e+02 2.634e+02, threshold=1.986e+02, percent-clipped=5.0 2024-03-16 05:26:43,452 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=78473.33333333333, ans=0.125 2024-03-16 05:27:01,445 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.bypass.scale_min, batch_count=78506.66666666667, ans=0.2 2024-03-16 05:27:15,373 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module2.balancer1.prob, batch_count=78573.33333333333, ans=0.125 2024-03-16 05:27:16,569 INFO [train_char.py:689] (0/2) Epoch 47, batch 250, loss[loss=0.05342, simple_loss=0.1024, pruned_loss=0.002235, over 24306.00 frames. ], tot_loss[loss=0.05976, simple_loss=0.1111, pruned_loss=0.004199, over 3440306.42 frames. ], batch size: 146, lr: 6.82e-03, grad_scale: 32.0 2024-03-16 05:27:27,148 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.bypass_mid.scale_min, batch_count=78573.33333333333, ans=0.2 2024-03-16 05:27:29,659 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=78606.66666666667, ans=0.1 2024-03-16 05:28:00,754 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.self_attn_weights.pos_emb_skip_rate, batch_count=78673.33333333333, ans=0.0 2024-03-16 05:28:24,289 INFO [train_char.py:689] (0/2) Epoch 47, batch 300, loss[loss=0.0525, simple_loss=0.09764, pruned_loss=0.003677, over 24005.00 frames. ], tot_loss[loss=0.06034, simple_loss=0.1123, pruned_loss=0.004203, over 3747604.03 frames. ], batch size: 381, lr: 6.82e-03, grad_scale: 32.0 2024-03-16 05:28:29,725 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.ff3_skip_rate, batch_count=78740.0, ans=0.0 2024-03-16 05:28:35,856 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_skip_rate, batch_count=78773.33333333333, ans=0.0 2024-03-16 05:28:38,178 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.527e+01 7.236e+01 9.244e+01 1.311e+02 2.742e+02, threshold=1.849e+02, percent-clipped=4.0 2024-03-16 05:28:52,106 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer2.prob, batch_count=78806.66666666667, ans=0.125 2024-03-16 05:28:52,654 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=512, metric=18.46 vs. limit=22.5 2024-03-16 05:29:00,134 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=10.91 vs. limit=15.0 2024-03-16 05:29:03,580 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.bypass.skip_rate, batch_count=78840.0, ans=0.07 2024-03-16 05:29:21,157 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.attention_skip_rate, batch_count=78873.33333333333, ans=0.0 2024-03-16 05:29:29,720 INFO [train_char.py:689] (0/2) Epoch 47, batch 350, loss[loss=0.05603, simple_loss=0.09837, pruned_loss=0.006842, over 23949.00 frames. ], tot_loss[loss=0.0608, simple_loss=0.1131, pruned_loss=0.004232, over 3985993.33 frames. 
], batch size: 407, lr: 6.81e-03, grad_scale: 32.0 2024-03-16 05:29:43,966 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=256, metric=3.41 vs. limit=15.0 2024-03-16 05:29:48,772 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.convnext.out_balancer.prob, batch_count=78940.0, ans=0.125 2024-03-16 05:29:55,403 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.balancer2.prob, batch_count=78940.0, ans=0.125 2024-03-16 05:30:34,674 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.skip_rate, batch_count=79040.0, ans=0.09899494936611666 2024-03-16 05:30:35,765 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass.skip_rate, batch_count=79073.33333333333, ans=0.04949747468305833 2024-03-16 05:30:36,856 INFO [train_char.py:689] (0/2) Epoch 47, batch 400, loss[loss=0.07017, simple_loss=0.1278, pruned_loss=0.006271, over 24136.00 frames. ], tot_loss[loss=0.06142, simple_loss=0.1141, pruned_loss=0.004349, over 4174645.33 frames. ], batch size: 188, lr: 6.81e-03, grad_scale: 32.0 2024-03-16 05:30:50,569 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.395e+01 8.041e+01 9.960e+01 1.452e+02 2.854e+02, threshold=1.992e+02, percent-clipped=5.0 2024-03-16 05:30:55,807 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer1.prob, batch_count=79106.66666666667, ans=0.125 2024-03-16 05:30:58,563 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=79106.66666666667, ans=0.125 2024-03-16 05:31:09,711 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.bypass_mid.scale_min, batch_count=79140.0, ans=0.2 2024-03-16 05:31:19,526 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module2.balancer1.max_abs, batch_count=79173.33333333333, ans=10.0 2024-03-16 05:31:26,969 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module1.balancer2.min_abs, batch_count=79206.66666666667, ans=0.5 2024-03-16 05:31:32,075 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=79206.66666666667, ans=0.125 2024-03-16 05:31:41,514 INFO [train_char.py:689] (0/2) Epoch 47, batch 450, loss[loss=0.05942, simple_loss=0.1116, pruned_loss=0.003615, over 24207.00 frames. ], tot_loss[loss=0.06254, simple_loss=0.1163, pruned_loss=0.004368, over 4323189.02 frames. ], batch size: 311, lr: 6.80e-03, grad_scale: 16.0 2024-03-16 05:31:45,591 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=79240.0, ans=0.1 2024-03-16 05:32:06,120 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=12.58 vs. limit=15.0 2024-03-16 05:32:08,517 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=5.02 vs. 
limit=10.0 2024-03-16 05:32:14,091 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_skip_rate, batch_count=79306.66666666667, ans=0.0 2024-03-16 05:32:18,224 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=79306.66666666667, ans=0.1 2024-03-16 05:32:22,086 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.ff2_skip_rate, batch_count=79340.0, ans=0.0 2024-03-16 05:32:28,016 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer1.prob, batch_count=79340.0, ans=0.125 2024-03-16 05:32:45,850 INFO [train_char.py:689] (0/2) Epoch 47, batch 500, loss[loss=0.0705, simple_loss=0.1301, pruned_loss=0.005433, over 24067.00 frames. ], tot_loss[loss=0.06341, simple_loss=0.1183, pruned_loss=0.004286, over 4436295.71 frames. ], batch size: 236, lr: 6.79e-03, grad_scale: 16.0 2024-03-16 05:32:46,436 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=256, metric=4.48 vs. limit=15.0 2024-03-16 05:32:54,527 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-47.pt 2024-03-16 05:33:48,260 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.nonlin_attention.balancer.prob, batch_count=79430.0, ans=0.125 2024-03-16 05:33:49,261 INFO [train_char.py:689] (0/2) Epoch 48, batch 0, loss[loss=0.0668, simple_loss=0.1207, pruned_loss=0.006438, over 24048.00 frames. ], tot_loss[loss=0.0668, simple_loss=0.1207, pruned_loss=0.006438, over 24048.00 frames. ], batch size: 250, lr: 6.72e-03, grad_scale: 32.0 2024-03-16 05:33:49,262 INFO [train_char.py:712] (0/2) Computing validation loss 2024-03-16 05:34:03,265 INFO [train_char.py:721] (0/2) Epoch 48, validation: loss=0.0567, simple_loss=0.106, pruned_loss=0.003686, over 657665.00 frames. 2024-03-16 05:34:03,265 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB 2024-03-16 05:34:10,091 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.334e+01 6.973e+01 8.467e+01 1.015e+02 1.727e+02, threshold=1.693e+02, percent-clipped=0.0 2024-03-16 05:34:28,064 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.ff3_skip_rate, batch_count=79463.33333333333, ans=0.0 2024-03-16 05:34:33,990 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.self_attn2.whiten, num_groups=1, num_channels=192, metric=16.18 vs. limit=22.5 2024-03-16 05:34:41,351 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=79496.66666666667, ans=0.125 2024-03-16 05:35:10,716 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module1.balancer2.prob, batch_count=79563.33333333333, ans=0.125 2024-03-16 05:35:13,181 INFO [train_char.py:689] (0/2) Epoch 48, batch 50, loss[loss=0.05378, simple_loss=0.1004, pruned_loss=0.003581, over 24306.00 frames. ], tot_loss[loss=0.05878, simple_loss=0.1082, pruned_loss=0.004679, over 1085274.23 frames. 
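
Checkpoints land on two cadences in this log: epoch-NN.pt once each epoch's 500-batch pass completes (epoch-47.pt above), plus periodic batch-count checkpoints such as the checkpoint-24000.pt written a little further on. A minimal sketch of that dual cadence; the save_every_n=4000 period is an assumption, and the helper below is not icefall's checkpoint.py:

    import torch

    def maybe_save_checkpoint(model, epoch, batch_idx_train,
                              exp_dir="zipformer/exp_val",
                              save_every_n=4000, end_of_epoch=False):
        """Two cadences, as in the log: checkpoint-<batch>.pt every
        save_every_n training batches, epoch-<n>.pt at each epoch end.
        (save_every_n=4000 is assumed; this is not icefall's checkpoint.py.)
        """
        if batch_idx_train > 0 and batch_idx_train % save_every_n == 0:
            torch.save(model.state_dict(),
                       f"{exp_dir}/checkpoint-{batch_idx_train}.pt")
        if end_of_epoch:
            torch.save(model.state_dict(), f"{exp_dir}/epoch-{epoch}.pt")
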
2024-03-16 05:35:13,181 INFO [train_char.py:689] (0/2) Epoch 48, batch 50, loss[loss=0.05378, simple_loss=0.1004, pruned_loss=0.003581, over 24306.00 frames. ], tot_loss[loss=0.05878, simple_loss=0.1082, pruned_loss=0.004679, over 1085274.23 frames. ], batch size: 146, lr: 6.71e-03, grad_scale: 32.0
2024-03-16 05:35:26,230 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=79596.66666666667, ans=0.125
2024-03-16 05:35:36,004 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.nonlin_attention.whiten2, num_groups=1, num_channels=512, metric=4.09 vs. limit=15.0
2024-03-16 05:35:48,802 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.ff2_skip_rate, batch_count=79663.33333333333, ans=0.0
2024-03-16 05:36:19,582 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_skip_rate, batch_count=79730.0, ans=0.0
2024-03-16 05:36:28,346 INFO [train_char.py:689] (0/2) Epoch 48, batch 100, loss[loss=0.05485, simple_loss=0.1018, pruned_loss=0.003965, over 24128.00 frames. ], tot_loss[loss=0.05972, simple_loss=0.1104, pruned_loss=0.004516, over 1905125.74 frames. ], batch size: 362, lr: 6.71e-03, grad_scale: 32.0
2024-03-16 05:36:34,908 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.124e+01 8.336e+01 1.011e+02 1.346e+02 2.310e+02, threshold=2.023e+02, percent-clipped=10.0
2024-03-16 05:36:45,435 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=79796.66666666667, ans=0.0
2024-03-16 05:36:48,170 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module2.balancer1.max_abs, batch_count=79796.66666666667, ans=10.0
2024-03-16 05:36:49,438 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.conv_skip_rate, batch_count=79796.66666666667, ans=0.0
2024-03-16 05:36:51,899 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.nonlin_attention.balancer.prob, batch_count=79796.66666666667, ans=0.125
2024-03-16 05:36:54,638 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.hidden_balancer.prob, batch_count=79830.0, ans=0.125
2024-03-16 05:37:10,803 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=9.39 vs. limit=15.0
2024-03-16 05:37:13,047 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.whiten, num_groups=1, num_channels=384, metric=7.22 vs. limit=12.0
2024-03-16 05:37:17,548 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.ff3_skip_rate, batch_count=79863.33333333333, ans=0.0
2024-03-16 05:37:31,866 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.out_combiner.scale_min, batch_count=79930.0, ans=0.2
2024-03-16 05:37:32,915 INFO [train_char.py:689] (0/2) Epoch 48, batch 150, loss[loss=0.05135, simple_loss=0.09874, pruned_loss=0.001984, over 24357.00 frames. ], tot_loss[loss=0.05972, simple_loss=0.111, pruned_loss=0.004244, over 2549862.00 frames. ], batch size: 129, lr: 6.70e-03, grad_scale: 32.0
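Each loss[...] line decomposes into the pruned-transducer terms: with this run's simple_loss_scale of 0.5, the total is half of the smoothed "simple" lattice loss plus the full pruned loss (the warm-up ramp on the pruned term has long finished by epoch 48). A worked check against the batch-100 tot_loss above; this assumes the post-warm-up combination and ignores the ramp:

```python
# Check of how the logged "loss" relates to its components, assuming
# loss = simple_loss_scale * simple_loss + pruned_loss after warm-up,
# with simple_loss_scale = 0.5 in this run.
simple_loss, pruned_loss = 0.1104, 0.004516   # "Epoch 48, batch 100" tot_loss
loss = 0.5 * simple_loss + pruned_loss
print(round(loss, 5))                         # 0.05972, matching the log
```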
2024-03-16 05:37:40,185 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=8.51 vs. limit=15.0
2024-03-16 05:37:40,993 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.bypass.skip_rate, batch_count=79930.0, ans=0.04949747468305833
2024-03-16 05:37:41,671 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_whiten, num_groups=1, num_channels=384, metric=14.31 vs. limit=15.0
2024-03-16 05:37:56,300 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer1.prob, batch_count=79963.33333333333, ans=0.125
2024-03-16 05:37:58,910 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/checkpoint-24000.pt
2024-03-16 05:38:03,531 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass_mid.scale_min, batch_count=79996.66666666667, ans=0.2
2024-03-16 05:38:08,714 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.bypass_mid.scale_min, batch_count=79996.66666666667, ans=0.2
2024-03-16 05:38:12,360 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=79996.66666666667, ans=0.1
2024-03-16 05:38:45,093 INFO [train_char.py:689] (0/2) Epoch 48, batch 200, loss[loss=0.0697, simple_loss=0.1297, pruned_loss=0.004832, over 24179.00 frames. ], tot_loss[loss=0.06023, simple_loss=0.1123, pruned_loss=0.004072, over 3057093.57 frames. ], batch size: 266, lr: 6.69e-03, grad_scale: 32.0
2024-03-16 05:38:51,308 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.008e+01 8.272e+01 1.042e+02 1.469e+02 3.123e+02, threshold=2.084e+02, percent-clipped=7.0
2024-03-16 05:39:23,266 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.ff2_skip_rate, batch_count=80196.66666666667, ans=0.0
2024-03-16 05:39:33,750 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.balancer.max_positive, batch_count=80196.66666666667, ans=0.95
2024-03-16 05:39:52,924 INFO [train_char.py:689] (0/2) Epoch 48, batch 250, loss[loss=0.06234, simple_loss=0.1162, pruned_loss=0.004257, over 21587.00 frames. ], tot_loss[loss=0.06062, simple_loss=0.1132, pruned_loss=0.004016, over 3441839.03 frames. ], batch size: 85, lr: 6.69e-03, grad_scale: 32.0
2024-03-16 05:40:04,686 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.conv_module1.balancer2.prob, batch_count=80296.66666666667, ans=0.125
2024-03-16 05:40:04,741 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module2.balancer1.max_abs, batch_count=80296.66666666667, ans=10.0
2024-03-16 05:40:17,745 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer2.prob, batch_count=80330.0, ans=0.125
2024-03-16 05:40:53,267 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass.skip_rate, batch_count=80396.66666666667, ans=0.09899494936611666
2024-03-16 05:40:56,929 INFO [train_char.py:689] (0/2) Epoch 48, batch 300, loss[loss=0.05842, simple_loss=0.1112, pruned_loss=0.002823, over 24449.00 frames. ], tot_loss[loss=0.06078, simple_loss=0.1135, pruned_loss=0.004026, over 3749923.55 frames. ], batch size: 165, lr: 6.68e-03, grad_scale: 32.0
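checkpoint-24000.pt above is the batch-cadence save (every save_every_n=4000 optimizer steps in this run) that runs alongside the per-epoch epoch-NN.pt files, with keep_last_k bounding how many batch checkpoints survive on disk. A rough sketch of that cadence; the helper name and state layout are hypothetical, not icefall's checkpoint.py:

```python
# Hypothetical sketch of the two checkpointing cadences visible in this log:
# per-epoch epoch-NN.pt files plus checkpoint-B.pt every save_every_n batches,
# pruned back to the newest keep_last_k batch checkpoints.
from pathlib import Path
import torch

def maybe_save(model, exp_dir: Path, batch_idx_train: int,
               save_every_n: int = 4000, keep_last_k: int = 30) -> None:
    if batch_idx_train == 0 or batch_idx_train % save_every_n != 0:
        return
    path = exp_dir / f"checkpoint-{batch_idx_train}.pt"
    torch.save({"model": model.state_dict(),
                "batch_idx_train": batch_idx_train}, path)
    # Drop the oldest batch checkpoints beyond keep_last_k.
    ckpts = sorted(exp_dir.glob("checkpoint-*.pt"),
                   key=lambda p: int(p.stem.split("-")[1]))
    for old in ckpts[:-keep_last_k]:
        old.unlink()
```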
2024-03-16 05:41:00,410 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=10.78 vs. limit=15.0
2024-03-16 05:41:03,443 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.399e+01 7.416e+01 8.993e+01 1.288e+02 2.345e+02, threshold=1.799e+02, percent-clipped=1.0
2024-03-16 05:41:03,689 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_skip_rate, batch_count=80430.0, ans=0.0
2024-03-16 05:42:07,117 INFO [train_char.py:689] (0/2) Epoch 48, batch 350, loss[loss=0.05657, simple_loss=0.1025, pruned_loss=0.005345, over 24056.00 frames. ], tot_loss[loss=0.06081, simple_loss=0.1134, pruned_loss=0.004105, over 3989956.58 frames. ], batch size: 361, lr: 6.67e-03, grad_scale: 32.0
2024-03-16 05:42:12,343 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.out_combiner.scale_min, batch_count=80596.66666666667, ans=0.2
2024-03-16 05:42:29,935 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer1.prob, batch_count=80630.0, ans=0.125
2024-03-16 05:42:46,650 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward1.out_proj.dropout_p, batch_count=80696.66666666667, ans=0.1
2024-03-16 05:43:12,046 INFO [train_char.py:689] (0/2) Epoch 48, batch 400, loss[loss=0.06298, simple_loss=0.1185, pruned_loss=0.003711, over 24323.00 frames. ], tot_loss[loss=0.06129, simple_loss=0.1143, pruned_loss=0.004117, over 4177512.75 frames. ], batch size: 172, lr: 6.67e-03, grad_scale: 32.0
2024-03-16 05:43:18,383 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.522e+01 8.124e+01 1.041e+02 1.434e+02 2.698e+02, threshold=2.082e+02, percent-clipped=12.0
2024-03-16 05:43:53,013 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.convnext.out_balancer.prob, batch_count=80863.33333333333, ans=0.125
2024-03-16 05:43:53,086 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer2.prob, batch_count=80863.33333333333, ans=0.125
2024-03-16 05:43:53,191 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.balancer2.prob, batch_count=80863.33333333333, ans=0.125
2024-03-16 05:43:58,060 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.bypass.scale_min, batch_count=80863.33333333333, ans=0.2
2024-03-16 05:44:05,630 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module1.balancer2.prob, batch_count=80896.66666666667, ans=0.125
2024-03-16 05:44:18,024 INFO [train_char.py:689] (0/2) Epoch 48, batch 450, loss[loss=0.06472, simple_loss=0.1213, pruned_loss=0.004088, over 24332.00 frames. ], tot_loss[loss=0.06193, simple_loss=0.1155, pruned_loss=0.004162, over 4324278.60 frames. ], batch size: 180, lr: 6.66e-03, grad_scale: 32.0
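The tot_loss[... over N frames] figures are not epoch averages: each logged batch folds into a geometrically decayed sum, so the frame counter saturates rather than growing linearly. That is why it settles near 4.3-4.4M frames late in each epoch, consistent with a decay of 1 - 1/reset_interval (reset_interval=200 in this run) and roughly 22-24k frames per batch. A sketch of that accumulation, modelled on icefall's MetricsTracker usage but not copied from it:

```python
# Sketch of the tot_loss bookkeeping: a dict of summed metrics where each
# new batch is added on top of a decayed running total. Assumption: decay
# factor 1 - 1/reset_interval with reset_interval = 200.
RESET_INTERVAL = 200

def update(tot: dict, batch: dict) -> dict:
    """tot/batch map metric names ('frames', 'loss', ...) to summed values."""
    decay = 1.0 - 1.0 / RESET_INTERVAL
    return {k: decay * tot.get(k, 0.0) + batch.get(k, 0.0)
            for k in set(tot) | set(batch)}

# The displayed loss is the frame-weighted average tot['loss'] / tot['frames'];
# with ~22k frames per batch the denominator saturates near 200 * 22k ~ 4.4M,
# matching the late-epoch "over 4.3M frames" values in the log.
```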
2024-03-16 05:44:29,324 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.scale_min, batch_count=80963.33333333333, ans=0.2
2024-03-16 05:44:35,693 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.ff2_skip_rate, batch_count=80963.33333333333, ans=0.0
2024-03-16 05:44:42,980 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass.scale_min, batch_count=80996.66666666667, ans=0.2
2024-03-16 05:44:56,707 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer1.prob, batch_count=81030.0, ans=0.125
2024-03-16 05:45:12,446 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward3.hidden_balancer.prob, batch_count=81063.33333333333, ans=0.125
2024-03-16 05:45:22,253 INFO [train_char.py:689] (0/2) Epoch 48, batch 500, loss[loss=0.06543, simple_loss=0.1248, pruned_loss=0.003017, over 24036.00 frames. ], tot_loss[loss=0.06272, simple_loss=0.117, pruned_loss=0.004205, over 4435532.08 frames. ], batch size: 199, lr: 6.66e-03, grad_scale: 32.0
2024-03-16 05:45:28,444 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.489e+01 7.774e+01 9.184e+01 1.191e+02 2.311e+02, threshold=1.837e+02, percent-clipped=1.0
2024-03-16 05:45:31,007 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-48.pt
2024-03-16 05:46:21,983 INFO [train_char.py:689] (0/2) Epoch 49, batch 0, loss[loss=0.06522, simple_loss=0.1236, pruned_loss=0.003417, over 24358.00 frames. ], tot_loss[loss=0.06522, simple_loss=0.1236, pruned_loss=0.003417, over 24358.00 frames. ], batch size: 180, lr: 6.59e-03, grad_scale: 32.0
2024-03-16 05:46:21,984 INFO [train_char.py:712] (0/2) Computing validation loss
2024-03-16 05:46:35,550 INFO [train_char.py:721] (0/2) Epoch 49, validation: loss=0.05711, simple_loss=0.1065, pruned_loss=0.003837, over 657665.00 frames.
2024-03-16 05:46:35,551 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB
2024-03-16 05:46:37,927 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.89 vs. limit=6.0
2024-03-16 05:46:45,217 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer1.prob, batch_count=81120.0, ans=0.125
2024-03-16 05:46:46,959 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=384, metric=17.90 vs. limit=22.5
2024-03-16 05:46:47,996 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.balancer1.prob, batch_count=81153.33333333333, ans=0.125
2024-03-16 05:47:10,568 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.conv_module1.balancer1.prob, batch_count=81186.66666666667, ans=0.125
2024-03-16 05:47:18,426 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.conv_module2.balancer2.min_abs, batch_count=81220.0, ans=0.5
2024-03-16 05:47:42,423 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.bypass_mid.scale_min, batch_count=81253.33333333333, ans=0.2
2024-03-16 05:47:47,396 INFO [train_char.py:689] (0/2) Epoch 49, batch 50, loss[loss=0.06173, simple_loss=0.1173, pruned_loss=0.003104, over 24255.00 frames. ], tot_loss[loss=0.05668, simple_loss=0.1052, pruned_loss=0.004061, over 1084969.11 frames. ], batch size: 296, lr: 6.58e-03, grad_scale: 32.0
2024-03-16 05:47:52,055 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=6.52 vs. limit=10.0
2024-03-16 05:48:15,965 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module1.balancer2.min_positive, batch_count=81320.0, ans=0.05
2024-03-16 05:48:19,082 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=10.97 vs. limit=15.0
2024-03-16 05:48:56,090 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.069e+01 7.760e+01 9.968e+01 1.337e+02 2.730e+02, threshold=1.994e+02, percent-clipped=13.0
2024-03-16 05:48:56,400 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.ff2_skip_rate, batch_count=81420.0, ans=0.0
2024-03-16 05:48:57,030 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=256, metric=22.90 vs. limit=22.5
2024-03-16 05:48:58,097 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=8.21 vs. limit=15.0
2024-03-16 05:48:58,611 INFO [train_char.py:689] (0/2) Epoch 49, batch 100, loss[loss=0.07175, simple_loss=0.1285, pruned_loss=0.007519, over 24141.00 frames. ], tot_loss[loss=0.05853, simple_loss=0.1091, pruned_loss=0.003995, over 1913562.41 frames. ], batch size: 279, lr: 6.57e-03, grad_scale: 32.0
2024-03-16 05:49:00,236 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=81453.33333333333, ans=0.1
2024-03-16 05:49:00,650 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.feed_forward2.out_whiten.whitening_limit, batch_count=81453.33333333333, ans=15.0
2024-03-16 05:49:28,533 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward2.hidden_balancer.prob, batch_count=81520.0, ans=0.125
2024-03-16 05:50:03,530 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn2.whiten, num_groups=1, num_channels=256, metric=15.97 vs. limit=22.5
2024-03-16 05:50:06,843 INFO [train_char.py:689] (0/2) Epoch 49, batch 150, loss[loss=0.0525, simple_loss=0.0968, pruned_loss=0.004104, over 24013.00 frames. ], tot_loss[loss=0.05841, simple_loss=0.1088, pruned_loss=0.004003, over 2558506.10 frames. ], batch size: 408, lr: 6.57e-03, grad_scale: 32.0
2024-03-16 05:51:00,315 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.conv_module1.whiten, num_groups=1, num_channels=512, metric=4.00 vs. limit=15.0
2024-03-16 05:51:08,309 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.911e+01 8.219e+01 1.077e+02 1.581e+02 3.213e+02, threshold=2.153e+02, percent-clipped=13.0
2024-03-16 05:51:08,590 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module2.balancer2.prob, batch_count=81753.33333333333, ans=0.125
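The scaling.py:214 lines print ScheduledFloat values: module hyper-parameters (balancer probabilities, skip rates, dropout) that follow a piecewise-linear schedule in the batch counter. By batch_count around 8e4 every schedule shown here has reached its final constant, which is why the same ans values keep repeating. A sketch under that reading; the breakpoints below are illustrative, not read from this log:

```python
# Sketch of ScheduledFloat behaviour: linear interpolation between
# (batch_count, value) breakpoints, constant outside the first/last point.
import bisect

class ScheduledFloatSketch:
    def __init__(self, *points):            # e.g. (0.0, 0.5), (8000.0, 0.125)
        self.xs = [p[0] for p in points]
        self.ys = [p[1] for p in points]

    def value(self, batch_count: float) -> float:
        if batch_count <= self.xs[0]:
            return self.ys[0]
        if batch_count >= self.xs[-1]:
            return self.ys[-1]               # the regime seen here (~8e4 >> 8e3)
        i = bisect.bisect_right(self.xs, batch_count)
        t = (batch_count - self.xs[i - 1]) / (self.xs[i] - self.xs[i - 1])
        return self.ys[i - 1] + t * (self.ys[i] - self.ys[i - 1])

prob = ScheduledFloatSketch((0.0, 0.5), (8000.0, 0.125))
print(prob.value(81453.33))                  # 0.125, as in the prob lines above
```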
2024-03-16 05:51:15,475 INFO [train_char.py:689] (0/2) Epoch 49, batch 200, loss[loss=0.05846, simple_loss=0.1115, pruned_loss=0.002727, over 24449.00 frames. ], tot_loss[loss=0.05811, simple_loss=0.1084, pruned_loss=0.003892, over 3061091.95 frames. ], batch size: 165, lr: 6.56e-03, grad_scale: 32.0
2024-03-16 05:51:15,776 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.bypass.skip_rate, batch_count=81786.66666666667, ans=0.09899494936611666
2024-03-16 05:51:20,430 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.whiten, num_groups=1, num_channels=192, metric=3.52 vs. limit=12.0
2024-03-16 05:51:20,939 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer2.prob, batch_count=81786.66666666667, ans=0.125
2024-03-16 05:51:32,737 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=2.15 vs. limit=12.0
2024-03-16 05:51:46,355 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.balancer1.prob, batch_count=81853.33333333333, ans=0.125
2024-03-16 05:52:08,054 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=81920.0, ans=0.125
2024-03-16 05:52:19,265 INFO [train_char.py:689] (0/2) Epoch 49, batch 250, loss[loss=0.07144, simple_loss=0.1332, pruned_loss=0.004842, over 24133.00 frames. ], tot_loss[loss=0.0595, simple_loss=0.111, pruned_loss=0.003984, over 3451859.66 frames. ], batch size: 251, lr: 6.55e-03, grad_scale: 32.0
2024-03-16 05:52:30,856 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_module1.balancer1.prob, batch_count=81986.66666666667, ans=0.125
2024-03-16 05:52:48,190 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_whiten, num_groups=1, num_channels=512, metric=10.39 vs. limit=15.0
2024-03-16 05:52:48,971 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.bypass_mid.scale_min, batch_count=82020.0, ans=0.2
2024-03-16 05:52:57,264 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_skip_rate, batch_count=82020.0, ans=0.0
2024-03-16 05:53:15,001 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=82086.66666666667, ans=0.1
2024-03-16 05:53:16,325 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.conv_module1.balancer1.prob, batch_count=82086.66666666667, ans=0.125
2024-03-16 05:53:23,609 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 4.621e+01 7.878e+01 9.786e+01 1.417e+02 2.568e+02, threshold=1.957e+02, percent-clipped=3.0
2024-03-16 05:53:26,171 INFO [train_char.py:689] (0/2) Epoch 49, batch 300, loss[loss=0.05882, simple_loss=0.11, pruned_loss=0.003823, over 24219.00 frames. ], tot_loss[loss=0.05986, simple_loss=0.1118, pruned_loss=0.003951, over 3758312.59 frames. ], batch size: 328, lr: 6.55e-03, grad_scale: 32.0
2024-03-16 05:53:43,426 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.conv_module1.whiten, num_groups=1, num_channels=384, metric=3.55 vs. limit=15.0
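The scaling.py:1023 lines report a whitening diagnostic: how far a module's output covariance is from isotropic. As recalled from icefall's Whiten module (so treat the details as an approximation), the statistic is mean(diag(C @ C)) / mean(diag(C))**2 for the feature covariance C, which equals 1.0 when the covariance is white and grows toward num_channels as variance concentrates in a few directions; a corrective penalty engages only when the metric exceeds the printed limit. A hedged re-implementation:

```python
# Sketch of the whitening metric behind the "metric=X vs. limit=Y" lines.
import torch

def whitening_metric(x: torch.Tensor) -> float:
    """x: (num_frames, num_channels) activations for one whitening group."""
    x = x - x.mean(dim=0)
    c = (x.T @ x) / x.shape[0]                 # covariance estimate
    mean_diag = c.diag().mean()                # mean eigenvalue
    mean_diag_sq = (c @ c).diag().mean()       # mean squared eigenvalue
    return (mean_diag_sq / (mean_diag**2 + 1e-20)).item()

x = torch.randn(1000, 256)                     # near-white features
print(whitening_metric(x))                     # close to 1, far under limit=15.0
```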
2024-03-16 05:54:15,777 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=288, metric=5.14 vs. limit=10.0
2024-03-16 05:54:29,309 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.bypass.skip_rate, batch_count=82253.33333333333, ans=0.09899494936611666
2024-03-16 05:54:31,602 INFO [train_char.py:689] (0/2) Epoch 49, batch 350, loss[loss=0.05192, simple_loss=0.09242, pruned_loss=0.005715, over 23957.00 frames. ], tot_loss[loss=0.0601, simple_loss=0.1121, pruned_loss=0.004047, over 3992118.04 frames. ], batch size: 407, lr: 6.54e-03, grad_scale: 32.0
2024-03-16 05:54:35,731 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=82286.66666666667, ans=0.1
2024-03-16 05:54:41,866 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.feed_forward1.out_proj.dropout_p, batch_count=82286.66666666667, ans=0.1
2024-03-16 05:54:51,743 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.4.encoder.layers.2.self_attn_weights, loss-sum=0.000e+00
2024-03-16 05:55:11,893 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=82386.66666666667, ans=0.1
2024-03-16 05:55:25,140 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=256, metric=5.72 vs. limit=15.0
2024-03-16 05:55:36,090 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.473e+01 7.996e+01 9.576e+01 1.213e+02 2.192e+02, threshold=1.915e+02, percent-clipped=3.0
2024-03-16 05:55:38,683 INFO [train_char.py:689] (0/2) Epoch 49, batch 400, loss[loss=0.058, simple_loss=0.1093, pruned_loss=0.003346, over 24412.00 frames. ], tot_loss[loss=0.06093, simple_loss=0.1137, pruned_loss=0.004099, over 4171513.65 frames. ], batch size: 158, lr: 6.54e-03, grad_scale: 32.0
2024-03-16 05:55:44,457 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.1.feed_forward2.out_whiten, num_groups=1, num_channels=384, metric=11.98 vs. limit=15.0
2024-03-16 05:56:01,192 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer2.prob, batch_count=82486.66666666667, ans=0.125
2024-03-16 05:56:01,271 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.hidden_balancer.prob, batch_count=82486.66666666667, ans=0.125
2024-03-16 05:56:18,997 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_skip_rate, batch_count=82553.33333333333, ans=0.0
2024-03-16 05:56:35,307 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.ff3_skip_rate, batch_count=82586.66666666667, ans=0.0
2024-03-16 05:56:42,691 INFO [train_char.py:689] (0/2) Epoch 49, batch 450, loss[loss=0.05727, simple_loss=0.1045, pruned_loss=0.004996, over 24260.00 frames. ], tot_loss[loss=0.06144, simple_loss=0.1147, pruned_loss=0.004093, over 4320204.68 frames. ], batch size: 328, lr: 6.53e-03, grad_scale: 32.0
2024-03-16 05:57:01,599 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_skip_rate, batch_count=82653.33333333333, ans=0.0
2024-03-16 05:57:05,229 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer2.prob, batch_count=82653.33333333333, ans=0.125
2024-03-16 05:57:09,783 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.balancer2.prob, batch_count=82686.66666666667, ans=0.125
2024-03-16 05:57:19,556 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.out_combiner.scale_min, batch_count=82686.66666666667, ans=0.2
2024-03-16 05:57:27,100 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.ff2_skip_rate, batch_count=82720.0, ans=0.0
2024-03-16 05:57:29,696 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.attention_skip_rate, batch_count=82720.0, ans=0.0
2024-03-16 05:57:33,450 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder_embed.convnext.hidden_balancer.prob, batch_count=82753.33333333333, ans=0.125
2024-03-16 05:57:45,188 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.454e+01 7.597e+01 9.286e+01 1.257e+02 1.873e+02, threshold=1.857e+02, percent-clipped=0.0
2024-03-16 05:57:47,732 INFO [train_char.py:689] (0/2) Epoch 49, batch 500, loss[loss=0.06759, simple_loss=0.1281, pruned_loss=0.003549, over 24045.00 frames. ], tot_loss[loss=0.06232, simple_loss=0.1164, pruned_loss=0.004107, over 4434633.31 frames. ], batch size: 250, lr: 6.52e-03, grad_scale: 32.0
2024-03-16 05:57:57,047 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-49.pt
2024-03-16 05:58:46,445 INFO [train_char.py:689] (0/2) Epoch 50, batch 0, loss[loss=0.05164, simple_loss=0.09345, pruned_loss=0.004918, over 23954.00 frames. ], tot_loss[loss=0.05164, simple_loss=0.09345, pruned_loss=0.004918, over 23954.00 frames. ], batch size: 407, lr: 6.46e-03, grad_scale: 32.0
2024-03-16 05:58:46,446 INFO [train_char.py:712] (0/2) Computing validation loss
2024-03-16 05:58:55,693 INFO [zipformer.py:1858] (0/2) name=encoder.encoders.5.encoder.layers.0.self_attn_weights, attn_weights_entropy = tensor([4.5270, 3.9463, 4.2122, 4.0785], device='cuda:0')
2024-03-16 05:59:03,820 INFO [train_char.py:721] (0/2) Epoch 50, validation: loss=0.05626, simple_loss=0.1051, pruned_loss=0.003704, over 657665.00 frames.
2024-03-16 05:59:03,821 INFO [train_char.py:722] (0/2) Maximum memory allocated so far is 25229MB
2024-03-16 05:59:25,947 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=12.42 vs. limit=15.0
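During the epoch 50 validation pass the model also logs attn_weights_entropy, one entropy per attention head, as a quick gauge of how diffuse each head's attention distribution is. A sketch with assumed shapes (the 4-element tensor above suggests four heads at that layer; nothing else about the model code is implied):

```python
# Sketch of a per-head attention-entropy diagnostic. Uniform attention over
# T=512 keys would give ln(512) ~ 6.24 nats; the ~4.0-4.5 values logged above
# indicate moderately concentrated heads.
import torch

def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
    """attn: (num_heads, num_queries, num_keys), rows softmax-normalized."""
    ent = -(attn * (attn + 1e-20).log()).sum(dim=-1)   # nats, per query
    return ent.mean(dim=-1)                            # one value per head

attn = torch.softmax(torch.randn(4, 100, 512), dim=-1)
print(attn_weights_entropy(attn))                      # 4 per-head entropies
```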
2024-03-16 05:59:35,941 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.5.encoder.layers.0.self_attn_weights, loss-sum=0.000e+00
2024-03-16 05:59:37,222 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.feed_forward1.out_proj.dropout_p, batch_count=82876.66666666667, ans=0.1
2024-03-16 05:59:48,308 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass.skip_rate, batch_count=82910.0, ans=0.07
2024-03-16 05:59:57,660 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.conv_skip_rate, batch_count=82943.33333333333, ans=0.0
2024-03-16 06:00:00,235 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward1.out_proj.dropout_p, batch_count=82943.33333333333, ans=0.1
2024-03-16 06:00:10,884 INFO [train_char.py:689] (0/2) Epoch 50, batch 50, loss[loss=0.0666, simple_loss=0.1269, pruned_loss=0.00313, over 24151.00 frames. ], tot_loss[loss=0.06012, simple_loss=0.1127, pruned_loss=0.003763, over 1083102.68 frames. ], batch size: 251, lr: 6.45e-03, grad_scale: 32.0
2024-03-16 06:00:16,896 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.3.self_attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=4.67 vs. limit=6.0
2024-03-16 06:00:36,779 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.feed_forward3.out_whiten, num_groups=1, num_channels=256, metric=16.75 vs. limit=15.0
2024-03-16 06:01:12,105 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 4.807e+01 7.372e+01 1.027e+02 1.338e+02 2.470e+02, threshold=2.053e+02, percent-clipped=9.0
2024-03-16 06:01:23,366 INFO [train_char.py:689] (0/2) Epoch 50, batch 100, loss[loss=0.07073, simple_loss=0.134, pruned_loss=0.003725, over 24241.00 frames. ], tot_loss[loss=0.05879, simple_loss=0.1098, pruned_loss=0.003866, over 1904056.68 frames. ], batch size: 212, lr: 6.45e-03, grad_scale: 32.0
2024-03-16 06:01:54,648 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.bypass_mid.scale_min, batch_count=83210.0, ans=0.2
2024-03-16 06:01:55,919 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.conv_module2.balancer1.prob, batch_count=83210.0, ans=0.125
2024-03-16 06:01:57,124 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward1.out_proj.dropout_p, batch_count=83210.0, ans=0.1
2024-03-16 06:01:57,158 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass_mid.scale_min, batch_count=83210.0, ans=0.2
2024-03-16 06:01:58,424 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.ff3_skip_rate, batch_count=83210.0, ans=0.0
2024-03-16 06:02:09,950 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.conv_module1.balancer2.prob, batch_count=83243.33333333333, ans=0.125
2024-03-16 06:02:17,561 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.bypass_mid.scale_min, batch_count=83276.66666666667, ans=0.2
2024-03-16 06:02:21,493 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module2.balancer1.prob, batch_count=83276.66666666667, ans=0.125
2024-03-16 06:02:22,725 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.conv_skip_rate, batch_count=83276.66666666667, ans=0.0
2024-03-16 06:02:27,573 INFO [train_char.py:689] (0/2) Epoch 50, batch 150, loss[loss=0.06733, simple_loss=0.1279, pruned_loss=0.003388, over 24048.00 frames. ], tot_loss[loss=0.05875, simple_loss=0.1101, pruned_loss=0.003683, over 2542552.51 frames. ], batch size: 236, lr: 6.44e-03, grad_scale: 32.0
2024-03-16 06:02:29,012 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module1.balancer2.min_abs, batch_count=83310.0, ans=0.5
2024-03-16 06:02:29,704 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.0.layers.1.self_attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=4.76 vs. limit=6.0
2024-03-16 06:02:42,111 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.0.bypass_mid.scale_min, batch_count=83343.33333333333, ans=0.2
2024-03-16 06:03:24,165 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 4.731e+01 7.051e+01 1.024e+02 1.368e+02 2.363e+02, threshold=2.048e+02, percent-clipped=8.0
2024-03-16 06:03:35,867 INFO [train_char.py:689] (0/2) Epoch 50, batch 200, loss[loss=0.05173, simple_loss=0.09402, pruned_loss=0.004725, over 23787.00 frames. ], tot_loss[loss=0.05929, simple_loss=0.111, pruned_loss=0.003771, over 3044727.11 frames. ], batch size: 439, lr: 6.43e-03, grad_scale: 32.0
2024-03-16 06:03:58,204 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.feed_forward2.hidden_balancer.prob, batch_count=83510.0, ans=0.125
2024-03-16 06:03:58,722 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.feed_forward2.out_whiten, num_groups=1, num_channels=512, metric=12.18 vs. limit=15.0
2024-03-16 06:04:42,926 INFO [train_char.py:689] (0/2) Epoch 50, batch 250, loss[loss=0.06139, simple_loss=0.1079, pruned_loss=0.007463, over 24144.00 frames. ], tot_loss[loss=0.0599, simple_loss=0.1121, pruned_loss=0.003843, over 3440162.15 frames. ], batch size: 362, lr: 6.43e-03, grad_scale: 32.0
2024-03-16 06:04:44,813 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.self_attn1.whiten, num_groups=1, num_channels=256, metric=13.79 vs. limit=22.5
2024-03-16 06:05:04,642 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.nonlin_attention.balancer.prob, batch_count=83676.66666666667, ans=0.125
2024-03-16 06:05:06,221 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.feed_forward1.out_whiten, num_groups=1, num_channels=256, metric=5.06 vs. limit=15.0
2024-03-16 06:05:31,857 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=384, metric=12.41 vs. limit=15.0
2024-03-16 06:05:35,041 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.399e+01 7.830e+01 1.003e+02 1.417e+02 3.027e+02, threshold=2.006e+02, percent-clipped=7.0
2024-03-16 06:05:37,842 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.0.bypass_mid.scale_min, batch_count=83776.66666666667, ans=0.2
2024-03-16 06:05:37,965 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.out_combiner.scale_min, batch_count=83776.66666666667, ans=0.2
2024-03-16 06:05:41,718 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.conv_module2.balancer2.prob, batch_count=83776.66666666667, ans=0.125
2024-03-16 06:05:49,514 INFO [train_char.py:689] (0/2) Epoch 50, batch 300, loss[loss=0.05194, simple_loss=0.09741, pruned_loss=0.003231, over 23998.00 frames. ], tot_loss[loss=0.06012, simple_loss=0.1126, pruned_loss=0.003833, over 3743466.36 frames. ], batch size: 381, lr: 6.42e-03, grad_scale: 32.0
2024-03-16 06:05:56,204 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward3.hidden_balancer.prob, batch_count=83810.0, ans=0.125
2024-03-16 06:06:18,098 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module1.balancer2.min_positive, batch_count=83876.66666666667, ans=0.05
2024-03-16 06:06:18,826 INFO [scaling.py:1023] (0/2) Whitening: name=encoder_embed.out_whiten, num_groups=1, num_channels=192, metric=6.58 vs. limit=8.0
2024-03-16 06:06:19,381 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.conv_module1.balancer1.max_abs, batch_count=83876.66666666667, ans=10.0
2024-03-16 06:06:26,906 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.0.self_attn_weights.pos_emb_skip_rate, batch_count=83910.0, ans=0.0
2024-03-16 06:06:29,986 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.2.feed_forward3.out_whiten, num_groups=1, num_channels=512, metric=6.71 vs. limit=15.0
2024-03-16 06:06:32,531 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn2.whiten, num_groups=1, num_channels=512, metric=19.95 vs. limit=22.5
2024-03-16 06:06:45,823 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.nonlin_attention.balancer.prob, batch_count=83943.33333333333, ans=0.125
2024-03-16 06:06:48,388 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=83943.33333333333, ans=0.1
2024-03-16 06:06:55,889 INFO [train_char.py:689] (0/2) Epoch 50, batch 350, loss[loss=0.06697, simple_loss=0.1248, pruned_loss=0.00457, over 24123.00 frames. ], tot_loss[loss=0.06055, simple_loss=0.1134, pruned_loss=0.003839, over 3986146.87 frames. ], batch size: 279, lr: 6.42e-03, grad_scale: 32.0
2024-03-16 06:07:01,200 INFO [scaling.py:1119] (0/2) WithLoss: name=encoder.encoders.3.encoder.layers.3.self_attn_weights, loss-sum=0.000e+00
2024-03-16 06:07:10,167 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.0.self_attn1.whiten, num_groups=1, num_channels=256, metric=12.93 vs. limit=22.5
2024-03-16 06:07:34,014 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=84076.66666666667, ans=0.125
2024-03-16 06:07:40,988 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.bypass_mid.scale_min, batch_count=84076.66666666667, ans=0.2
2024-03-16 06:07:41,183 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.5.encoder.layers.1.whiten, num_groups=1, num_channels=256, metric=2.81 vs. limit=12.0
2024-03-16 06:07:45,346 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.1.encoder.layers.1.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=5.51 vs. limit=10.0
2024-03-16 06:07:49,371 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 4.894e+01 8.440e+01 1.085e+02 1.456e+02 2.350e+02, threshold=2.171e+02, percent-clipped=3.0
2024-03-16 06:07:50,030 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.4.encoder.layers.1.conv_module1.whiten, num_groups=1, num_channels=384, metric=2.93 vs. limit=15.0
2024-03-16 06:08:00,935 INFO [train_char.py:689] (0/2) Epoch 50, batch 400, loss[loss=0.0681, simple_loss=0.1307, pruned_loss=0.002736, over 24250.00 frames. ], tot_loss[loss=0.06096, simple_loss=0.1142, pruned_loss=0.003874, over 4176951.76 frames. ], batch size: 267, lr: 6.41e-03, grad_scale: 32.0
2024-03-16 06:08:01,290 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=84143.33333333333, ans=0.125
2024-03-16 06:08:01,765 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.3.encoder.layers.0.self_attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=5.72 vs. limit=6.0
2024-03-16 06:08:12,587 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.0.layers.1.feed_forward2.hidden_balancer.prob, batch_count=84176.66666666667, ans=0.125
2024-03-16 06:08:53,557 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.2.bypass.skip_rate, batch_count=84276.66666666667, ans=0.09899494936611666
2024-03-16 06:08:57,693 INFO [scaling.py:1023] (0/2) Whitening: name=encoder.encoders.2.encoder.layers.0.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=11.18 vs. limit=15.0
2024-03-16 06:09:05,562 INFO [train_char.py:689] (0/2) Epoch 50, batch 450, loss[loss=0.06569, simple_loss=0.1234, pruned_loss=0.004012, over 24079.00 frames. ], tot_loss[loss=0.0615, simple_loss=0.115, pruned_loss=0.003975, over 4322506.20 frames. ], batch size: 199, lr: 6.40e-03, grad_scale: 32.0
2024-03-16 06:09:10,545 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.2.encoder.layers.0.feed_forward2.hidden_balancer.prob, batch_count=84310.0, ans=0.125
2024-03-16 06:09:29,966 INFO [scaling.py:214] (0/2) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module2.balancer1.prob, batch_count=84376.66666666667, ans=0.125
2024-03-16 06:09:57,526 WARNING [optim.py:487] (0/2) Clipping_scale=2.0, grad-norm quartiles 5.533e+01 7.607e+01 9.613e+01 1.214e+02 2.346e+02, threshold=1.923e+02, percent-clipped=1.0
2024-03-16 06:10:09,394 INFO [train_char.py:689] (0/2) Epoch 50, batch 500, loss[loss=0.06923, simple_loss=0.1315, pruned_loss=0.00347, over 24144.00 frames. ], tot_loss[loss=0.06243, simple_loss=0.1168, pruned_loss=0.004012, over 4435481.84 frames. ], batch size: 223, lr: 6.40e-03, grad_scale: 32.0
2024-03-16 06:10:18,344 INFO [checkpoint.py:75] (0/2) Saving checkpoint to zipformer/exp_val/epoch-50.pt
2024-03-16 06:10:22,986 INFO [train_char.py:1026] (0/2) Done!