# whisper-large-v2-yodas-2
This model is a fine-tuned version of openai/whisper-large-v2 on the fleurs dataset. It achieves the following results on the evaluation set:
- Loss: 0.4643
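As a minimal usage sketch, the checkpoint can be loaded for inference with the 🤗 Transformers `pipeline` API; the audio file name below is a placeholder, not part of this repository.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for automatic speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="tgrhn/whisper-large-v2-yodas-2",
)

# "audio.wav" is a placeholder; any audio file readable by ffmpeg works.
result = asr("audio.wav")
print(result["text"])
```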
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
- mixed_precision_training: Native AMP
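For reference, these settings roughly correspond to the following `Seq2SeqTrainingArguments`. This is a reconstruction from the list above, not the exact training script; `output_dir` is an assumption, and dataset loading, the data collator, and the `Seq2SeqTrainer` setup are omitted.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstructed from the hyperparameter list above; output_dir is an
# assumption, everything else mirrors the reported values.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v2-yodas-2",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=6,
    fp16=True,  # "Native AMP" mixed-precision training
)
```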
### Training results
| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.8132        | 0.2473 | 1000  | 0.3183          |
| 0.7788        | 0.4947 | 2000  | 0.3530          |
| 0.7566        | 0.7420 | 3000  | 0.3373          |
| 0.7586        | 0.9894 | 4000  | 0.3444          |
| 0.5781        | 1.2367 | 5000  | 0.3332          |
| 0.5901        | 1.4840 | 6000  | 0.3637          |
| 0.5837        | 1.7314 | 7000  | 0.3439          |
| 0.5662        | 1.9787 | 8000  | 0.3573          |
| 0.3619        | 2.2261 | 9000  | 0.3648          |
| 0.3695        | 2.4734 | 10000 | 0.3754          |
| 0.3713        | 2.7208 | 11000 | 0.3572          |
| 0.3804        | 2.9681 | 12000 | 0.3732          |
| 0.2004        | 3.2154 | 13000 | 0.4276          |
| 0.1987        | 3.4628 | 14000 | 0.4003          |
| 0.2006        | 3.7101 | 15000 | 0.3896          |
| 0.2077        | 3.9575 | 16000 | 0.3951          |
| 0.0913        | 4.2048 | 17000 | 0.4249          |
| 0.0901        | 4.4521 | 18000 | 0.4335          |
| 0.0885        | 4.6995 | 19000 | 0.4430          |
| 0.0875        | 4.9468 | 20000 | 0.4345          |
| 0.032         | 5.1942 | 21000 | 0.4428          |
| 0.0345        | 5.4415 | 22000 | 0.4609          |
| 0.0343        | 5.6888 | 23000 | 0.4630          |
| 0.0324        | 5.9362 | 24000 | 0.4643          |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1