
Osiris_asr_model

This model is a fine-tuned version of facebook/wav2vec2-base; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set (see the usage sketch after these results):

  • Loss: 3.0600
  • WER: 1.0
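
The card does not include a usage example, so the following is a minimal transcription sketch. It assumes the checkpoint is published as tiaTai/Osiris_asr_model with a CTC head and a bundled Wav2Vec2Processor; given the reported evaluation WER of 1.0, transcriptions from this checkpoint may not yet be useful.

```python
# Minimal transcription sketch (assumes the repo id tiaTai/Osiris_asr_model
# and that the checkpoint ships a CTC head plus a Wav2Vec2Processor).
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "tiaTai/Osiris_asr_model"  # assumed repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# wav2vec2-base expects 16 kHz mono audio; "sample.wav" is a placeholder path.
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```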

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 1e-05
  • train_batch_size: 10
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 20
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • training_steps: 1000
  • mixed_precision_training: Native AMP
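
As a rough illustration only, the sketch below maps these values onto a Hugging Face TrainingArguments object (Transformers 4.35.x argument names). The actual training script is not included in this card, so the output directory is a placeholder, and the model, datasets, and data collator would still need to be set up before passing these arguments to a Trainer.

```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Osiris_asr_model",   # placeholder output directory
    learning_rate=1e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 10 * 2 = 20
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=1000,
    fp16=True,                       # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=50,                   # matches the 50-step interval in the results table
    logging_steps=50,
)
# The default optimizer uses betas=(0.9, 0.999) and eps=1e-8, as listed above.
# Trainer(model=..., args=training_args, train_dataset=..., eval_dataset=...,
#         data_collator=..., tokenizer=...) would then run the fine-tuning.
```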

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER    |
|--------------:|-------:|-----:|----------------:|-------:|
| 45.8209       | 50.0   | 50   | 21.0347         | 1.0182 |
| 10.2898       | 100.0  | 100  | 5.1552          | 1.0    |
| 5.7188        | 150.0  | 150  | 4.9140          | 1.0    |
| 5.3358        | 200.0  | 200  | 4.7650          | 1.0    |
| 5.1381        | 250.0  | 250  | 4.6797          | 1.0    |
| 4.9841        | 300.0  | 300  | 4.6168          | 1.0    |
| 4.9255        | 350.0  | 350  | 4.5741          | 1.0    |
| 4.8353        | 400.0  | 400  | 4.5321          | 1.0    |
| 4.7704        | 450.0  | 450  | 4.5100          | 1.0    |
| 4.6257        | 500.0  | 500  | 3.9382          | 1.0    |
| 3.8106        | 550.0  | 550  | 3.3939          | 1.0    |
| 3.5095        | 600.0  | 600  | 3.2887          | 1.0    |
| 3.3716        | 650.0  | 650  | 3.1967          | 1.0    |
| 3.3025        | 700.0  | 700  | 3.1539          | 1.0    |
| 3.2532        | 750.0  | 750  | 3.1477          | 1.0    |
| 3.2086        | 800.0  | 800  | 3.0984          | 1.0    |
| 3.1889        | 850.0  | 850  | 3.0857          | 1.0    |
| 3.1620        | 900.0  | 900  | 3.0819          | 1.0    |
| 3.1411        | 950.0  | 950  | 3.0610          | 1.0    |
| 3.1397        | 1000.0 | 1000 | 3.0600          | 1.0    |

Framework versions

  • Transformers 4.35.2
  • PyTorch 2.1.0+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.0