---
license: apache-2.0
datasets:
- KELONMYOSA/dusha_emotion_audio
language:
- ru
pipeline_tag: audio-classification
metrics:
- accuracy
widget:
- example_title: Neutral
src: https://huggingface.co/KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru/resolve/main/neutral.mp3
- example_title: Positive
src: https://huggingface.co/KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru/resolve/main/positive.mp3
- example_title: Angry
src: https://huggingface.co/KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru/resolve/main/angry.mp3
- example_title: Sad
src: https://huggingface.co/KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru/resolve/main/sad.mp3
- example_title: Other
src: https://huggingface.co/KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru/resolve/main/other.mp3
---
# Speech Emotion Recognition
The model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for Speech Emotion Recognition (SER).
It was fine-tuned on the [DUSHA dataset](https://huggingface.co/datasets/KELONMYOSA/dusha_emotion_audio), which consists of about 125,000 Russian audio recordings labeled with the four basic emotions that typically appear in a dialog with a virtual assistant: happiness (positive), sadness, anger, and neutral. Recordings outside these categories carry the additional "other" label, so the model distinguishes five classes:
```python
emotions = ['neutral', 'positive', 'angry', 'sad', 'other']
```
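For reference, a minimal sketch of loading the dataset with the `datasets` library; the split and column names used here are assumptions, not taken from this card, so check the dataset card for the exact schema:

```python
from datasets import load_dataset

# Load the dataset used for fine-tuning.
# The split name ("train") and column names ("audio", "label")
# are assumptions -- verify them against the dataset card.
dataset = load_dataset("KELONMYOSA/dusha_emotion_audio", split="train")

sample = dataset[0]
print(sample["audio"]["sampling_rate"], sample["label"])
```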
The model achieves the following results:
- Training Loss: 0.528700
- Validation Loss: 0.349617
- Accuracy: 0.901369
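
A minimal inference sketch using the generic `transformers` audio-classification pipeline (this card tags the model as `audio-classification`); the file path is a placeholder for any local Russian speech recording:

```python
from transformers import pipeline

# Load the fine-tuned model through the audio-classification pipeline.
classifier = pipeline(
    "audio-classification",
    model="KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru",
)

# "speech.wav" is a placeholder path to a local recording.
predictions = classifier("speech.wav")
print(predictions)  # e.g. [{"label": "neutral", "score": ...}, ...]
```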