---
license: apache-2.0
datasets:
  - KELONMYOSA/dusha_emotion_audio
language:
  - ru
pipeline_tag: audio-classification
metrics:
  - accuracy
widget:
  - example_title: Emotion - "Neurtal"
    src: >-
      https://huggingface.co/KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru/resolve/main/neutral.mp3
  - example_title: Emotion - "Positive"
    src: >-
      https://huggingface.co/KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru/resolve/main/positive.mp3
  - example_title: Emotion - "Angry"
    src: >-
      https://huggingface.co/KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru/resolve/main/angry.mp3
  - example_title: Emotion - "Sad"
    src: >-
      https://huggingface.co/KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru/resolve/main/sad.mp3
  - example_title: Emotion - "Other"
    src: >-
      https://huggingface.co/KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru/resolve/main/other.mp3
---

# Speech Emotion Recognition

The model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for the Speech Emotion Recognition (SER) task.

The dataset used to fine-tune the original pre-trained model is the [DUSHA dataset](https://huggingface.co/datasets/KELONMYOSA/dusha_emotion_audio). It consists of about 125,000 audio recordings in Russian, labeled with the four basic emotions that usually appear in a dialog with a virtual assistant: happiness (positive), sadness, anger, and neutral. The model additionally predicts an "other" class, for five labels in total:

`emotions = ['neutral', 'positive', 'angry', 'sad', 'other']`
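
Since the model is tagged `audio-classification`, it should load through the standard 🤗 Transformers pipeline. Below is a minimal sketch, assuming this checkpoint works with the generic pipeline API; the model id and the example clip URL come from this repository, and ffmpeg must be installed for the pipeline to decode mp3 input:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the generic audio-classification pipeline.
classifier = pipeline(
    "audio-classification",
    model="KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru",
)

# The pipeline accepts a local file path or a URL to an audio file.
result = classifier(
    "https://huggingface.co/KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru/resolve/main/angry.mp3"
)
print(result)  # e.g. [{'label': 'angry', 'score': ...}, ...]
```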

It achieves the following results:

- Training Loss: 0.528700
- Validation Loss: 0.349617
- Accuracy: 0.901369