---
license: apache-2.0
datasets:
  - KELONMYOSA/dusha_emotion_audio
language:
  - ru
pipeline_tag: audio-classification
metrics:
  - accuracy
widget:
  - example_title: Emotion - "Neutral"
    src: >-
      https://huggingface.co/KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru/resolve/main/neutral.mp3
  - example_title: Emotion - "Positive"
    src: >-
      https://huggingface.co/KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru/resolve/main/positive.mp3
  - example_title: Emotion - "Angry"
    src: >-
      https://huggingface.co/KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru/resolve/main/angry.mp3
  - example_title: Emotion - "Sad"
    src: >-
      https://huggingface.co/KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru/resolve/main/sad.mp3
  - example_title: Emotion - "Other"
    src: >-
      https://huggingface.co/KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru/resolve/main/other.mp3
---

# Speech Emotion Recognition

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for Speech Emotion Recognition (SER).

The model was fine-tuned on the [DUSHA dataset](https://huggingface.co/datasets/KELONMYOSA/dusha_emotion_audio), which consists of about 125,000 Russian audio recordings labeled with the four basic emotions that typically appear in dialog with a virtual assistant: happiness (positive), sadness, anger, and neutral. Together with a catch-all `other` class, the model predicts five labels:

```python
emotions = ['neutral', 'positive', 'angry', 'sad', 'other']
```
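To inspect the data yourself, it can be pulled with the `datasets` library. This is a minimal sketch; the `train` split name is an assumption, so check the dataset card for the actual splits and columns:

```python
# Minimal sketch for browsing the DUSHA audio and labels.
# The "train" split name is an assumption -- verify it on the dataset card.
from datasets import load_dataset

ds = load_dataset("KELONMYOSA/dusha_emotion_audio", split="train")
print(ds)     # column names and number of rows
print(ds[0])  # first example: audio array, sampling rate, label
```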

## How to use

```python
from transformers import pipeline

# The task is inferred from the model's config; the custom pipeline code
# shipped with the checkpoint requires trust_remote_code=True.
pipe = pipeline(model="KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru", trust_remote_code=True)

result = pipe("speech.wav")
print(result)
```

```
[{'label': 'neutral', 'score': 0.00318}, {'label': 'positive', 'score': 0.00376}, {'label': 'sad', 'score': 0.00145}, {'label': 'angry', 'score': 0.98984}, {'label': 'other', 'score': 0.00176}]
```
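Since the pipeline returns a score for every class, picking the predicted emotion is a one-liner:

```python
# Select the highest-scoring emotion from the pipeline output above.
top = max(result, key=lambda r: r["score"])
print(f"{top['label']}: {top['score']:.3f}")  # angry: 0.990
```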

## Evaluation

It achieves the following results:

- Training Loss: 0.528700
- Validation Loss: 0.349617
- Accuracy: 0.901369
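For reference, a rough way to sanity-check the accuracy figure is to run the pipeline over the held-out split of the dataset. This is only a sketch: the `test` split name, the `audio`/`label` column names, and the `ClassLabel` encoding are assumptions about the dataset schema, not something this card documents.

```python
# Hedged sketch: recompute accuracy over an assumed "test" split.
# Column names ("audio", "label") and ClassLabel encoding are assumptions.
from datasets import load_dataset
from transformers import pipeline

pipe = pipeline(model="KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru", trust_remote_code=True)
test = load_dataset("KELONMYOSA/dusha_emotion_audio", split="test")
label_names = test.features["label"].names  # assumes a ClassLabel column

correct = 0
for ex in test:
    # If the custom pipeline rejects dict input, pass a path to a wav file instead.
    preds = pipe({"array": ex["audio"]["array"], "sampling_rate": ex["audio"]["sampling_rate"]})
    top = max(preds, key=lambda r: r["score"])
    correct += top["label"] == label_names[ex["label"]]

print(f"accuracy: {correct / len(test):.4f}")
```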