KELONMYOSA committed on
Commit 383d0b8
1 Parent: ccd8444

Update README.md

Files changed (1)
  1. README.md +16 -0
README.md CHANGED
@@ -30,6 +30,22 @@ The dataset used to fine-tune the original pre-trained model is the [DUSHA datas
  emotions = ['neutral', 'positive', 'angry', 'sad', 'other']
  ```

+ # How to use
+
+ ```python
+ from transformers.pipelines import pipeline
+
+ pipe = pipeline(model="KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru", trust_remote_code=True)
+
+ result = pipe("speech.wav")
+ print(result)
+ ```
+ ~~~
+ [{'label': 'neutral', 'score': 0.00318}, {'label': 'positive', 'score': 0.00376}, {'label': 'sad', 'score': 0.00145}, {'label': 'angry', 'score': 0.98984}, {'label': 'other', 'score': 0.00176}]
+ ~~~
+
+ # Evaluation
+
  It achieves the following results:
  - Training Loss: 0.528700
  - Validation Loss: 0.349617
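
Beyond the committed snippet, here is a minimal sketch of batching the same pipeline over several recordings and keeping only the top-scoring emotion per file. It assumes the pipeline call and the label/score output format shown in the diff above; the .wav paths are hypothetical placeholders.

```python
# Minimal sketch: reuses the pipeline call from the committed README example.
# The .wav paths are hypothetical placeholders; the output format (a list of
# {'label': ..., 'score': ...} dicts) is the one shown in the diff above.
from transformers.pipelines import pipeline

pipe = pipeline(model="KELONMYOSA/wav2vec2-xls-r-300m-emotion-ru", trust_remote_code=True)

for path in ["call_01.wav", "call_02.wav"]:
    scores = pipe(path)                                # [{'label': ..., 'score': ...}, ...]
    top = max(scores, key=lambda item: item["score"])  # highest-confidence emotion
    print(f"{path}: {top['label']} ({top['score']:.3f})")
```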