Masioki committed on
Commit
fca4f90
1 Parent(s): 5144da6

Update README.md

Files changed (1)
  1. README.md +27 -12
README.md CHANGED
@@ -3,7 +3,25 @@ tags:
 - generated_from_trainer
 model-index:
 - name: prosody_gttbsc_distilbert-uncased-best
-  results: []
+  results:
+  - task:
+      type: dialogue act classification
+    dataset:
+      name: asapp/slue-phase-2
+      type: hvb
+    metrics:
+    - name: F1 macro E2E
+      type: F1 macro
+      value: 66.43
+    - name: F1 macro GT
+      type: F1 macro
+      value: 72.70
+datasets:
+- asapp/slue-phase-2
+language:
+- en
+metrics:
+- f1-macro
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -11,26 +29,26 @@ should probably proofread and complete it, then remove this comment. -->
 
 # prosody_gttbsc_distilbert-uncased-best
 
-This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
+Ground truth text with prosody encoding cross attention for multi-label dialogue act classification (DAC)
 
 ## Model description
 
-More information needed
+Prosody encoder: 2-layer transformer encoder with initial dense projection
+Backbone: [DistilBert uncased](https://huggingface.co/distilbert/distilbert-base-uncased)
+Pooling: Self attention
+Multi-label classification head: 2 dense layers with two dropouts 0.3 and Tanh activation in between
 
-## Intended uses & limitations
-
-More information needed
 
 ## Training and evaluation data
 
-More information needed
+Trained on ground truth [slue-phase-2 hvb](https://huggingface.co/datasets/asapp/slue-phase-2).
+Evaluated on ground truth and normalized [Whisper small](https://huggingface.co/openai/whisper-small) transcripts.
 
-## Training procedure
 
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.000435261587050942
+- learning_rate: 0.0004
 - train_batch_size: 2
 - eval_batch_size: 2
 - seed: 42
@@ -41,9 +59,6 @@ The following hyperparameters were used during training:
 - num_epochs: 20
 - mixed_precision_training: Native AMP
 
-### Training results
-
-
 
 ### Framework versions
 
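The model description added in this commit is compact, so here is a minimal PyTorch sketch of the architecture it outlines: a prosody encoder (initial dense projection followed by a 2-layer transformer encoder), a DistilBERT backbone, cross attention between the two streams, self-attention pooling, and a 2-layer multi-label head with two 0.3 dropouts and a Tanh in between. Anything the card does not state (the class name, prosody feature dimension, attention direction, head counts, and fusion details) is an assumption here, not the author's implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class ProsodyDAC(nn.Module):
    """Hypothetical reconstruction of the architecture the card describes."""

    def __init__(self, num_labels: int, prosody_dim: int = 8, d_model: int = 768):
        super().__init__()
        # Backbone: DistilBERT uncased (hidden size 768)
        self.backbone = AutoModel.from_pretrained("distilbert-base-uncased")
        # Prosody encoder: initial dense projection + 2-layer transformer encoder
        self.prosody_proj = nn.Linear(prosody_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.prosody_encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Cross attention; the direction (text attends to prosody) is assumed
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        # Self-attention pooling over the fused token sequence
        self.pool_score = nn.Linear(d_model, 1)
        # Multi-label head: 2 dense layers, two 0.3 dropouts, Tanh in between
        self.head = nn.Sequential(
            nn.Dropout(0.3),
            nn.Linear(d_model, d_model),
            nn.Tanh(),
            nn.Dropout(0.3),
            nn.Linear(d_model, num_labels),
        )

    def forward(self, input_ids, attention_mask, prosody_feats):
        # (B, T_text, 768) contextual token states from DistilBERT
        text = self.backbone(input_ids=input_ids,
                             attention_mask=attention_mask).last_hidden_state
        # (B, T_pros, 768) encoded prosody frames
        pros = self.prosody_encoder(self.prosody_proj(prosody_feats))
        # Text queries attend over prosody keys/values
        fused, _ = self.cross_attn(query=text, key=pros, value=pros)
        # Attention-weighted mean over time (self-attention pooling)
        weights = torch.softmax(self.pool_score(fused), dim=1)
        pooled = (weights * fused).sum(dim=1)
        # Multi-label logits; train with BCEWithLogitsLoss, predict via sigmoid
        return self.head(pooled)
```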
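The hyperparameter list maps directly onto a 🤗 Transformers `TrainingArguments` object. A sketch, assuming the standard `Trainer` was used; the diff does not show the lines between `seed` and `num_epochs`, so nothing is assumed here about the optimizer or scheduler.

```python
from transformers import TrainingArguments

# Sketch only: the listed hyperparameters expressed as TrainingArguments.
# The diff hides the lines between `seed` and `num_epochs`, so optimizer and
# scheduler settings are left at library defaults rather than guessed.
args = TrainingArguments(
    output_dir="prosody_gttbsc_distilbert-uncased-best",
    learning_rate=4e-4,              # learning_rate: 0.0004
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    seed=42,                         # seed: 42
    num_train_epochs=20,             # num_epochs: 20
    fp16=True,                       # mixed_precision_training: Native AMP
)
```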
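The model-index reports F1 macro in two conditions, GT (ground truth transcripts, 72.70) and E2E (normalized Whisper small transcripts, 66.43). For a multi-label task like hvb dialogue act classification, that metric can be computed as sketched below; the arrays are toy placeholders, not SLUE data.

```python
import numpy as np
from sklearn.metrics import f1_score

# Multi-hot targets and predictions, shape (num_utterances, num_labels).
# Predictions would come from thresholding sigmoid(logits), e.g. at 0.5.
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 1]])

# Macro averaging: per-label F1, then the unweighted mean across labels
print(f1_score(y_true, y_pred, average="macro", zero_division=0))
```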