poonehmousavi committed on
Commit ef51095
1 Parent(s): a16effb

Update README.md

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -4,7 +4,7 @@ language:
 thumbnail: null
 tags:
 - automatic-speech-recognition
-- transducer
+- transformer
 - Attention
 - pytorch
 - speechbrain
@@ -14,7 +14,7 @@ datasets:
 metrics:
 - name: Test WER
   type: wer
-  value: ' 17.58'
+  value: ' 16.00'
 ---
 
 <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
@@ -29,7 +29,7 @@ The performance of the model is the following:
 
 | Release | Test CER | Test WER | GPUs |
 |:-------------:|:--------------:|:--------------:|:--------:|
-| 15.08.23 | 7.61 | 17.58 | 1xV100 32GB |
+| 15.08.23 | 4.20 | 16.00 | 1xV100 32GB |
 
 ## Credits
 The model is provided by [vitas.ai](https://www.vitas.ai/).
@@ -39,7 +39,7 @@ This ASR system is composed of 2 different but linked blocks:
 
 - Tokenizer (unigram) that transforms words into subword units and trained with
   the train transcriptions (train.tsv) of CommonVoice (en).
-- Transducers augment CTC by adding an autoregressive predictor and a join network.
+- transformer augment CTC by adding an autoregressive predictor and a join network.
 
 The system is trained with recordings sampled at 16kHz (single channel).
 The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
@@ -92,7 +92,7 @@ pip install -e .
 3. Run Training:
 
 ```
-cd recipes/CommonVoice/ASR/transducer
+cd recipes/CommonVoice/ASR/transformer
 python train.py hparams/train_fr.yaml --data_folder=your_data_folder
 ```
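
As a quick sanity check on the metric update in this commit (WER 17.58 → 16.00, CER 7.61 → 4.20), the implied relative error-rate reductions can be computed as follows; this is an illustrative sketch, not part of the model card:

```python
def rel_improvement(old: float, new: float) -> float:
    """Relative error-rate reduction from `old` to `new`, in percent."""
    return (old - new) / old * 100

# Metrics taken from the diff above.
wer_gain = rel_improvement(17.58, 16.00)
cer_gain = rel_improvement(7.61, 4.20)
print(f"WER reduced by {wer_gain:.1f}%, CER by {cer_gain:.1f}%")
# → WER reduced by 9.0%, CER by 44.8%
```

In other words, the transformer recipe cuts the character error rate nearly in half relative to the numbers previously listed, while the word error rate drops by about 9% relative.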