leonardvorbeck committed
Commit: d285f7f
Parent: 9a7b850

Update README.md

Files changed (1): README.md (+26, -13)
README.md CHANGED
---
language: en
datasets:
- libri_light
- common_voice
- switchboard
- fisher
tags:
- speech
license: apache-2.0
---

# Wav2Vec2-Large-Robust

[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)

The large model pretrained on 16 kHz sampled speech audio.
Speech datasets from multiple domains were used to pretrain the model:
- [Libri-Light](https://github.com/facebookresearch/libri-light): open-source audio books from the LibriVox project; clean, read-out audio data
- [CommonVoice](https://huggingface.co/datasets/common_voice): crowd-sourced audio data; read-out text snippets
- [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62): telephone speech corpus; noisy telephone data
- [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19): conversational telephone speech; noisy telephone data

When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, such as Automatic Speech Recognition. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information.
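As a minimal sketch (not part of the original card), the snippet below shows one way to load the checkpoint with the `transformers` library and extract speech representations, resampling the input to 16 kHz first. The Hub id `facebook/wav2vec2-large-robust` and the file name `example.wav` are assumptions you would replace with your own.

```python
# Minimal sketch, assuming the checkpoint is on the Hub as
# "facebook/wav2vec2-large-robust" and "example.wav" exists locally.
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-robust")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-robust")

# Load the audio and resample to the 16 kHz rate the model was pretrained on.
waveform, sample_rate = torchaudio.load("example.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

# Extract contextualized representations (no fine-tuned head yet); use the first channel.
inputs = feature_extractor(waveform[0].numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # shape: (batch, frames, 1024)
```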

[Paper: Robust Wav2Vec2](https://arxiv.org/abs/2104.01027)

Authors: Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli

**Abstract**
Self-supervised learning of speech representations has been a very active research area, but most work is focused on a single domain, such as read audio books, for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications, since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at this https URL.

The original model can be found at https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.

# Usage

See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model.
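As a complement to the notebook, here is a hedged sketch of the fine-tuning setup it walks through: attaching a randomly initialized CTC head for ASR. The `vocab.json` file and the Hub id are assumptions you would adapt to your own data and namespace.

```python
# Hypothetical setup sketch: vocab.json would be built from your own transcripts.
from transformers import (
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)

tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16_000, padding_value=0.0,
    do_normalize=True, return_attention_mask=True,
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# Load the pretrained encoder and add a fresh CTC head sized to the vocabulary.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-robust",  # assumed Hub id
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)

# The convolutional feature encoder is usually kept frozen during fine-tuning.
model.freeze_feature_encoder()
```

From here, training proceeds as in the linked blog: a data collator that pads `input_values` and `labels` separately, plus the standard `Trainer` loop.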