---
license: apache-2.0
language_creators:
- expert-generated
task_categories:
- text-generation
tags:
- generative error correction
- large language model
- LLaMA
pretty_name: Robust HyPoradise
size_categories:
- 100K<n<1M
language:
- en
---

# HypothesesParadise
This repo releases the Robust HyPoradise dataset from the paper "Large Language Models are Efficient Learners of Noise-Robust Speech Recognition."

NEW (Apr-18): We have released the training data, which follows the same format as the test data.
Due to their large size, the uploaded training files do not include the speech features.
Instead, we provide a script, `add_speech_feats_to_train_data.py`, to generate them from the raw speech (.wav) files.
In the script, you need to specify how each utterance ID maps to the path of its raw speech file.
The raw speech data are available here: [CHiME-4](https://entuedu-my.sharepoint.com/:f:/g/personal/yuchen005_e_ntu_edu_sg/EuLgMQbjrIJHk7dKPkjcDMIB4SYgXKKP8VBxyiZk3qgdgA),
[VB-DEMAND](https://datashare.ed.ac.uk/handle/10283/2791), [LS-FreeSound](https://github.com/archiki/Robust-E2E-ASR), [NOIZEUS](https://ecs.utdallas.edu/loizou/speech/noizeus/).
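
For reference, here is a minimal sketch of what such feature extraction could look like; it is not the released script. The file names, the `utterance_id` field, and the choice of 80-dim log-Mel filterbanks via torchaudio are assumptions, so adapt them to the actual data format and to `add_speech_feats_to_train_data.py`.

```python
# Minimal sketch (not the released script): attach speech features to the training data.
# Assumptions: the training data is a JSON list of entries keyed by an utterance ID,
# and each ID can be mapped to a .wav file under WAV_ROOT. Adjust the mapping and the
# feature extractor to match add_speech_feats_to_train_data.py and your corpus layout.
import json
from pathlib import Path

import torch
import torchaudio

WAV_ROOT = Path("/path/to/raw_speech")    # e.g. CHiME-4 / VB-DEMAND / LS-FreeSound / NOIZEUS
TRAIN_JSON = Path("train_data.json")      # hypothetical file name
OUT_PT = Path("train_data_with_feats.pt")


def utt_id_to_wav(utt_id: str) -> Path:
    """Map an utterance ID to its raw .wav path; this depends on the corpus layout."""
    return WAV_ROOT / f"{utt_id}.wav"


def extract_feats(wav_path: Path) -> torch.Tensor:
    """80-dim log-Mel filterbank features as a stand-in speech representation."""
    waveform, sample_rate = torchaudio.load(str(wav_path))
    return torchaudio.compliance.kaldi.fbank(
        waveform, num_mel_bins=80, sample_frequency=sample_rate
    )


entries = json.loads(TRAIN_JSON.read_text())
for entry in entries:
    entry["speech_feats"] = extract_feats(utt_id_to_wav(entry["utterance_id"]))

# Features are tensors, so save with torch rather than JSON.
torch.save(entries, OUT_PT)
```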

If you find this work related or useful for your research, please consider citing our ICLR 2024 paper. Thank you.

```bib
@inproceedings{hu2024large,
  title={Large Language Models are Efficient Learners of Noise-Robust Speech Recognition},
  author={Hu, Yuchen and Chen, Chen and Yang, Chao-Han Huck and Li, Ruizhe and Zhang, Chao and Chen, Pin-Yu and Chng, Eng Siong},
  booktitle={International Conference on Learning Representations},
  year={2024}
}
```