RicardoRei committed on
Commit
f3ab5f3
1 Parent(s): 0a1fb01

Update README.md

Files changed (1)
  1. README.md +7 -6
README.md CHANGED
@@ -36,7 +36,7 @@ tags:
 
 # Dataset Summary
 
-This dataset contains all MQM human annotations from previous [WMT Metrics shared tasks](https://wmt-metrics-task.github.io/) and the MQM annotations from [Experts, Errors, and Context](https://aclanthology.org/2021.tacl-1.87/).
+This dataset contains all DA human annotations from previous WMT News Translation shared tasks.
 
 The data is organised into 8 columns:
 - lp: language pair
@@ -49,14 +49,13 @@ The data is organised into 8 columns:
 - domain: domain of the input text (e.g. news)
 - year: collection year
 
-You can also find the original data [here](https://github.com/google/wmt-mqm-human-evaluation). We recommend using the original repo if you are interested in annotation spans and not just the final score.
-
+You can also find the original data for each year in the results section https://www.statmt.org/wmt{YEAR}/results.html e.g: for 2020 data: [https://www.statmt.org/wmt20/results.html](https://www.statmt.org/wmt20/results.html)
 
 ## Python usage:
 
 ```python
 from datasets import load_dataset
-dataset = load_dataset("RicardoRei/wmt-mqm-human-evaluation", split="train")
+dataset = load_dataset("RicardoRei/wmt-da-human-evaluation", split="train")
 ```
 
 There is no standard train/test split for this dataset but you can easily split it according to year, language pair or domain. E.g. :
@@ -69,5 +68,7 @@ data = dataset.filter(lambda example: example["year"] == 2022)
 data = dataset.filter(lambda example: example["lp"] == "en-de")
 
 # split by domain
-data = dataset.filter(lambda example: example["domain"] == "ted")
-```
+data = dataset.filter(lambda example: example["domain"] == "news")
+```
+
+Note that most data is from News domain.
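The README's `dataset.filter(...)` calls all follow the same pattern: keep rows whose column equals a value. Since running the real snippet means downloading the full dataset, here is a minimal stand-alone sketch of that equality-filter pattern on hypothetical in-memory rows that reuse the README's column names; `filter_rows` and the sample rows are illustrative stand-ins, not part of the `datasets` library:

```python
# Hypothetical stand-in rows mirroring three of the dataset's columns
# (lp, domain, year); values here are invented for illustration.
rows = [
    {"lp": "en-de", "domain": "news", "year": 2020},
    {"lp": "zh-en", "domain": "news", "year": 2022},
    {"lp": "en-de", "domain": "ted", "year": 2022},
]

def filter_rows(rows, **criteria):
    """Keep rows matching every column == value pair, like
    dataset.filter(lambda example: example[col] == value)."""
    return [r for r in rows if all(r[k] == v for k, v in criteria.items())]

# split by year
by_year = filter_rows(rows, year=2022)

# split by language pair
by_lp = filter_rows(rows, lp="en-de")

# split by domain
by_domain = filter_rows(rows, domain="news")
```

With the real dataset, the equivalent calls are the `dataset.filter(lambda example: ...)` lines shown in the diff above.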