system (HF staff) committed on
Commit 7049299 (0 parents)

Update files from the datasets library (from 1.16.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.16.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,191 @@
+ ---
+ pretty_name: IndoNLI
+ annotations_creators:
+ - expert-generated
+ - crowdsourced
+ language_creators:
+ - expert-generated
+ languages:
+ - id
+ licenses:
+ - cc-by-sa-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - natural-language-inference
+ paperswithcode_id: indonli
+ ---
+
+ # Dataset Card for IndoNLI
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Repository:** [GitHub](https://github.com/ir-nlp-csui/indonli)
+ - **Paper:** [EMNLP 2021](https://aclanthology.org/2021.emnlp-main.821/)
+ - **Point of Contact:** [GitHub](https://github.com/ir-nlp-csui/indonli)
+
+ ### Dataset Summary
+
+ IndoNLI is the first human-elicited Natural Language Inference (NLI) dataset for Indonesian.
+ IndoNLI is annotated by both crowd workers and experts. The expert-annotated data is used exclusively as a test set. It is designed to provide a challenging test-bed for Indonesian NLI by explicitly incorporating various linguistic phenomena such as numerical reasoning, structural changes, idioms, or temporal and spatial reasoning.
+
+ ### Supported Tasks and Leaderboards
+
+ - Natural Language Inference for Indonesian
+
+ ### Languages
+
+ Indonesian
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example from the `train` split looks as follows.
+
+ ```
+ {
+   "premise": "Keindahan alam yang terdapat di Gunung Batu Jonggol ini dapat Anda manfaatkan sebagai objek fotografi yang cantik.",
+   "hypothesis": "Keindahan alam tidak dapat difoto.",
+   "label": 2
+ }
+ ```
+
+ ### Data Fields
+
+ The data fields are:
+ - `premise`: a `string` feature
+ - `hypothesis`: a `string` feature
+ - `label`: a classification label, with possible values `entailment` (0), `neutral` (1), and `contradiction` (2); see the loading sketch below.
+
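+ As an illustration of these fields, here is a minimal sketch of loading the dataset with the `datasets` library and mapping the integer label back to its name (assuming the dataset is available under the `indonli` identifier):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the IndoNLI dataset (assumes it is published under the "indonli" identifier).
+ indonli = load_dataset("indonli")
+
+ # Inspect one training example: `premise` and `hypothesis` are strings,
+ # while `label` is stored as an integer class id.
+ example = indonli["train"][0]
+ print(example)
+
+ # Map the integer label back to its name ("entailment", "neutral", "contradiction").
+ label_feature = indonli["train"].features["label"]
+ print(label_feature.int2str(example["label"]))
+ ```
+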
+ ### Data Splits
+
+ The data is split across `train`, `valid` (exposed as `validation` by the loading script), `test_lay`, and `test_expert`; see the loading example below.
+
+ `test_expert` is written by expert annotators, whereas the rest are written by lay annotators.
+
+ | split       | # examples |
+ |-------------|-----------:|
+ | train       |      10330 |
+ | valid       |       2197 |
+ | test_lay    |       2201 |
+ | test_expert |       2984 |
+
+ A small subset of `test_expert` is used as a diagnostic tool. For more information, please visit https://github.com/ir-nlp-csui/indonli
+
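+ Because `test_lay` and `test_expert` are non-standard split names, here is a minimal sketch of requesting them explicitly (again assuming the `indonli` identifier):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load only the expert-annotated test split.
+ test_expert = load_dataset("indonli", split="test_expert")
+ print(test_expert.num_rows)  # expected: 2984
+
+ # Without `split`, a DatasetDict with all four splits is returned.
+ indonli = load_dataset("indonli")
+ print(list(indonli.keys()))  # ['train', 'validation', 'test_lay', 'test_expert']
+ ```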
+
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ Indonesian NLP is considered under-resourced. Until the release of IndoNLI, there was no publicly available human-annotated NLI dataset for Indonesian.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The premises were collected from Indonesian Wikipedia and from other public Indonesian datasets: the Indonesian PUD and GSD treebanks provided by [Universal Dependencies 2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105), and [IndoSum](https://github.com/kata-ai/indosum).
+
+ The hypotheses were written by annotators.
+
+ #### Who are the source language producers?
+
+ The data was produced by humans.
+
+ ### Annotations
+
+ #### Annotation process
+
+ We start by writing the hypothesis, given the premise and the target label. Then, we ask two different independent annotators to predict the label, given the premise and the hypothesis. If all three (the hypothesis writer's target label plus the two independent annotators' labels) agree, the annotation process ends for that sample. Otherwise, we incrementally ask additional annotators until three annotators agree on the label. If there is no majority consensus after five annotations, the sample is removed. A sketch of this stopping rule is shown below.
+
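+ A minimal, purely illustrative sketch of this stopping rule (the function and names are hypothetical, not the authors' code):
+
+ ```python
+ from collections import Counter
+ from typing import List, Optional
+
+ def aggregate_label(labels: List[str], required_votes: int = 3, max_annotations: int = 5) -> Optional[str]:
+     """Return the agreed label, or None if no label reaches 3 votes within 5 annotations.
+
+     `labels` lists the hypothesis writer's target label followed by each additional
+     annotator's label, in the order the annotations were collected.
+     """
+     for n in range(required_votes, min(len(labels), max_annotations) + 1):
+         label, votes = Counter(labels[:n]).most_common(1)[0]
+         if votes >= required_votes:
+             return label  # three annotations agree: keep the sample with this label
+     return None  # no consensus after five annotations: discard the sample
+
+ print(aggregate_label(["entailment", "entailment", "entailment"]))          # entailment
+ print(aggregate_label(["neutral", "contradiction", "neutral", "neutral"]))  # neutral
+ print(aggregate_label(["neutral", "contradiction", "entailment", "neutral", "contradiction"]))  # None
+ ```
+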
+ #### Who are the annotators?
+
+ Lay annotators were computer science students, and expert annotators were NLP scientists with 7+ years of research experience in NLP. All annotators are native speakers of Indonesian.
+ Additionally, expert annotators were explicitly instructed to provide challenging examples by incorporating various linguistic phenomena such as numerical reasoning, structural changes, idioms, or temporal and spatial reasoning. Annotators were compensated based on an hourly rate.
+
+ ### Personal and Sensitive Information
+
+ There might be some personal information coming from Wikipedia and news articles, especially information about famous or important people.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Discussion of Biases
+
+ IndoNLI was created using premise sentences taken from Wikipedia and news articles. These data sources may contain some bias.
+
+ ### Other Known Limitations
+
+ No other known limitations.
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ This dataset is the result of the collaborative work of Indonesian researchers from the University of Indonesia, kata.ai, New York University, Fondazione Bruno Kessler, and the University of St Andrews.
+
+
+ ### Licensing Information
+
+ CC-BY-SA 4.0.
+
+ Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
+
+ ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
+
+ No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
+
+ Please contact the authors for any further information about the dataset.
+
+ ### Citation Information
+
+ ```
+ @inproceedings{mahendra-etal-2021-indonli,
+     title = "{I}ndo{NLI}: A Natural Language Inference Dataset for {I}ndonesian",
+     author = "Mahendra, Rahmad and Aji, Alham Fikri and Louvan, Samuel and Rahman, Fahrurrozi and Vania, Clara",
+     booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
+     month = nov,
+     year = "2021",
+     address = "Online and Punta Cana, Dominican Republic",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2021.emnlp-main.821",
+     pages = "10511--10527",
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@afaji](https://github.com/afaji) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"indonli": {"description": " IndoNLI is the first human-elicited Natural Language Inference (NLI) dataset for Indonesian.\n IndoNLI is annotated by both crowd workers and experts. The expert-annotated data is used exclusively as a test set.\n It is designed to provide a challenging test-bed for Indonesian NLI by explicitly incorporating various linguistic phenomena such as numerical reasoning, structural changes, idioms, or temporal and spatial reasoning.\n", "citation": " @inproceedings{mahendra-etal-2021-indonli,\n title = \"{I}ndo{NLI}: A Natural Language Inference Dataset for {I}ndonesian\",\n author = \"Mahendra, Rahmad and Aji, Alham Fikri and Louvan, Samuel and Rahman, Fahrurrozi and Vania, Clara\",\n booktitle = \"Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing\",\n month = nov,\n year = \"2021\",\n address = \"Online and Punta Cana, Dominican Republic\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/2021.emnlp-main.821\",\n pages = \"10511--10527\",\n }\n", "homepage": "https://github.com/ir-nlp-csui/indonli", "license": "\n CC BY-SA 4.0\n\n Attribution \u2014 You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.\n\n ShareAlike \u2014 If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.\n\n No additional restrictions \u2014 You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.\n\n", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["entailment", "neutral", "contradiction"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "indo_nli", "config_name": "indonli", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2265687, "num_examples": 10330, "dataset_name": "indo_nli"}, "validation": {"name": "validation", "num_bytes": 465299, "num_examples": 2197, "dataset_name": "indo_nli"}, "test_lay": {"name": "test_lay", "num_bytes": 473849, "num_examples": 2201, "dataset_name": "indo_nli"}, "test_expert": {"name": "test_expert", "num_bytes": 911916, "num_examples": 2984, "dataset_name": "indo_nli"}}, "download_checksums": {"https://raw.githubusercontent.com/ir-nlp-csui/indonli/main/data/indonli/train.jsonl": {"num_bytes": 3788819, "checksum": "c18a2974a8683d283d0c8eb3d944354d3c5ce54ab6232613d932c0bd2a82abf8"}, "https://raw.githubusercontent.com/ir-nlp-csui/indonli/main/data/indonli/val.jsonl": {"num_bytes": 784776, "checksum": "56a553253453bb710fde569b72b0c78a2bc52898ed6eb92b7b6d695f42ed18f1"}, "https://raw.githubusercontent.com/ir-nlp-csui/indonli/main/data/indonli/test_lay.jsonl": {"num_bytes": 796010, "checksum": "e907d015dd05b3ecb56620f680eaaf0e416f5c88b3416ad541892d872292f4d1"}, "https://raw.githubusercontent.com/ir-nlp-csui/indonli/main/data/indonli/test_expert.jsonl": {"num_bytes": 1608272, "checksum": "70a798b476c0753c36b7f8cb0b15b69797f5679cc50988089699c47f718e01b1"}}, "download_size": 6977877, "post_processing_size": null, "dataset_size": 4116751, "size_in_bytes": 11094628}}
dummy/indonli/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8dc2d28bae31a95950edcada96d4f03c1485390b37b23cacf94029a3ea548a09
+ size 3745
indonli.py ADDED
@@ -0,0 +1,134 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """IndoNLI: A Natural Language Inference dataset for Indonesian."""
+
+
+ import json
+
+ import datasets
+
+
+ # BibTeX citation for the IndoNLI paper (EMNLP 2021).
+ _CITATION = """\
+ @inproceedings{mahendra-etal-2021-indonli,
+     title = "{I}ndo{NLI}: A Natural Language Inference Dataset for {I}ndonesian",
+     author = "Mahendra, Rahmad and Aji, Alham Fikri and Louvan, Samuel and Rahman, Fahrurrozi and Vania, Clara",
+     booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
+     month = nov,
+     year = "2021",
+     address = "Online and Punta Cana, Dominican Republic",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2021.emnlp-main.821",
+     pages = "10511--10527",
+ }
+ """
+
+ # Description of the dataset (from the dataset card).
+ _DESCRIPTION = """\
+ IndoNLI is the first human-elicited Natural Language Inference (NLI) dataset for Indonesian.
+ IndoNLI is annotated by both crowd workers and experts. The expert-annotated data is used exclusively as a test set.
+ It is designed to provide a challenging test-bed for Indonesian NLI by explicitly incorporating various linguistic phenomena such as numerical reasoning, structural changes, idioms, or temporal and spatial reasoning.
+ """
+
+ # Official homepage of the dataset.
+ _HOMEPAGE = "https://github.com/ir-nlp-csui/indonli"
+
+ # License of the dataset (CC BY-SA 4.0).
+ _LICENSE = """
+ CC BY-SA 4.0
+
+ Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
+
+ ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
+
+ No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
+
+ """
+
+ _TRAIN_DOWNLOAD_URL = "https://raw.githubusercontent.com/ir-nlp-csui/indonli/main/data/indonli/train.jsonl"
+
+ _VALID_DOWNLOAD_URL = "https://raw.githubusercontent.com/ir-nlp-csui/indonli/main/data/indonli/val.jsonl"
+
+ _TEST_LAY_DOWNLOAD_URL = "https://raw.githubusercontent.com/ir-nlp-csui/indonli/main/data/indonli/test_lay.jsonl"
+
+ _TEST_EXPERT_DOWNLOAD_URL = "https://raw.githubusercontent.com/ir-nlp-csui/indonli/main/data/indonli/test_expert.jsonl"
+
+
+ class IndoNLIConfig(datasets.BuilderConfig):
+     """BuilderConfig for IndoNLI."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for IndoNLI.
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(IndoNLIConfig, self).__init__(**kwargs)
+
+
+ class IndoNLI(datasets.GeneratorBasedBuilder):
+     """IndoNLI dataset -- Dataset providing natural language inference for Indonesian"""
+
+     BUILDER_CONFIGS = [
+         IndoNLIConfig(
+             name="indonli",
+             version=datasets.Version("1.1.0"),
+             description="IndoNLI: A Natural Language Inference Dataset for Indonesian",
+         ),
+     ]
+
+     def _info(self):
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "premise": datasets.Value("string"),
+                     "hypothesis": datasets.Value("string"),
+                     "label": datasets.ClassLabel(names=["entailment", "neutral", "contradiction"]),
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         train_path = dl_manager.download_and_extract(_TRAIN_DOWNLOAD_URL)
+         valid_path = dl_manager.download_and_extract(_VALID_DOWNLOAD_URL)
+         test_lay_path = dl_manager.download_and_extract(_TEST_LAY_DOWNLOAD_URL)
+         test_expert_path = dl_manager.download_and_extract(_TEST_EXPERT_DOWNLOAD_URL)
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": valid_path}),
+             datasets.SplitGenerator(name=datasets.Split("test_lay"), gen_kwargs={"filepath": test_lay_path}),
+             datasets.SplitGenerator(name=datasets.Split("test_expert"), gen_kwargs={"filepath": test_expert_path}),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields examples."""
+
+         with open(filepath, encoding="utf-8") as jsonl_file:
+             for id_, row in enumerate(jsonl_file):
+                 row_jsonl = json.loads(row)
+                 # The raw JSONL files store labels as "e", "n", or "c"; map them to the ClassLabel names.
+                 yield id_, {
+                     "premise": row_jsonl["premise"],
+                     "hypothesis": row_jsonl["hypothesis"],
+                     "label": {"e": "entailment", "n": "neutral", "c": "contradiction"}[row_jsonl["label"]],
+                 }
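+
+
+ # Usage sketch, assuming this script is saved locally as `indonli.py` and the
+ # `datasets` library is installed:
+ #
+ #   from datasets import load_dataset
+ #   indonli = load_dataset("./indonli.py")
+ #   print({name: split.num_rows for name, split in indonli.items()})
+ #   # splits: train, validation, test_lay, test_expert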