Dataset tags:
- Modalities: Text
- Formats: parquet
- Sub-tasks: extractive-qa
- Languages: English
- Libraries: Datasets, pandas
Commit 5fe18c4 by albertvillanova (HF staff)
1 Parent(s): 33c0018

Convert dataset sizes from base 2 to base 10 in the dataset card (#3)


- Convert dataset sizes from base 2 to base 10 in the dataset card (18135769b566299112d04b712e1b11aa58b1db36)

Files changed (1): README.md (+6, -6)
README.md CHANGED

@@ -104,9 +104,9 @@ dataset_info:
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:** 33.51 MB
-- **Size of the generated dataset:** 85.75 MB
-- **Total amount of disk used:** 119.27 MB
+- **Size of downloaded dataset files:** 35.14 MB
+- **Size of the generated dataset:** 89.92 MB
+- **Total amount of disk used:** 125.06 MB
 
 ### Dataset Summary
 
@@ -126,9 +126,9 @@ Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset,
 
 #### plain_text
 
-- **Size of downloaded dataset files:** 33.51 MB
-- **Size of the generated dataset:** 85.75 MB
-- **Total amount of disk used:** 119.27 MB
+- **Size of downloaded dataset files:** 35.14 MB
+- **Size of the generated dataset:** 89.92 MB
+- **Total amount of disk used:** 125.06 MB
 
 An example of 'train' looks as follows.
 ```
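The new figures follow from reinterpreting the old values as base-2 mebibyte counts (1 MiB = 1024² bytes) and re-expressing the same byte totals in base-10 megabytes (1 MB = 1000² bytes). A minimal sketch of that conversion (the helper name `mib_to_mb` is illustrative, not part of the repository):

```python
def mib_to_mb(mib: float) -> float:
    """Convert a size in MiB (1024**2 bytes) to MB (1000**2 bytes),
    rounded to two decimals as in the dataset card."""
    return round(mib * 1024**2 / 1000**2, 2)

# The three sizes changed in this commit:
for old in (33.51, 85.75, 119.27):
    print(f"{old} MiB -> {mib_to_mb(old)} MB")
```

Applying it to the old values reproduces the new ones exactly: 33.51 MiB → 35.14 MB, 85.75 MiB → 89.92 MB, 119.27 MiB → 125.06 MB.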