| field | dtype | observed range / classes |
|---|---|---|
| url | string | lengths 58-61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 72-75 |
| comments_url | string | lengths 67-70 |
| events_url | string | lengths 65-68 |
| html_url | string | lengths 46-51 |
| id | int64 | 599M-1.83B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-6.09k |
| title | string | lengths 1-290 |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 1 class |
| milestone | dict | |
| comments | int64 | 0-54 |
| created_at | string | length 20 |
| updated_at | string | length 20 |
| closed_at | string | length 20 |
| active_lock_reason | null | |
| body | string | lengths 0-228k |
| reactions | dict | |
| timeline_url | string | lengths 67-70 |
| performed_via_github_app | null | |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
| comments_text | sequence | |
https://api.github.com/repos/huggingface/datasets/issues/721
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/721/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/721/comments
https://api.github.com/repos/huggingface/datasets/issues/721/events
https://github.com/huggingface/datasets/issues/721
718,647,147
MDU6SXNzdWU3MTg2NDcxNDc=
721
feat(dl_manager): add support for ftp downloads
[]
closed
false
null
11
2020-10-10T15:50:20Z
2022-02-15T10:44:44Z
2022-02-15T10:44:43Z
null
I am working on a new dataset (#302) and encounter a problem downloading it.

```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.download_and_extract(_URL)
```

I get an error:

> ValueError: unable to parse ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz as a URL or as a local path

I checked, and indeed you don't consider `ftp` as a remote file.
https://github.com/huggingface/datasets/blob/4c2af707a6955cf4b45f83ac67990395327c5725/src/datasets/utils/file_utils.py#L188

Adding `ftp` to that list does not immediately solve the issue, so there probably needs to be some extra work.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/721/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/721/timeline
null
completed
null
null
false
[ "We only support http by default for downloading.\r\nIf you really need to use ftp, then feel free to use a library that allows to download through ftp in your dataset script (I see that you've started working on #722 , that's awesome !). The users will get a message to install the extra library when they load the dataset.\r\n\r\nTo make the download_manager work with a custom downloader, you can call `download_manager.download_custom` instead of `download_manager.download_and_extract`. The expected arguments are the following:\r\n```\r\nurl_or_urls: url or `list`/`dict` of urls to download and extract. Each\r\n url is a `str`.\r\ncustom_download: Callable with signature (src_url: str, dst_path: str) -> Any\r\n as for example `tf.io.gfile.copy`, that lets you download from google storage\r\n```\r\n", "Also maybe it coud be interesting to have a direct support of ftp inside the `datasets` library. Do you know any good libraries that we might consider adding as a (optional ?) dependency ?", "Downloading an `ftp` file is as simple as:\r\n```python\r\nimport urllib \r\nurllib.urlretrieve('ftp://server/path/to/file', 'file')\r\n```\r\n\r\nI believe this should be supported by the library, as its not using any dependency and is trivial amount of code.", "I know its unorthodox, but I added `ftp` download support to `file_utils` in the same PR https://github.com/huggingface/datasets/pull/722\r\nSo its possible to understand the interaction of the download component with the ftp download ability", "Awesome ! I'll take a look :)", "@AmitMY Can you now download the Phoenix2014 Dataset?", "@hoanganhpham1006 yes.\r\nSee pull request https://github.com/huggingface/datasets/pull/722 , it has a loader for this dataset, mostly ready.\r\nThere's one issue that delays it being merged - https://github.com/huggingface/datasets/issues/741 - regarding memory consumption.", "The problem which I have now is that this dataset seems does not allow to download? Can you share it with me pls", "The dataset loader is not yet ready, because of that issue.\r\nIf you want to just download the dataset the old-fashioned way, just go to: https://www-i6.informatik.rwth-aachen.de/ftp/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz (the ftp link is now broken, and its available over https)", "Got it, thank you so much!", "FTP downloads are supported." ]
https://api.github.com/repos/huggingface/datasets/issues/4612
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4612/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4612/comments
https://api.github.com/repos/huggingface/datasets/issues/4612/events
https://github.com/huggingface/datasets/issues/4612
1,290,984,660
I_kwDODunzps5M8tzU
4,612
Release 2.3.0 broke custom iterable datasets
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
3
2022-07-01T06:46:07Z
2022-07-05T15:08:21Z
2022-07-05T15:08:21Z
null
## Describe the bug
Trying to iterate examples from a custom iterable dataset fails due to a bug introduced in `torch_iterable_dataset.py` since the release of 2.3.0.

## Steps to reproduce the bug
```python
next(iter(custom_iterable_dataset))
```

## Expected results
`next(iter(custom_iterable_dataset))` should return examples from the dataset

## Actual results
```
/usr/local/lib/python3.7/dist-packages/datasets/formatting/dataset_wrappers/torch_iterable_dataset.py in _set_fsspec_for_multiprocess()
     16     See https://github.com/fsspec/gcsfs/issues/379
     17     """
---> 18     fsspec.asyn.iothread[0] = None
     19     fsspec.asyn.loop[0] = None
     20

AttributeError: module 'fsspec' has no attribute 'asyn'
```

## Environment info
- `datasets` version: 2.3.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 8.0.0
- Pandas version: 1.3.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4612/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4612/timeline
null
completed
null
null
false
[ "Apparently, `fsspec` does not allow access to attribute-based modules anymore, such as `fsspec.async`.\r\n\r\nHowever, this is a fairly simple fix:\r\n- Change the import to: `from fsspec import asyn`;\r\n- Change line 18 to: `asyn.iothread[0] = None`;\r\n- Change line 19 to `asyn.loop[0] = None`.", "Hi! I think it's easier to replace `import fsspec` with `import fsspec.asyn` and leave the rest unchanged. @gugarosa Are you interested in submitting a PR?", "Perfect, it is even better!\r\n\r\nJust submitted the PR: #4630.\r\n\r\nThank you!" ]
https://api.github.com/repos/huggingface/datasets/issues/675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/675/comments
https://api.github.com/repos/huggingface/datasets/issues/675/events
https://github.com/huggingface/datasets/issues/675
709,818,725
MDU6SXNzdWU3MDk4MTg3MjU=
675
Add custom dataset to NLP?
[]
closed
false
null
2
2020-09-27T21:22:50Z
2020-10-20T09:08:49Z
2020-10-20T09:08:49Z
null
Is it possible to add a custom dataset such as a .csv to the NLP library? Thanks.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/675/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/675/timeline
null
completed
null
null
false
[ "Yes you can have a look here: https://huggingface.co/docs/datasets/loading_datasets.html#csv-files", "No activity, closing" ]
https://api.github.com/repos/huggingface/datasets/issues/2751
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2751/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2751/comments
https://api.github.com/repos/huggingface/datasets/issues/2751/events
https://github.com/huggingface/datasets/pull/2751
959,021,262
MDExOlB1bGxSZXF1ZXN0NzAyMTk5MjA5
2,751
Update metadata for wikihow dataset
[]
closed
false
null
0
2021-08-03T11:31:57Z
2021-08-03T15:52:09Z
2021-08-03T15:52:09Z
null
Update metadata for wikihow dataset:
- Remove leading new line character in description and citation
- Update metadata JSON
- Remove no longer necessary `urls_checksums/checksums.txt` file

Related to #2748.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2751/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2751/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2751.diff", "html_url": "https://github.com/huggingface/datasets/pull/2751", "merged_at": "2021-08-03T15:52:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/2751.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2751" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1232
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1232/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1232/comments
https://api.github.com/repos/huggingface/datasets/issues/1232/events
https://github.com/huggingface/datasets/pull/1232
758,180,669
MDExOlB1bGxSZXF1ZXN0NTMzMzkyNTc0
1,232
Add Grail QA dataset
[]
closed
false
null
0
2020-12-07T05:46:45Z
2020-12-08T13:03:19Z
2020-12-08T13:03:19Z
null
For more information: https://dki-lab.github.io/GrailQA/
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1232/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1232/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1232.diff", "html_url": "https://github.com/huggingface/datasets/pull/1232", "merged_at": "2020-12-08T13:03:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/1232.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1232" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1127
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1127/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1127/comments
https://api.github.com/repos/huggingface/datasets/issues/1127/events
https://github.com/huggingface/datasets/pull/1127
757,229,684
MDExOlB1bGxSZXF1ZXN0NTMyNjQwMjMx
1,127
Add wikiqaar dataset
[]
closed
false
null
0
2020-12-04T16:26:18Z
2020-12-07T16:39:41Z
2020-12-07T16:39:41Z
null
Arabic Wiki Question Answering Corpus.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1127/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1127/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1127.diff", "html_url": "https://github.com/huggingface/datasets/pull/1127", "merged_at": "2020-12-07T16:39:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/1127.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1127" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1848
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1848/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1848/comments
https://api.github.com/repos/huggingface/datasets/issues/1848/events
https://github.com/huggingface/datasets/pull/1848
803,826,506
MDExOlB1bGxSZXF1ZXN0NTY5Njg5ODU1
1,848
Refactoring: Create config module
[]
closed
false
null
0
2021-02-08T18:43:51Z
2021-02-10T12:29:35Z
2021-02-10T12:29:35Z
null
Refactor configuration settings into their own module. This could be seen as a Pythonic singleton-like approach. Eventually a config instance class might be created.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1848/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1848/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1848.diff", "html_url": "https://github.com/huggingface/datasets/pull/1848", "merged_at": "2021-02-10T12:29:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/1848.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1848" }
true
[]
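A generic sketch of the "config module" pattern this PR describes: module-level settings act as a process-wide singleton because Python caches imported modules. The names below are illustrative, not the actual contents of `datasets/config.py`:

```python
# config.py - module-level settings shared across the whole process.
import os

# Values read once at import time; every `import config` sees the same object.
HF_DATASETS_CACHE = os.getenv("HF_DATASETS_CACHE", "~/.cache/huggingface/datasets")
IN_MEMORY_MAX_SIZE = int(os.getenv("HF_DATASETS_IN_MEMORY_MAX_SIZE", "0"))
```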
https://api.github.com/repos/huggingface/datasets/issues/5368
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5368/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5368/comments
https://api.github.com/repos/huggingface/datasets/issues/5368/events
https://github.com/huggingface/datasets/pull/5368
1,500,322,973
PR_kwDODunzps5FpZyx
5,368
Align remove columns behavior and input dict mutation in `map` with previous behavior
[]
closed
false
null
1
2022-12-16T14:28:47Z
2022-12-16T16:28:08Z
2022-12-16T16:25:12Z
null
Align the `remove_columns` behavior and input dict mutation in `map` with the behavior before https://github.com/huggingface/datasets/pull/5252.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5368/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5368/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5368.diff", "html_url": "https://github.com/huggingface/datasets/pull/5368", "merged_at": "2022-12-16T16:25:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/5368.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5368" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/1747
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1747/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1747/comments
https://api.github.com/repos/huggingface/datasets/issues/1747/events
https://github.com/huggingface/datasets/issues/1747
788,299,775
MDU6SXNzdWU3ODgyOTk3NzU=
1,747
datasets slicing with seed
[]
closed
false
null
2
2021-01-18T14:08:55Z
2022-10-05T12:37:27Z
2022-10-05T12:37:27Z
null
Hi, I need to slice a dataset with a random seed. I looked into the documentation here: https://huggingface.co/docs/datasets/splits.html but could not find a seed option. Could you please advise how I can get a slice for different seeds? Thank you. @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1747/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1747/timeline
null
completed
null
null
false
[ "Hi :) \r\nThe slicing API from https://huggingface.co/docs/datasets/splits.html doesn't shuffle the data.\r\nYou can shuffle and then take a subset of your dataset with\r\n```python\r\n# shuffle and take the first 100 examples\r\ndataset = dataset.shuffle(seed=42).select(range(100))\r\n```\r\n\r\nYou can find more information about shuffling and selecting rows in the documentation: https://huggingface.co/docs/datasets/processing.html#selecting-sorting-shuffling-splitting-rows", "thank you so much\n\nOn Mon, Jan 18, 2021 at 3:17 PM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> Hi :)\n> The slicing API doesn't shuffle the data.\n> You can shuffle and then take a subset of your dataset with\n>\n> # shuffle and take the first 100 examplesdataset = dataset.shuffle(seed=42).select(range(100))\n>\n> You can find more information about shuffling and selecting rows in the\n> documentation:\n> https://huggingface.co/docs/datasets/processing.html#selecting-sorting-shuffling-splitting-rows\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/1747#issuecomment-762278134>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AM3GZM5D5MDPLJGI4IG3UADS2Q7GPANCNFSM4WHLOZJQ>\n> .\n>\n" ]
https://api.github.com/repos/huggingface/datasets/issues/638
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/638/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/638/comments
https://api.github.com/repos/huggingface/datasets/issues/638/events
https://github.com/huggingface/datasets/issues/638
704,146,956
MDU6SXNzdWU3MDQxNDY5NTY=
638
GLUE/QQP dataset: NonMatchingChecksumError
[]
closed
false
null
1
2020-09-18T07:09:10Z
2020-09-18T11:37:07Z
2020-09-18T11:37:07Z
null
Hi @lhoestq, I know you are busy and there are also other important issues. But if this is easy to fix, I am shamelessly wondering if you can give me some help, so I can evaluate my models and restart my development cycle asap. 😚

datasets version: editable install of master at 9/17

`datasets.load_dataset('glue','qqp', cache_dir='./datasets')`

```
Downloading and preparing dataset glue/qqp (download: 57.73 MiB, generated: 107.02 MiB, post-processed: Unknown size, total: 164.75 MiB) to ./datasets/glue/qqp/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4...
---------------------------------------------------------------------------
NonMatchingChecksumError                  Traceback (most recent call last)
in
----> 1 datasets.load_dataset('glue','qqp', cache_dir='./datasets')

~/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
    609         download_config=download_config,
    610         download_mode=download_mode,
--> 611         ignore_verifications=ignore_verifications,
    612     )
    613

~/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
    467         if not downloaded_from_gcs:
    468             self._download_and_prepare(
--> 469                 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    470             )
    471         # Sync info

~/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    527         if verify_infos:
    528             verify_checksums(
--> 529                 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
    530             )
    531

~/datasets/src/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
     37     if len(bad_urls) > 0:
     38         error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39         raise NonMatchingChecksumError(error_msg + str(bad_urls))
     40     logger.info("All the checksums matched successfully" + for_verification_name)
     41

NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://dl.fbaipublicfiles.com/glue/data/QQP-clean.zip']
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/638/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/638/timeline
null
completed
null
null
false
[ "Hi ! Sure I'll take a look" ]
https://api.github.com/repos/huggingface/datasets/issues/2158
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2158/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2158/comments
https://api.github.com/repos/huggingface/datasets/issues/2158/events
https://github.com/huggingface/datasets/issues/2158
848,506,746
MDU6SXNzdWU4NDg1MDY3NDY=
2,158
viewer "fake_news_english" error
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
closed
false
null
2
2021-04-01T14:13:20Z
2022-10-05T13:22:02Z
2022-10-05T13:22:02Z
null
When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I've got this error: > ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional dependency for reading xlsx files' for instance' as well as the error Traceback.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2158/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2158/timeline
null
completed
null
null
false
[ "Thanks for reporting !\r\nThe viewer doesn't have all the dependencies of the datasets. We may add openpyxl to be able to show this dataset properly", "This viewer tool is deprecated now and the new viewer at https://huggingface.co/datasets/fake_news_english works fine, so I'm closing this issue" ]
https://api.github.com/repos/huggingface/datasets/issues/5252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5252/comments
https://api.github.com/repos/huggingface/datasets/issues/5252/events
https://github.com/huggingface/datasets/pull/5252
1,451,765,838
PR_kwDODunzps5DCI1U
5,252
Support for decoding Image/Audio types in map when format type is not default one
[]
closed
false
null
6
2022-11-16T15:02:13Z
2022-12-13T17:01:54Z
2022-12-13T16:59:04Z
null
Add support for decoding the `Image`/`Audio` types in `map` for the formats (Numpy, TF, Jax, PyTorch) other than the default one (Python).

Additional improvements:
* make `Dataset`'s "iter" API cleaner by removing `_iter` and replacing `_iter_batches` with `iter(batch_size)` (also implemented for `IterableDataset`)
* iterate over arrow tables in `map` to avoid `_getitem` calls, which are much slower than `__iter__`/`iter(batch_size)`, when the `format_type` is not Python
* fix `_iter_batches` (now named `iter`) when `drop_last_batch=True` and `pyarrow<=8.0.0` is installed
* lazily extract and decode arrow data in the default format

TODO:
* [x] update the `iter` benchmark in the docs (the `BeamBuilder` cannot load the preprocessed datasets from our bucket, so wait for this to be fixed (cc @lhoestq))

Fix https://github.com/huggingface/datasets/issues/3992, fix https://github.com/huggingface/datasets/issues/3756
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5252/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5252/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5252.diff", "html_url": "https://github.com/huggingface/datasets/pull/5252", "merged_at": "2022-12-13T16:59:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/5252.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5252" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5252). All of your documentation changes will be reflected on that endpoint.", "Yes, if the image column is the first in the batch keys, it will decode the images because it reads the actual values. We could avoid this by checking the batch type, and if it's `LazyDict`, `num_examples` is equal to `len(batch.pa_table)`, which doesn't lead to decoding.", "Good idea. This can be done in a subsequent PR btw, since it's out of scope of the original goal of this PR", "Just fixed a small bug where it would show the pyarrow 10 warning about None -> empty lists conversions even with an Array2D with no nulls", "Fixed another bug when your map function returns a mix of LazyDict or regular dict and added some tests" ]
https://api.github.com/repos/huggingface/datasets/issues/1068
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1068/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1068/comments
https://api.github.com/repos/huggingface/datasets/issues/1068/events
https://github.com/huggingface/datasets/pull/1068
756,417,337
MDExOlB1bGxSZXF1ZXN0NTMxOTY1MDk0
1,068
Add Pubmed (citation + abstract) dataset (2020).
[]
closed
false
null
4
2020-12-03T17:54:10Z
2020-12-23T09:52:07Z
2020-12-23T09:52:07Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1068/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1068/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1068.diff", "html_url": "https://github.com/huggingface/datasets/pull/1068", "merged_at": "2020-12-23T09:52:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/1068.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1068" }
true
[ "LGTM! ftp addition looks fine but maybe have a look @thomwolf ?", "It's not finished yet, I need to run the tests on the full dataset (it was running this weekend, there is an error somewhere deep)\r\n", "@yjernite Ready for review !\r\n@thomwolf \r\n\r\nSo I tried to follow closely the original format that means I still had to drop information (namely tags on elements are simply discarded for now but they don't seem to carry critical information).\r\nSome elements are also discarded they tend to not come up often:\r\n - The most notable is Author affiliation, which seems to be all over the place in terms of what it look meaning it's hard to actually get a consistent format.\r\n - Journal is the same, all the elements in there can be wildly different so I decided to drop it for now instead of trying to figure out a way to have a common presentation. (the DOI and medline ID are present so it can be recovered).\r\n\r\nI think this PR could go as it but we probably should add a way to get easier information to use with a config.\r\nFor instance `{\"title\": \"string\", \"abstract\": \"string\", \"authors\": List[str], \"substances\": List[str]}` maybe ? (substances for instance is a tricky one, some substances only have an international identifier, some have simply a common name, some both)\r\n\r\nIt's relatively easy to do I think it's mostly discarding other fields and renaming some deep structure into a flat one.", "Look ok to me but @lhoestq is the master on the Download Manager side" ]
https://api.github.com/repos/huggingface/datasets/issues/831
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/831/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/831/comments
https://api.github.com/repos/huggingface/datasets/issues/831/events
https://github.com/huggingface/datasets/issues/831
740,071,697
MDU6SXNzdWU3NDAwNzE2OTc=
831
[GEM] Add WebNLG dataset
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
0
2020-11-10T16:46:48Z
2020-12-03T13:38:01Z
2020-12-03T13:38:01Z
null
## Adding a Dataset
- **Name:** WebNLG
- **Description:** WebNLG consists of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples (16,095 data inputs and 42,873 data-text pairs). The data is available in English and Russian
- **Paper:** https://www.aclweb.org/anthology/P17-1017.pdf
- **Data:** https://webnlg-challenge.loria.fr/download/
- **Motivation:** Included in the GEM shared task, multilingual

Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/831/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/831/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/4797
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4797/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4797/comments
https://api.github.com/repos/huggingface/datasets/issues/4797/events
https://github.com/huggingface/datasets/pull/4797
1,330,000,998
PR_kwDODunzps48uL-t
4,797
Torgo dataset creation
[]
closed
false
null
1
2022-08-05T14:18:26Z
2022-08-09T18:46:00Z
2022-08-09T18:46:00Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4797/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4797/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4797.diff", "html_url": "https://github.com/huggingface/datasets/pull/4797", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4797.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4797" }
true
[ "Hi @YingLi001, thanks for your proposal to add this dataset.\r\n\r\nHowever, now we add datasets directly to the Hub (instead of our GitHub repository). You have the instructions in our docs: \r\n- [Create a dataset loading script](https://huggingface.co/docs/datasets/dataset_script)\r\n- [Create a dataset card](https://huggingface.co/docs/datasets/dataset_card)\r\n- [Share](https://huggingface.co/docs/datasets/share)\r\n\r\nFeel free to ask if you need any additional support/help." ]
https://api.github.com/repos/huggingface/datasets/issues/22
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/22/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/22/comments
https://api.github.com/repos/huggingface/datasets/issues/22/events
https://github.com/huggingface/datasets/pull/22
608,298,586
MDExOlB1bGxSZXF1ZXN0NDEwMTAyMjU3
22
adding bleu score code
[]
closed
false
null
0
2020-04-28T13:00:50Z
2020-04-28T17:48:20Z
2020-04-28T17:48:08Z
null
this PR adds the BLEU score metric to the lib. It can be tested by running the following code:

```python
from nlp.metrics import bleu

hyp1 = "It is a guide to action which ensures that the military always obeys the commands of the party"
ref1a = "It is a guide to action that ensures that the military forces always being under the commands of the party "
ref1b = "It is the guiding principle which guarantees the military force always being under the command of the Party"
ref1c = "It is the practical guide for the army always to heed the directions of the party"

list_of_references = [[ref1a, ref1b, ref1c]]
hypotheses = [hyp1]

# Use a distinct name so the imported `bleu` module is not shadowed.
score = bleu.bleu_score(list_of_references, hypotheses, 4, smooth=True)
print(score)
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/22/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/22/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/22.diff", "html_url": "https://github.com/huggingface/datasets/pull/22", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/22.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/22" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2891
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2891/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2891/comments
https://api.github.com/repos/huggingface/datasets/issues/2891/events
https://github.com/huggingface/datasets/pull/2891
993,161,984
MDExOlB1bGxSZXF1ZXN0NzMxMzkwNjM2
2,891
Allow dynamic first dimension for ArrayXD
[]
closed
false
null
9
2021-09-10T11:52:52Z
2021-11-23T15:33:13Z
2021-10-29T09:37:17Z
null
Add support for dynamic first dimension for ArrayXD features. See issue [#887](https://github.com/huggingface/datasets/issues/887). The following changes allow the `to_pylist` method of `ArrayExtensionArray` to return a list of numpy arrays whose first dimension can vary.

@lhoestq Could you suggest how you want to extend the test suite? For now I added only very limited testing.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2891/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2891/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2891.diff", "html_url": "https://github.com/huggingface/datasets/pull/2891", "merged_at": "2021-10-29T09:37:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/2891.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2891" }
true
[ "@lhoestq, thanks for your review.\r\n\r\nI added test for `to_pylist`, I didn't do that for `to_numpy` because this method shouldn't be called for dynamic dimension ArrayXD - this method will try to make a single numpy array for the whole column which cannot be done for dynamic arrays.\r\n\r\nI dig into `to_pandas()` functionality and I found it quite difficult to implement. `PandasArrayExtensionArray` takes single np.array as an argument. It might be a bit of changes to make it work with the list of arrays. Do you mind if we exclude this work from this PR. I added an error message for the case if somebody tries to use dynamic arrays with `to_pandas`", "@lhoestq, I just fixed all the tests. Let me know if there is anything else to add.", "@lhoestq, any chance you had some time to check out this PR?\r\n", "Hi ! Sorry for the delay\r\n\r\nIt looks good to me ! I think the only thing missing is the support for passing a list of numpy arrays to `map` when the first dimension is dynamic.\r\n\r\nCurrently it raises an error:\r\n```python\r\nfrom datasets import *\r\nimport numpy as np\r\n\r\nfeatures= Features({\"a\": Array3D(shape=(None, 5, 2), dtype=\"int32\")})\r\nd = Dataset.from_dict({\"a\": [np.zeros((5,5,2)), np.zeros((2,5,2))]}, features=features)\r\nd = d.map(lambda a: {\"a\": np.concatenate([a]*2)}, input_columns=\"a\")\r\nprint(d[0])\r\n```\r\nraises\r\n```python\r\nTraceback (most recent call last):\r\n File \"playground/ttest.py\", line 6, in <module>\r\n d = d.map(lambda x: {\"a\": np.concatenate([x]*2)}, input_columns=\"a\")\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_dataset.py\", line 1932, in map\r\n return self._map_single(\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_dataset.py\", line 426, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/home/truent/hf/datasets/src/datasets/fingerprint.py\", line 406, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_dataset.py\", line 2317, in _map_single\r\n writer.finalize()\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_writer.py\", line 443, in finalize\r\n self.write_examples_on_file()\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_writer.py\", line 312, in write_examples_on_file\r\n pa_array = pa.array(typed_sequence)\r\n File \"pyarrow/array.pxi\", line 222, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_writer.py\", line 108, in __arrow_array__\r\n storage = pa.array(self.data, type.storage_dtype)\r\n File \"pyarrow/array.pxi\", line 305, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values\r\n```\r\n\r\nI think the issue is that here we don't cover the case where self.data is a list of numpy arrays:\r\n\r\nhttps://github.com/huggingface/datasets/blob/55fd140a63b8f03a0e72985647e498f1fc799d3f/src/datasets/arrow_writer.py#L104-L109\r\n\r\nWe should remove the `isinstance(self.data[0], np.ndarray)` part and add these lines to cover this case:\r\n\r\nhttps://github.com/huggingface/datasets/blob/55fd140a63b8f03a0e72985647e498f1fc799d3f/src/datasets/arrow_writer.py#L112-L113", "@lhoestq, thanks, good catch!\r\nAre you 
able to run this check with fixed dimension ArrayXD?\r\nfor below example\r\n```\r\nimport numpy as np\r\nfrom datasets import *\r\n\r\nfeatures = Features({\"a\": Array3D(shape=(2, 5, 2), dtype=\"int32\")})\r\nd = Dataset.from_dict({\"a\": [np.zeros((2, 5, 2)), np.zeros((2, 5, 2))]}, features=features)\r\nd = d.map(lambda a: {\"a\": np.array(a) + 1}, input_columns=\"a\")\r\nprint(d[0])\r\n```\r\n\r\nI am getting:\r\n```\r\n File \"/home/ib/datasets/src/datasets/arrow_writer.py\", line 116, in __arrow_array__\r\n if trying_type and out[0].as_py() != self.data[0]:\r\nValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()\r\n```", "Nevertheless, I tried to fix that. Let me know if that works.", "@lhoestq, just resolved the conflicts. Let me know if there is anything left to do with this PR", "Hi, thanks a lot for your comments.\r\nAgree, happy to contribute to this topic in future PRs", "Hi @rpowalski, thanks for adding this feature! \r\n\r\nI wanted to check if you are still interested in documenting this, otherwise I'd be happy to help with it :)" ]
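The review thread's own end-to-end example, made self-contained with the fix applied: an `Array3D` whose first dimension is dynamic (`None`), passed through `map`:

```python
import numpy as np
from datasets import Array3D, Dataset, Features

features = Features({"a": Array3D(shape=(None, 5, 2), dtype="int32")})
d = Dataset.from_dict(
    {"a": [np.zeros((5, 5, 2)), np.zeros((2, 5, 2))]}, features=features
)
# Doubling along the first axis works even though rows differ in length.
d = d.map(lambda a: {"a": np.concatenate([a] * 2)}, input_columns="a")
print(np.asarray(d[0]["a"]).shape)  # (10, 5, 2)
```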
https://api.github.com/repos/huggingface/datasets/issues/4611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4611/comments
https://api.github.com/repos/huggingface/datasets/issues/4611/events
https://github.com/huggingface/datasets/pull/4611
1,290,940,874
PR_kwDODunzps46rxIX
4,611
Preserve member order by MockDownloadManager.iter_archive
[]
closed
false
null
1
2022-07-01T05:48:20Z
2022-07-01T16:59:11Z
2022-07-01T16:48:28Z
null
Currently, `MockDownloadManager.iter_archive` yields paths to archive members in an order given by `path.rglob("*")`, which might not be the same order as in the original archive. See the issue in:
- https://github.com/huggingface/datasets/pull/4579#issuecomment-1172135027

This PR fixes the order of the members yielded by `MockDownloadManager.iter_archive` so that it is the same as in the original archive.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4611/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4611/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4611.diff", "html_url": "https://github.com/huggingface/datasets/pull/4611", "merged_at": "2022-07-01T16:48:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/4611.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4611" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/3500
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3500/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3500/comments
https://api.github.com/repos/huggingface/datasets/issues/3500/events
https://github.com/huggingface/datasets/pull/3500
1,090,406,133
PR_kwDODunzps4wXLTB
3,500
Docs: Add VCTK dataset description
[]
closed
false
null
0
2021-12-29T10:02:05Z
2022-01-04T10:46:02Z
2022-01-04T10:25:09Z
null
This PR is a very minor followup to #1837, with only docs changes (single comment string).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3500/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3500/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3500.diff", "html_url": "https://github.com/huggingface/datasets/pull/3500", "merged_at": "2022-01-04T10:25:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/3500.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3500" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/516
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/516/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/516/comments
https://api.github.com/repos/huggingface/datasets/issues/516/events
https://github.com/huggingface/datasets/pull/516
681,846,032
MDExOlB1bGxSZXF1ZXN0NDcwMTY5NTA0
516
[Breaking] Rename formated to formatted
[]
closed
false
null
0
2020-08-19T13:35:23Z
2020-08-20T08:41:17Z
2020-08-20T08:41:16Z
null
`formated` is not correct but `formatted` is
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/516/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/516/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/516.diff", "html_url": "https://github.com/huggingface/datasets/pull/516", "merged_at": "2020-08-20T08:41:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/516.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/516" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/148
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/148/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/148/comments
https://api.github.com/repos/huggingface/datasets/issues/148/events
https://github.com/huggingface/datasets/issues/148
619,590,555
MDU6SXNzdWU2MTk1OTA1NTU=
148
_download_and_prepare() got an unexpected keyword argument 'verify_infos'
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
2
2020-05-17T01:48:53Z
2020-05-18T07:38:33Z
2020-05-18T07:38:33Z
null
# Reproduce
In Colab,
```
%pip install -q nlp
%pip install -q apache_beam mwparserfromhell

dataset = nlp.load_dataset('wikipedia')
```
get
```
Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wikipedia/20200501.aa/1.0.0...
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-6-52471d2a0088> in <module>()
----> 1 dataset = nlp.load_dataset('wikipedia')

1 frames
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
    515         download_mode=download_mode,
    516         ignore_verifications=ignore_verifications,
--> 517         save_infos=save_infos,
    518     )
    519

/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
    361         verify_infos = not save_infos and not ignore_verifications
    362         self._download_and_prepare(
--> 363             dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    364         )
    365         # Sync info

TypeError: _download_and_prepare() got an unexpected keyword argument 'verify_infos'
```
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/148/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/148/timeline
null
completed
null
null
false
[ "Same error for dataset 'wiki40b'", "Should be fixed on master :)" ]
https://api.github.com/repos/huggingface/datasets/issues/4881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4881/comments
https://api.github.com/repos/huggingface/datasets/issues/4881/events
https://github.com/huggingface/datasets/issues/4881
1,348,495,777
I_kwDODunzps5QYGmh
4,881
Language names and language codes: connecting to a big database (rather than slow enrichment of custom list)
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
48
2022-08-23T20:14:24Z
2023-01-03T08:32:35Z
null
null
**The problem:** Language diversity is an important dimension of the diversity of datasets. To find one's way around datasets, being able to search by language name and by standardized codes appears crucial. Currently the list of language codes is [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/resources/languages.json), right? At about 1,500 entries, it is roughly at 1/4th of the world's diversity of extant languages. (Probably less, as the list of 1,418 contains variants that are linguistically very close: 108 varieties of English, for instance.)

Looking forward to ever increasing coverage, how will the list of language names and language codes improve over time? Enrichment of the custom list by HFT contributors (like [here](https://github.com/huggingface/datasets/pull/4880)) has several issues:
* progress is likely to be slow: ![image](https://user-images.githubusercontent.com/6072524/186253353-62f42168-3d31-4105-be1c-5eb1f818d528.png) (input required from reviewers, etc.)
* the more contributors, the less consistency can be expected among contributions. No need to elaborate on how much confusion is likely to ensue as datasets accumulate.
* there is no information on which language relates with which: no encoding of the special closeness between the languages of the Northwestern Germanic branch (English+Dutch+German etc.), for instance. Information on phylogenetic closeness can be relevant to run experiments on transfer of technology from one language to its close relatives.

**A solution that seems desirable:** Connecting to an established database that (i) aims at full coverage of the world's languages and (ii) has information on higher-level groupings, alternative names, etc. It takes a lot of hard work to build such databases. Two important initiatives are [Ethnologue](https://www.ethnologue.com/) (ISO standard) and [Glottolog](https://glottolog.org/). Both have pros and cons. Glottolog contains references to Ethnologue identifiers, so adopting Glottolog entails getting the advantages of both sets of language codes. Both seem technically accessible & 'developer-friendly'. Glottolog has a [GitHub repo](https://github.com/glottolog/glottolog). For Ethnologue, harvesting tools have been devised (see [here](https://github.com/lyy1994/ethnologue); I did not try it out).

In case a conversation with linguists seemed in order here, I'd be happy to participate ('pro bono', of course), & to rustle up more colleagues as useful, to help this useful development happen. With appreciation of HFT,
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4881/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4881/timeline
null
null
null
null
false
[ "Thanks for opening this discussion, @alexis-michaud.\r\n\r\nAs the language validation procedure is shared with other Hugging Face projects, I'm tagging them as well.\r\n\r\nCC: @huggingface/moon-landing ", "on the Hub side, there is not fine grained validation we just check that `language:` contains an array of lowercase strings between 2 and 3 characters long =)\r\n\r\nand for `language_bcp47:` we just check it's an array of strings.\r\n\r\nThe only page where we have a hardcoded list of languages is https://huggingface.co/languages and I've been thinking of hooking that page on an external database of languages (so any suggestion is super interesting), but it's not used for validation.\r\n\r\nThat being said, in `datasets` this file https://github.com/huggingface/datasets/blob/main/src/datasets/utils/resources/languages.json is not really used no? Or just in the tagging tool? What about just removing it?\r\n\r\nalso cc'ing @lbourdois who's been active and helpful on those subjects in the past!", "PS @alexis-michaud is there a DB of language codes you would recommend? That would contain all `ISO 639-1, 639-2 or 639-3 codes` and be kept up to date, and ideally that would be accessible as a Node.js npm package?\r\n\r\ncc @albertvillanova too", "> PS @alexis-michaud is there a DB of language codes you would recommend? That would contain all `ISO 639-1, 639-2 or 639-3 codes` and be kept up to date, and ideally that would be accessible as a Node.js npm package?\r\n> \r\n> cc @albertvillanova too\r\n\r\nMany thanks for your answer! \r\n\r\nThe Glottolog database is kept up to date, and has information on the closest ISO code for each Glottocode. So providing a clean table with equivalences sounds (to me) like something perfectly reasonable to expect from their team. \r\nTo what extent would [pyglottolog](https://github.com/glottolog/pyglottolog) fit the bill / do the job? (API documentation [here](https://pyglottolog.readthedocs.io/en/latest/index.html)) I'm reaching my technical limitations here: I can't assess the distance between what they offer and what the HF team needs. \r\nI have opened an Issue in [their repo](https://github.com/glottolog/glottolog-cldf/issues/13). \r\n\r\nVery interested to see where it goes from there.", "I just tried pyglottolog to generate a file with all the current IDs (first column).\r\n\r\n`glottolog languoids` inside the `glottolog` repository.\r\n\r\n[glottolog-languoids-v4.6-10-g5c66eec874.csv](https://github.com/huggingface/datasets/files/9417456/glottolog-languoids-v4.6-10-g5c66eec874.csv)\r\n\r\n", "Greetings @alexis-michaud and others,\r\nI think perhaps a standards-based approach here would help everyone out both at the technical and social layers of technical innovations. \r\n\r\nLet me say a few things: \r\n1. there are multiple kinds of assets in AI that should have associated language codes. \r\n * AI Training Data sets\r\n * AI models\r\n * AI outputs\r\nThese are all distinct components which should be tagged for the language and encoding methods they operate on or enhance. For example, an AI based cross-language tool from French to English (UK variety) still needs to consider if it is operating on oral language speech or written text. This is where [IANA language sub-tags](https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry) come in and are so important. I link to the official source. 
If one wants to use middleware such as a python package or npm package to manage strings then please make sure those packages are updating codes as they are being revised. I see that @julien-c mentioned BCP-47. BCP-47 is the current standard for language tagging. Following it will make the resources you create more findable and let future users better understand or expect any biases which may have been introduced in the different AI based products.\r\n2. BCP-47 is a technical read. However, you will notice that it identifies when to use an ISO 639-1, ISO 639-2, or ISO 639-3. code. This is important for interoperability with many systems. If you are using library systems then you should likely just stick with ISO 639-3 codes.\r\n3. If you are going to use Glottolog codes use them after an `-x-` tag in the BCP-47 format to maintain BCP-47 validity. \r\n4. You should source ISO 639-3 codes directly from the [ISO 639-3 registrar](https://iso639-3.sil.org/code_tables/639/data) as these codes are updated annually, usually in February or March. ISO 639-3 codes have multiple classes: `Active`, `Deprecated`, and `Unassigned`. This means that string length checking is not a sufficient strategy for validation.\r\n5. The names of smaller languages often change depending on the language used to describe them. The [ISO639-2 documentation](https://www.loc.gov/standards/iso639-2/php/code_list.php) has a list of language names for languages with smaller populations for languages in which descriptions about these languages are often written. For example, ISO 639-2's documentation contains the names of languages as they are used in French, German, and English. ISO 639-2 rarely is updated as it is now tied to ISO 639-3's evolution and modern systems should just use ISO 639-3, but these additional names of languages in other languages may not appear in the ISO 369-3 tables.\r\n6. Glottolog codes are also updated at least annually. Usually sometime after ISO 639-3 updates.\r\n7. Please, if the material is in a written mode, please indicate which script is used unless the IANA field has a `suppress script` value. Please use the script tag that BCP-47 calls for from [ISO 15924](https://unicode.org/iso15924/iso15924-codes.html). This also updates at least annually. \r\n8. Another great place to look for language names is the [Unicode CLDR database for locales](https://cldr.unicode.org/translation/displaynames/languagelocale-names). These ought to be congruent with ISO 639-3 but, sometimes CLDR has additional references to languages (such as the french name for a language) which is not contained in ISO 639-2 or ISO 639-3.\r\n9. Wikidata for language names is not always a great source of authoritative information. Language names are asymmetrical. Many times they are contrived because there is no actual name for the language in the language referring... e.g. French doesn't have a name for every language in the world, often they say something like: the language of 'x' people. — English does the same. When a language name standard does not have the best name for a language the best way to handle that is to make a change request with the standards registrar. Keeping track of the source list and the version of your source list for your language codes is very important. \r\n10. Finally, It would be a great service to technologist, minority language communities, and linguists if for all resources of the three types mentioned in number 1 above you added a record to [OLAC](http://www.language-archives.org/). 
— I can help you with that. OLAC is a search interface for language resources.\r\n", "Hi everybody!\r\n\r\nAbout the point:\r\n> also cc'ing @lbourdois who's been active and helpful on those subjects in the past!\r\n\r\nDiscussions on the need to improve the Hub's tagging system (applying to both datasets and models) can be found in the following discussion: https://github.com/huggingface/hub-docs/issues/193\r\nOnce this system has been redone and satisfies the identified needs, a redesign of the [Languages page](https://huggingface.co/languages) would also be relevant: https://github.com/huggingface/hub-docs/issues/194. \r\nI invite you to read them. But as a quick summary, the exchanges were oriented towards the ISO standard (the first HF system was based on it and it is generally the standard indicated in AI/DL papers) by favouring ISO 639-1 if it exists, and fallback to ISO 639-2 or ISO 639-3 if it doesn't. In addition, it is possible to add BCP-47 tags to consider existing varieties/regionalisms within a language (https://huggingface.co/datasets/AmazonScience/massive/discussions/1). If a language does not belong to either of these two standards, then a request should be made to the HF team to add it manually.\r\n\r\n\r\nTo return to the present discussion, thank you for the various databases and methodologies you mention. It makes a big difference to have linguists in the loop 🚀.\r\n\r\nI have a couple of questions where I think an expert perspective would be appreciated:\r\n- Do you think it's possible to easily handle tags that have been deprecated potentially for decades?\r\nFor example (I'm taking the case of Hebrew but this has happened for other languages) I tagged Google models with the \"iw\" [tag](https://huggingface.co/models?language=iw&sort=downloads) because I based it on what the authors gave in their [paper](https://arxiv.org/pdf/2010.11934.pdf) see table 6 page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\n\r\n- When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https://www.endangeredlanguages.com/?hl=en) and [Linguasphere](http://www.linguasphere.info/jr/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n\r\n- On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone \r\nIs there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\nBased on the first post in this thread that there are about 8000 languages, if one considers that a given language can be pronounced by a speaker of the other 7999, that would theoretically make about 64 million BCP-47 language1-language2 codes existing. And even much more if we consider regionalists with language1_regionalism_x-language2_regionalism_y. 
I guess there is no such database.\r\n\r\n- Are there any databases that take into account all the existing sign languages in the world?\r\nIt would be nice to have them included on the Hub.\r\n\r\n- Is there an international classification of languages?\r\nA bit like the [International Classification of Diseases](https://en.wikipedia.org/wiki/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later. \r\n\r\n- Finally for the CNRS team, when can we expect to see all the datasets of [Pangloss](https://pangloss.cnrs.fr/) on HF? 👀 And I don't know if you have a way to help to add also the datasets of [CoCoON](https://cocoon.huma-num.fr/exist/crdo/).", "> I invite you to read them. But as a quick summary, the exchanges were oriented towards the ISO standard (the first HF system was based on it and it is generally the standard indicated in AI/DL papers), favouring ISO 639-1 if it exists and falling back to ISO 639-2 or ISO 639-3 if it doesn't. In addition, it is possible to add BCP-47 tags to consider existing varieties/regionalisms within a language (https://huggingface.co/datasets/AmazonScience/massive/discussions/1). If a language does not belong to either of these two standards, then a request should be made to the HF team to add it manually.\r\n\r\nOne comment on this fallback system (which generally follows the BCP-47 process). ISO 639-2 has some codes which refer to a language ambiguously. For example, I believe code `ara` is used for Arabic. In some contexts Arabic is considered a single language; however, Egyptian Arabic is quite different from Moroccan Arabic, and both are considered separate languages. These ambiguous codes are valid ISO 639-3 codes, but they have a special status: they are called `macro codes`. They exist inside the ISO 639-3 standard to provide absolute fallback compatibility between ISO 639-2 and ISO 639-3. However, when considering AI and MT applications with language data, given the unforeseen potential applications and the potential for bias, macro codes should be avoided when newly applying language tags to resources. For historical cases, where it is not clear what resources were used to create the AI tools or datasets, I understand the use of ambiguous tags. So for clarity in language tagging I suggest:\r\n\r\n1. Strictly following BCP-47\r\n2. Whenever possible avoiding the use of macro tags in the ISO 639-3 standard. These are BCP-47 valid, but could introduce biases in the application of their use in society. (Generally there are more specific tags available to use in the ISO 639-3 standard.)", "> * Are there any databases that take into account all the existing sign languages in the world?\r\n> It would be nice to have them included on the Hub.\r\n\r\nSign languages present an interesting case. As I understand the situation, the identification of sign languages has been recognized as a factor in their endangerment. Some sign languages do exist in ISO 639-3. 
For further discussion on the issue I refer readers to the following publications: \r\n\r\n* https://doi.org/10.3390/languages7010049\r\n* https://www.academia.edu/35870983/The_ethics_of_of_language_identification_and_ISO_639\r\n\r\nOne way to be BCP-47 compliant and identify a sign language which is not identified in any of the BCP-47 referenced standards is to use the ISO 639-3 code for undetermined language, `und`, and then apply a custom suffix indicator (as explained in BCP-47), `-x-`, and a custom code, such as the ones used in https://doi.org/10.3390/languages7010049", "> * Is there an international classification of languages?\r\n> A bit like the [International Classification of Diseases](https://en.wikipedia.org/wiki/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later.\r\n\r\nYes, that would be the function of ISO 639-3. It is the reference standard for languages. It includes a code, its name, and the status of the code. Many technical metadata standards for file and computer interoperability reference it; many technical library metadata standards reference it. Some linguists use it. Many governments reference it. \r\n\r\nIndexing diseases is different from indexing languages in several ways; one is that diseases are the impact of a pathogen, not the pathogen itself. If we take COVID-19 as an example, there are many varieties of the pathogen but broadly speaking there is only one disease — with many symptoms.\r\n\r\n", ">* When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https://www.endangeredlanguages.com/?hl=en) and [Linguasphere](http://www.linguasphere.info/jr/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n\r\nWhile these do appear on Wikipedia, I don't know of any information system which uses these codes. I do know that Glottolog did import ELP data at one time and its database does contain ELP data, but I'm not sure whether Glottolog regularly ingests new versions of ELP data. I suspect that the use of Linguasphere data may be relevant to users of Wikidata as a linked data attribute, but I haven't heard of any linked data projects using Linguasphere data for analysis or product development. My impression is that it is fairly unused.", "> * Do you think it's possible to easily handle tags that have been deprecated, potentially for decades?\r\n>For example (I'm taking the case of Hebrew, but this has happened for other languages), I [tag](https://huggingface.co/models?language=iw&sort=downloads)ged Google models with the \"iw\" tag because I based it on what the authors gave in their [paper](https://arxiv.org/pdf/2010.11934.pdf) (see table 6, page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\n\r\nYes. You can parse the IANA file linked to above (it is regularly updated). All deprecated tags are marked as such in that file. The new preferred tag, if there is one, is indicated. 
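A minimal sketch of such a check, in Python (illustrative only; the field names `Subtag`, `Deprecated` and `Preferred-Value` are those published in the registry, while the parsing is deliberately naive):

```python
# Illustrative sketch: map deprecated IANA language subtags to their
# Preferred-Value, e.g. "iw" -> "he". Records in the registry are separated
# by "%%" lines; continuation lines of long descriptions are simply skipped.
import urllib.request

REGISTRY_URL = ("https://www.iana.org/assignments/"
                "language-subtag-registry/language-subtag-registry")

def deprecated_to_preferred():
    text = urllib.request.urlopen(REGISTRY_URL).read().decode("utf-8")
    mapping = {}
    for record in text.split("%%"):
        fields = {}
        for line in record.strip().splitlines():
            key, sep, value = line.partition(": ")
            if sep:
                fields[key] = value
        if "Deprecated" in fields:
            tag = fields.get("Subtag") or fields.get("Tag")
            mapping[tag] = fields.get("Preferred-Value")  # may be None
    return mapping

# deprecated_to_preferred().get("iw") should yield "he"
```

A verification step could then rewrite any tag found in this mapping before indexing.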
ISO 639-3 also indicates a code's status, but its list is relevant only to codes within its own domain (ISO 639-3).", "> * On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone\r\nIs there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\n\r\nI would interpret `en-fr` as English as spoken in France. `fr` in this position refers to the geo-political entity, not a second language. I see no reason that other linguists should have a different opinion after having read BCP-47 and understood how it works.\r\n\r\nThe functional goal here is to tag a language resource as being produced by nonnative speakers, while tagging both languages. There are several problems here. The first is that BCP-47 has no explicit way to do this. One could use the subtag `x-` with a private use code to indicate a second language and infer some meaning as to that language's role. However, there is another problem here which complexifies the situation greatly... how do we know that those English speakers (in France, or from France, or who were native French speakers) were not speaking their third or fourth language rather than their second language? So a sub-tag which indicates the first language of a speech act for speakers in a second (or other) language would need to be carefully crafted. It might then be proposed to the appropriate authorities. For comparison, three such sub-tags already exist.\r\n\r\nThere are three registered sub-tags out of the 35 that BCP-47 allows. These are `x-`, `u-`, and `t-`. `u-` and `t-` are defined in [RFC6067](https://www.rfc-editor.org/rfc/rfc6067) and [RFC6497](https://www.rfc-editor.org/rfc/rfc6497). For more information see the [Unicode CLDR documentation](https://cldr.unicode.org/index/bcp47-extension) where it says: \r\n\r\n\r\n>[IETF BCP 47 ](http://www.google.com/url?q=http%3A%2F%2Ftools.ietf.org%2Fhtml%2Fbcp47&sa=D&sntz=1&usg=AOvVaw1DoMN1IBGg-JHgECBvdW1t)[Tags for Identifying Languages](http://www.google.com/url?q=http%3A%2F%2Ftools.ietf.org%2Fhtml%2Fbcp47&sa=D&sntz=1&usg=AOvVaw1DoMN1IBGg-JHgECBvdW1t) defines the language identifiers (tags) used on the Internet and in many standards. It has an extension mechanism that allows additional information to be included. The Unicode Consortium is the maintainer of the extension ‘u’ for Locale Extensions, as described in [rfc6067](https://www.google.com/url?q=https%3A%2F%2Ftools.ietf.org%2Fhtml%2Frfc6067&sa=D&sntz=1&usg=AOvVaw0gGWi0EjHfy1WId8k8oKAi), and the extension 't' for Transformed Content, as described in [rfc6497](https://www.google.com/url?q=https%3A%2F%2Ftools.ietf.org%2Fhtml%2Frfc6497&sa=D&sntz=1&usg=AOvVaw0w-OUsFX1PtaKYIq31P64I).\r\n>\r\n>The subtags available for use in the 'u' extension provide language tag extensions that provide for additional information needed for identifying locales. The 'u' subtags consist of a set of keys and associated values (types). For example, a locale identifier for British English with numeric collation has the following form: en-GB-u-kn-true\r\n>\r\n>The subtags available for use in the 't' extension provide language tag extensions that provide for additional information needed for identifying transformed content, or a request to transform content in a certain way. 
For example, the language tag \"ja-Kana-t-it\" can be used as a content tag indicates Japanese Katakana transformed from Italian. It can also be used as a request for a given transformation.\r\n>\r\n>For more details on the valid subtags for these extensions, their syntax, and their meanings, see LDML Section 3.7 [Unicode BCP 47 Extension Data](http://www.google.com/url?q=http%3A%2F%2Fwww.unicode.org%2Freports%2Ftr35%2F%23Locale_Extension_Key_and_Type_Data&sa=D&sntz=1&usg=AOvVaw0lMthb9KbTJtoOd5mvv3Ha).", "Hi @lbourdois ! Many thanks for the detailed information.\r\n\r\n> Discussions on the need to improve the Hub's tagging system (applying to both datasets and models) can be found in the following discussion: [huggingface/hub-docs#193](https://github.com/huggingface/hub-docs/issues/193) \r\nFascinating topic! To me, the following suggestion has a lot of appeal:\r\n\"if consider that it was necessary to create an ISO 639-3 because ISO 639-1 was deficient, it would be to do the reverse and thus convert the tags from ISO 639-1 to ISO 639-2 or 3 (https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes or https://iso639-3.sil.org/code_tables/639/data).\"\r\n\r\nYes, ISO 639-1 is unsuitable because it has so few codes: less than 200. To address linguistic diversity in 'unrestricted mode', a list of all languages is wanted. \r\n\r\nThe idea of letting people use their favourite nomenclature and automatically adding the ISO 639-3 three-letter code as a tag is appealing. Thus all the HF datasets would have three-letter language tags (handy for basic search), alongside the authors' preferred tags and language names (including Glottolog tags as well as ISO 639-{1, 2}, and all other options allowed by BCP-47). \r\n\r\nRetaining the authors' original tags and language names would be best. \r\n* For language names: some people favour one name over another and it is important to respect their choice. In the case of Yongning Na: alternative names include 'Mosuo', 'Narua', 'Eastern Naxi'... and the names carry implications: people have been reported to come to blows about the use of the term 'Mosuo'. \r\n* For language tags: Glottocodes can be more fine-grained than Ethnologue (ISO 639-3), and some colleagues feel strongly about those. \r\n\r\nThus there would be a BCP-47 tag (sounds like a solid technical choice, though not 'passer-by-friendly': requiring some expertise to interpret) **plus** an ISO 639-3 tag that could be grabbed easily, and (last but not least) language names spelled out in full. Searches would be easier. No information would be lost. \r\n\r\nAre industry practices so conservative that many people are happy with two-letter codes, and consider ISO 639-3 three-letter codes an unnecessary complication? That would be a pity, since there are so many advantages to using longer lists. (Somewhat like the transition to Unicode: sooo much better!) But maybe that conservative attitude _is_ widespread, and it would then need to be taken into account. In which case, one could consider offering two-letter codes as a search option. Internally, the search engine would look up the corresponding 3-letter codes, and produce the search results accordingly. 
\r\n\r\nNow to the other questions:\r\n\r\n> * Do you think it's possible to easily handle tags that have been deprecated, potentially for decades?\r\n> For example (I'm taking the case of Hebrew, but this has happened for other languages), I tagged Google models with the \"iw\" [tag](https://huggingface.co/models?language=iw&sort=downloads) because I based it on what the authors gave in their [paper](https://arxiv.org/pdf/2010.11934.pdf) (see table 6, page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\n\r\nI guess that the above suggestion takes care of this case. The original tag (in this example, \"iw\") is retained (facilitating cross-reference with the published paper, and respecting the historical record: the way the dataset was originally tagged). This old tag goes into the `BCP-47` field of the dataset, which can handle quirks & oddities like this one. And a new tag is added in the `ISO 639-3` field: the 3-letter code \"heb\". \r\n\r\n> * When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https://www.endangeredlanguages.com/?hl=en) and [Linguasphere](http://www.linguasphere.info/jr/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n\r\nI'm afraid I had never heard about Linguasphere. The [online register for Linguasphere (PDF)](http://www.linguasphere.info/jr/pdf/index/LS_index_n-n.pdf) seems to be from 1999-2000. It seems that the level of interoperability is not very high right now. (By contrast, Glottolog has [pyglottolog](https://github.com/glottolog/pyglottolog) and in my experience contacts flow well.) \r\n\r\nThe Endangered Languages Project is something Google started but initially did not 'push' very strongly, it seems. Just airing an opinion on the public Internet, it seems that the project is now solidly rooted at the University of Hawaiʻi at Mānoa. It seems that they do not generate codes of their own. They refer to ISO 639-3 (Ethnologue) as a code authority when applicable, and otherwise provide comments in so many words, such as that language L currently lacks an Ethnologue code of its own (example [here](https://www.endangeredlanguages.com/lang/10624)). \r\n\r\n> * On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone\r\n> Is there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\", but would this be something accepted by linguists?\r\n> Given, from the first post in this thread, that there are about 8,000 languages, if one considers that a given language can be spoken by a speaker of the other 7,999, that would theoretically make about 64 million BCP-47 language1-language2 codes. And even many more if we consider regionalisms, with language1_regionalism_x-language2_regionalism_y. I guess there is no such database.\r\n\r\nYes, you noted the difficulty here: that there are so many possible situations. Eventually, each dataset would require descriptors of its own. @BenjaminGalliot points out that, in addition to specifying the speakers' native languages, the degree of language proficiency would also be relevant. 
How many years did the speakers spend in which area? Talking which languages? In what chronological order? Etc. The complexity defies encoding. The purpose of language codes is to allow for searches that group resources into sets that make sense. Additional information is very important, but would seem to be a matter for 'comments' fields. \r\n\r\n> * Is there an international classification of languages?\r\n> A bit like the [International Classification of Diseases](https://en.wikipedia.org/wiki/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later.\r\n\r\nAs I understand, Ethnologue and Glottolog both try to do that, each in its own way. The simile with diseases seems interesting, to some extent: in both cases it's about human classification of phenomena that have complexity (though some diseases are simpler than others, whereas all languages have much complexity, in different ways).\r\n\r\n> * Finally, when can we expect to see all the datasets of [Pangloss](https://pangloss.cnrs.fr/) on HF? 👀 And I don't know if you have a way to help to add also the datasets of [CoCoON](https://cocoon.huma-num.fr/exist/crdo/).\r\n\r\nThree concerns: (i) Technical specifications: we have not yet received feedback on the Japhug and Na datasets in HF. There may be technical considerations that we have not yet thought of and that would need to be taken into account before 'bulk upload'. (ii) Would there be a way to automate the process? The way @BenjaminGalliot did it for Japhug and Na, there was a manual component involved, and doing it by hand for all 200 datasets would not be an ideal workflow, given that the metadata are all clearly arranged. (iii) Some datasets are currently under a 'No derivatives' Creative Commons license. We could go back to the depositors and argue that the 'No derivatives' mention were best omitted (see [here a similar argument about publications](https://creativecommons.org/2020/04/21/academic-publications-under-no-derivatives-licenses-is-misguided/)): again, we'd want to be sure about the way forward before we set the process into motion.\r\n\r\nOur hope would be that some colleagues try out the [OutilsPangloss](https://gitlab.com/lacito/outilspangloss) download tool, assemble datasets from Pangloss/Cocoon as they wish, then deposit them to HF.", "> The idea of letting people use their favourite nomenclature and automatically adding the ISO 639-3 three-letter code as a tag is appealing. Thus all the HF datasets would have three-letter language tags (handy for basic search), alongside the authors' preferred tags and language names (including Glottolog tags as well as ISO 639-{1, 2}, and all other options allowed by BCP-47).\r\n> \r\n> Retaining the authors' original tags and language names would be best.\r\n> \r\n> * For language names: some people favour one name over another and it is important to respect their choice. In the case of Yongning Na: alternative names include 'Mosuo', 'Narua', 'Eastern Naxi'... 
and the names carry implications: people have been reported to come to blows about the use of the term 'Mosuo'.\r\n> * For language tags: Glottocodes can be more fine-grained than Ethnologue (ISO 639-3), and some colleagues feel strongly about those.\r\n> \r\n> Thus there would be a BCP-47 tag (sounds like a solid technical choice, though not 'passer-by-friendly': requiring some expertise to interpret) **plus** an ISO 639-3 tag that could be grabbed easily, and (last but not least) language names spelled out in full. Searches would be easier. No information would be lost.\r\n\r\n@alexis-michaud raises an excellent point. Language resource users have varying search habits (or approaches). This includes cases where two or more language names refer to a single language. A search utility/interface needs to be flexible and able to present results from various kinds of input in the search process. This could be like how the terms French/Français/Französisch (en/fr/de) are names for the same language, or it could involve a variety of the following: autoglottonyms (how the speakers of the language refer to their language) or exoglottonyms (how others refer to the language). Additionally, in web-based searches I have also needed to implement diacritic-sensitive and -insensitive logic so that users can type with or without diacritics and not have results unnecessarily excluded. \r\n\r\nDepending on how detailed a search problem HF seeks to solve, it may be better to offload complex search to search engines like OLAC, which aggregate a lot of language resources. As I mentioned above, I can assist with the informatics on creating an OLAC feed.\r\n\r\nAbstracting search logic from actual metadata may prove a useful way to lower the technical debt overhead. Technical tools and library standards use the ISO and BCP-47 standards. So, from a bibliographic metadata perspective this seems to be the way forward with the widest set of use cases. ", "To get a visual idea of these first exchanges, I coded a Streamlit app that I put online on Spaces: https://huggingface.co/spaces/lbourdois/Language-tags-demo. \r\nThe code is in Python, so I don't know if it can be used by HF, who seem to need something in Node.js, but it serves as a proof of concept. The advantage is also that you can directly test ideas by entering things in a search bar and seeing what comes up. \r\n\r\nThis application is divided into 3 points:\r\n- The first is to enter a language in natural language to get its code, which can then be filled in the YAML file of the README.MD files of the HF datasets or models in order to be referenced and found by everyone.\r\nIn practice, enter the language (e.g. `English`) you are interested in to get its associated tag (e.g. `en`). You can enter several languages by separating them with a comma (e.g. `French,English,German`). Priority is given to the ISO 639-3 code if it exists, otherwise to the Glottocode or the BCP-47 code (for varieties in particular). If none of these codes are available, it links to a page where the user can contact HF to request to add this tag. \r\nIf you enter a BCP-47 code, it must be entered as follows: `Language(Territory)`, for example `French(Canada)`. Attention! If you enter a BCP-47 language, it must be entered first, otherwise the code crashes. 
I have to fix this problem, but I am moving to a new place and don't have an internet connection whenever I want, so I prefer to push this first version so that you can already test things now and not have to wait days or weeks.\r\nThis point is intended to simulate the user's side of the equation: someone wondering which tag they should fill in for their language.\r\n\r\n\r\n- The second is to enter a language code to obtain the name of the language in natural language.\r\nIn practice, enter the tag (ISO 639-1/2/3, Glottolog or BCP-47) you are interested in (e.g. `fra`) to get its associated language (e.g. French). You can enter several languages by separating them with a comma (e.g. `fra,eng,deu`). Attention! If you enter a BCP-47 code, it must be entered first, otherwise the code crashes. Same as the other bug above (it's actually the same one).\r\nThis point is intended to simulate HF's side of the equation, which for a given tag must return the correct language.\r\n\r\n\r\n\r\nTo code these two points, I tested two approaches. \r\n\r\n1. The first one (internal DB in the app) consists in querying a database that HF would host locally. To create this database, I merged the ISO 639 database (https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab) and the Glottolog database (https://glottolog.org/meta/downloads). The result of this merge is visible in the 3rd point of the application, which is an overview of the database.\r\nIn the image below, on line 1 of the database, we can see that the Glottolog database gives an ISO 639-3 code (column ISO639P3code) but the ISO 639 database does not (column 639-3). Do you have an explanation for this phenomenon?\r\n![image](https://user-images.githubusercontent.com/58078086/188433217-bf7cb606-7af4-40b5-861f-ed662468f6e4.png)\r\n\r\n\r\nFor BCP-47 codes of the type `fr-CA`, I have retrieved the ISO 3166-1 codes of the territories (https://www.iso.org/iso-3166-country-codes.html).\r\nIn practice, if we enter `fr-CA`, the letters before the `-` refer to a language in the `Name` column (matching `639-1` == `fr`, or `639-3` for `fra` or `fre`) in the database of my image above. Then I look at the letters after the `-`, which refer to a territory. It comes out as `French (Canada)`. I used https://cldr.unicode.org/translation/displaynames/languagelocale-name-patterns for the pattern.\r\n\r\n\r\n2. The second approach (with the langcodes lib in the app) consists in using the Python `langcodes` library (https://github.com/rspeer/langcodes), which offers a lot of features in ready-made functions. It handles, for example, deprecated codes and the validity of an entered code, and gives language names from a code in the language of your choice (by default in English, but also autoglottonyms), etc. I invite you to read the README of the library. The only negative point is that it hasn't been updated for 10 months, so basing your tag system on an external tool that isn't necessarily up to date can cause problems in the long run. But it is certainly an interesting source.\r\n\r\nFinally, I have added some information on the number of people speaking/reading the language(s) searched (figures provided by langcodes, which are based on those given by ISO). This is not directly relevant to our topic, but these figures could be added as information on the https://huggingface.co/languages page. \r\n\r\n\r\n\r\nWhat could be done to improve the app if I have time:\r\n- Write the text for the app's homepage to describe what it does.
This could serve as a basis for documentation that I think will need to be added somewhere on the HF website to explain how the language tagging system works.\r\n- Deal with the bug mentioned above\r\n- Integrate ISO 3166-2 subdivisions (https://www.iso.org/obp/ui#iso:pub:PUB500001:en)? They offer a finer granularity than ISO 3166-1, which is limited to the country level, but they are very administrative (for France, ISO 3166-2 gives us the \"départements\", for example).\r\n- Add autoglottonyms? (I only handle English language names for the moment)\r\n- For each language, indicate which family it belongs to; in practice this could help with data augmentation, but especially with classifying the languages and finding them more easily on the page https://huggingface.co/languages.", "Very impressive! Using the prompt 'Japhug' (a language name), the app finds the intended language:\r\n![image](https://user-images.githubusercontent.com/6072524/188441805-3af3a580-951e-4150-b5f9-64e1bde0992b.png)\r\n\r\nA first question: based on the Glottocode, would it be possible to grab the closest ISO639-3 code? In case there is no match for the exact language variety, one needs to explore the higher-level groupings, level by level. For this language (Japhug), the information provided in the extracted CSV file (`glottolog-languoids-v4.6.csv`) is: \r\n`sino1245/burm1265/naqi1236/qian1263/rgya1241/core1262/jiar1240` \r\nOne need not look further than the first higher-level grouping, [`jiar1240`](https://glottolog.org/resource/languoid/id/jiar1240), to get an ISO639-3 code, namely `jya`.\r\n\r\nThus users searching by language names would get ISO639-3 (often less fine-grained than Glottolog) as a bonus.\r\nIt might be possible to ask the Glottolog team to provide this piece of information as part of an export from their database.", "> on line 1 of the database, we can see that the Glottolog database gives an ISO 639-3 code (column ISO639P3code) but the ISO 639 database does not (column 639-3). Do you have an explanation for this phenomenon?\r\n\r\nThat is because the language name 'Aewa' is not found in the Ethnologue (ISO 639-3) export that you are using. [This export in table form](https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab) only has one reference name (`Ref_Name`). For the language at issue, it is not 'Aewa' but ['Awishira'](https://www.ethnologue.com/language/ash).\r\n\r\nBy contrast, the language on line 0 of the database is called 'Abinomn' by both Ethnologue and Glottolog, and accordingly, columns `ISO639P3code` and `639-3` both contain the ISO 639-3 code, `bsa`.\r\n \r\nThe full Ethnologue database records alternate names for each language, and I'd bet that 'Aewa' is recorded among alternate names for the 'Awishira' language. I can't check because the full Ethnologue database is paywalled. \r\n![image](https://user-images.githubusercontent.com/6072524/188461409-e8c48036-df9b-4b56-9609-41cb9c3d3c3c.png)\r\n\r\n[Glottolog](https://glottolog.org/resource/languoid/id/abis1238) does provide the corresponding ISO 639-3 code for 'Aewa', `ash`, which is an exact match (it refers to the same variety as Glottolog `abis1238`).\r\nIn this specific case, Glottolog provides all the relevant information. I'd say that Glottolog can be trusted for all the codes they provide, including ISO 639-3 codes: they only include them when the match is good. 
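Putting the two ideas together (trusting Glottolog's `ISO639P3code` column, and walking up the classification path when there is no exact match), here is a rough, purely illustrative sketch; the two lookup tables are assumed to have been built from the Glottolog CSV export quoted above:

```python
# Illustrative sketch: find the nearest ISO 639-3 code for a glottocode.
# `iso_by_glottocode` maps glottocodes to ISO 639-3 codes (from the
# ISO639P3code column); `classification_path` maps glottocodes to their
# ancestry string, e.g. "sino1245/.../jiar1240" for japh1234.
def nearest_iso639_3(glottocode, iso_by_glottocode, classification_path):
    direct = iso_by_glottocode.get(glottocode)
    if direct:
        return direct
    # no exact match: walk the ancestry from most specific to most general
    for ancestor in reversed(classification_path.get(glottocode, "").split("/")):
        iso = iso_by_glottocode.get(ancestor)
        if iso:
            return iso
    return None

# With the data quoted above, nearest_iso639_3("japh1234", ...) returns "jya"
# via the first higher-level grouping, jiar1240.
```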
\r\n\r\nSee previous comment about the cases where there is no exact match between Glottolog and ISO 639-3 (suggested workaround: look at a higher-level grouping to get an ISO 639-3 code).", "I will add these two points to my TODO list.\r\n- Since Glottolog can be trusted, I will add a condition to the code: if there is no ISO 639-3 code in the \"official\" database (https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab), look for it in the \"ISO639P3code\" column of Glottolog.\r\n- For the point of adding the closest ISO 639-3 code for a Glottolog code, what convention should be adopted for the output? Just the ISO 639-3 code, or the ISO 639-3 code - Glottolog code, or the ISO 639-3 code - language name?\r\nTo use the example of `Japhug`, should it be just `jya`, or `jya-japh1234` or `jya-Japhug`?", "> * Integrate ISO 3166-2 subdivisions (https://www.iso.org/obp/ui#iso:pub:PUB500001:en)? They offer a finer granularity than ISO 3166-1, which is limited to the country level, but they are very administrative (for France, ISO 3166-2 gives us the \"départements\", for example).\r\n\r\nI'm concerned with this sort of exploration, not because I am against innovation. In fact this is an interesting thought exercise. However, to explore this thought further creates cognitive dissonance between BCP-47 authorized codes and other code sets which are not BCP-47 compliant. For that reason, I think adding additional codes is a waste of time both for HF devs and for future users, who would get a confusing idea about language tagging. ", "Good job on the application!\r\n\r\n> On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone \r\n Is there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\", but would this be something accepted by linguists?\r\n Given, from the first post in this thread, that there are about 8,000 languages, if one considers that a given language can be spoken by a speaker of the other 7,999, that would theoretically make about 64 million BCP-47 language1-language2 codes. And even many more if we consider regionalisms, with language1_regionalism_x-language2_regionalism_y. I guess there is no such database.\r\n\r\n> Yes, you noted the difficulty here: that there are so many possible situations. Eventually, each dataset would require descriptors of its own. @BenjaminGalliot points out that, in addition to specifying the speakers' native languages, the degree of language proficiency would also be relevant. How many years did the speakers spend in which area? Talking which languages? In what chronological order? Etc. The complexity defies encoding. The purpose of language codes is to allow for searches that group resources into sets that make sense. 
Additional information is very important, but would seem to be a matter for 'comments' fields.\r\n\r\nTo briefly complete what I said on this subject in a private discussion group: there is a lot of (meta)data associated with each element of a corpus (which language level, according to which criteria, knowing that even among native speakers there are differences, some of which may go beyond what seems obvious to us from a linguistic point of view, such as socio-professional category, life history, environment in the broad sense, etc.), which can be placed in ad-hoc columns, or more freely in a comment/note column. And it is the role of the researcher (in this case a linguist, in all likelihood) to do analyses (statistics...) to determine the relevant data, including criteria that may justify separating different languages (in the broad sense), making separate corpora, etc. Putting this information in the language code is, in my opinion, doing the job in the opposite and wrong direction, as well as bringing other problems, like where to stop in the list of multidimensional criteria to be integrated. So here, in my opinion, the minimum is best (the important thing being to have well-documented data, globally, by sub-corpus or by line)...\r\n\r\n> If you are going to use Glottolog codes use them after an -x- tag in the BCP-47 format to maintain BCP-47 validity.\r\n\r\nYes, for the current corpora, I have written:\r\n\r\n```\r\nlanguage:\r\n- jya\r\n- nru\r\nlanguage_bcp47:\r\n- x-japh1234\r\n- x-yong1288\r\n```\r\n\r\n> * Add autoglottonyms? (I only handle English language names for the moment)\r\n\r\nAutoglossonyms are useful (I use them prior to other glossonyms), but I'm not sure there is an easy way to retrieve them. We can find some of them in the \"Alternative Names\" panel of Glottolog, but even if we have an API to retrieve them easily, their associated language code will often not be the one we are in (hence the need to do several cycles to find one, which might not be the right one...). Maybe this problem needs more investigation...\r\n\r\n> For the point of adding the closest ISO 639-3 code for a Glottolog code, what convention should be adopted for the output? Just the ISO 639-3 code, or the ISO 639-3 code - Glottolog code, or the ISO 639-3 code - language name?\r\nTo use the example of Japhug, should it be just jya, or jya-japh1234 or jya-Japhug?\r\n\r\nI strongly insist on not adding **a** language name after the code; it would restart a spiral of problems, notably the choice of the language in question:\r\n* the autoglossonym: in my opinion the best choice, but you have to know it…\r\n* the English name: iniquitous,\r\n* the name in the administratively/politically dominant language of the target language if it is relevant (strictly localized without overlapping, for example): iniquitous and tendentious (and in a way a special case of the previous one)...\r\n* etc.\r\n", "> To get a visual idea of these first exchanges, I coded a Streamlit app that I put online on Spaces: https://huggingface.co/spaces/lbourdois/Language-tags-demo.\r\n> The code is in Python, so I don't know if it can be used by HF, who seem to need something in Node.js, but it serves as a proof of concept. The advantage is also that you can directly test ideas by entering things in a search bar and seeing what comes up.\r\n\r\nThis is really great. You're doing a fantastic job. I love watching the creative process evolve. It is exciting. 
Let me provide some links to some search interfaces for further inspiration. I always find it helpful to know how others have approached a problem when figuring out my own approach. I will link to three examples: Glottolog, r12a's language sub-tag chooser, and the FLEx project builder wizard. The first two are online, but the last one is in an application which must be downloaded and works only on Windows or Linux. I have placed some notes on each of the screenshots.\r\n\r\n* **[Glottolog](https://glottolog.org/)** | [Search Query](https://glottolog.org/glottolog?name=en&namequerytype=part&multilingual=on#2/20.9/150.0) \r\n\r\n![Glottolog1](https://user-images.githubusercontent.com/40230/188494425-84ee6ecf-6868-4684-a4ae-008973f3b367.png)\r\n![Glottolog2](https://user-images.githubusercontent.com/40230/188494426-fc1c225c-f99a-46b5-a1aa-950cf7912ce3.png)\r\n\r\n\r\n* **[r12a language sub-tag chooser](https://r12a.github.io/app-subtags/)** | [Code on github](https://github.com/r12a/app-subtags)\r\n\r\n![r12a1](https://user-images.githubusercontent.com/40230/188495349-8e53be68-8433-46ff-a0c7-c2f6e25458b6.png)\r\n\r\n\r\n* **FLEx Language Chooser** | [application page](https://software.sil.org/fieldworks/)\r\n![FLEx1](https://user-images.githubusercontent.com/40230/188499742-82c5601e-7e37-4863-bd63-8bff8c0694e3.png)\r\n\r\n", "> In practice, if we enter `fr-CA`, the letters before the `-` refer to a language in the `Name` column (matching `639-1` == `fr`, or `639-3` for `fra` or `fre`) in the database of my image above. Then I look at the letters after the `-`, which refer to a territory. It comes out as `French (Canada)`. I used https://cldr.unicode.org/translation/displaynames/languagelocale-name-patterns for the pattern.\r\n\r\nWhat you are doing is looking at the algorithm for locale generation rather than BCP-47's original documentation. I'm not sure there are differences; there might be. I know that locale IDs generally follow BCP-47, but I think there are some differences, such as the use of `_` vs. `-`. ", "> A first question: based on the Glottocode, would it be possible to grab the closest ISO639-3 code? In case there is no match for the exact language variety, one needs to explore the higher-level groupings, level by level. For this language (Japhug), the information provided in the extracted CSV file (`glottolog-languoids-v4.6.csv`) is: `sino1245/burm1265/naqi1236/qian1263/rgya1241/core1262/jiar1240` One need not look further than the first higher-level grouping, [`jiar1240`](https://glottolog.org/resource/languoid/id/jiar1240), to get an ISO639-3 code, namely `jya`. Thus users searching by language names would get ISO639-3 (often less fine-grained than Glottolog) as a bonus. It might be possible to ask the Glottolog team to provide this piece of information as part of an export from their database.\r\n\r\nThis is logical, but the fine-grained assertions are not the same. That is, just because they are in a hierarchical structure today doesn't mean they will be tomorrow. In some cases Glottolog is clearly referring to sub-language variants which will never receive full language status, whereas in other cases Glottolog is referring to unequal entities, one or more of which should be a language. Many of the codes in Glottolog have no associated documentation indicating what sort of speech variety they are. 
", "@lbourdois \r\n> * Since Glottolog can be trust, I will add a condition to the code that if there is no ISO 639-3 code in the \"official\" database (https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab), look for it in the \"ISO639P3code\" column of Glottolog.\r\n\r\nI'm confused here... if there is no ISO639-3 code in the official database from the registrar, why would you look for an \"unofficial\" code from someone else? What is the use case here?", "> For the point of adding the closest ISO 639-3 code for a Glottolog code, what convention should be adopted for the output? Just the ISO 639-3 code, or the ISO 639-3 code - Glottolog code, or the ISO 639-3 code - language name?\r\nTo use the example of Japhug , should it be just jya, or jya-japh1234 or jya-Japhug?\r\n\r\n(answer edited in view of [Benjamin Galliot's comment](https://github.com/huggingface/datasets/issues/4881#issuecomment-1237420600) \r\nEasy part of the answer first: jya-Japhug is out, because, as @BenjaminGalliot pointed out above, mixing language names with language codes will make trouble. For Japhug, `jya-Japhug` looks rather good: the pair looks nice, the one (`jya`) packed together, the other (`Japhug`) good and complete while still pretty compact. But think about languages like 'Yongning Na' or 'Yucatán Maya': a code with a space in the middle, like `nru-Yongning Na`, is really unsightly and unwieldy, not?\r\n\r\nSome [principles for language naming in English](http://hdl.handle.net/10125/24725) have been put forward, with some linguistic arguments, but always supposing that such standardization is desirable, actual standardization of language names in English may well never happen.\r\n\r\nAs for `jya-japh1234`: again, at first sight it seems cute, combining two fierce competitors (Ethnologue and Glottolog) into something that gets the best of both worlds. \r\nBut @HughP has a point: _adding additional codes is a waste of time both for HF devs and for future users who get a confusing idea about language tagging_ Strong wording, for an important comment: better stick with BCP 47. \r\n\r\nSo the solution pointed out by Benjamin, from Frances Gillis-Webber and Sabine Tittel, looks attractive: \r\njya-x-japh1234\r\n\r\nOn the other hand, if the idea for HF Datasets is simply to add the closest ISO 639-3 code for a Glottolog code, maybe it could be provided simply in three letters: providing the 'raw' ISO 639-3 code `jya`. 
Availability of 'straight' ISO 639-3 codes could save trouble for some users, and those who want more detail could look at the rest of the metadata and general information associated with datasets.", "The problem seems to have already been raised here: https://drops.dagstuhl.de/opus/volltexte/2019/10368/pdf/OASIcs-LDK-2019-4.pdf\r\n\r\nAn example can be seen here:\r\n\r\n> 3.1.2 The use of privateuse sub-tag\r\nIn light of unambiguous language codes being available for the two Khoisan varieties, we\r\npropose to combine the ISO 639-3 code for the parent language N‖ng, i.e., ‘ngh’, with the\r\nprivateuse sub-tag ‘x-’ and the respective Glottocodes stated above.\r\nThe language tags for N|uu and ‖’Au can then be defined accordingly:\r\nN|uu: ngh-x-nuuu1242\r\n‖’Au: ngh-x-auni1243\r\n\r\nBy the way, while searching for this, I came across this application: https://huggingface.co/spaces/cdleong/langcode-search", "> > * Since Glottolog can be trusted, I will add a condition to the code: if there is no ISO 639-3 code in the \"official\" database (https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab), look for it in the \"ISO639P3code\" column of Glottolog.\r\n> \r\n> I'm confused here... if there is no ISO 639-3 code in the official database from the registrar, why would you look for an \"unofficial\" code from someone else? What is the use case here?\r\n\r\nHi @HughP, I'm happy to clear up whatever confusion may exist here :innocent: Here is the use case. \r\nGuillaume Jacques (@rgyalrong) put together a sizeable corpus of the Japhug language. It is up on HF Datasets ([here](https://huggingface.co/datasets/Lacito/pangloss/viewer/japh1234)) as well as on Zenodo. \r\n\r\nZenodo is an all-purpose repository without adequate domain-specific metadata (\"[métadonnées métier](https://www.cines.fr/archivage/des-expertises/les-metadonnees/metadonnees-metier/)\"), and the deposits in there are not easy to locate. The Zenodo deposit is intended for a highly specific use case: someone reads about the dataset in a paper, goes to the address on Zenodo and grabs the dataset in one go. \r\n\r\nHF Datasets, on the other hand, allows users to look around among corpora. The Japhug corpus needs proper tagging so that HF Datasets users can find out about it. \r\nJaphug has an entry of its own in Glottolog, whereas it lacks an entry of its own in Ethnologue. Hence the practical usefulness of Glottolog. Ethnologue pools together, under the code `jya`, three different languages (Japhug, Tshobdun `tsho1240` and Zbu `zbua1234`). \r\n\r\nI hope that this helps.", "> By the way, while searching for this, I came across this application: https://huggingface.co/spaces/cdleong/langcode-search\r\n\r\nReally relevant Space, so tagging its author @cdleong, just in case!", "@cdleong A one-stop shop for language codes: terrific!\r\nHow do you feel about the use of Glottocodes? When searching the language names 'Japhug' and 'Yongning Na' (real examples, related to an HF Datasets deposit & various research projects), the relevant Glottocodes are retrieved, and that is great (and not that easy, notably with the space in the middle of 'Yongning Na'). But this positive result is 'hidden' in the results page. 
Specifically: \r\n\r\n- for Japhug: when searching by language name ('Japhug'), the result in big print is 'Failure', even though there is an available Glottocode (at bottom).\r\n![image](https://user-images.githubusercontent.com/6072524/188604619-a5032f53-6d2c-4751-b83b-bf70a5bf3b22.png)\r\nWhen searching by Glottocode (japh1234), same outcome: 'Result: failure!' (even though this _is_ the right Glottocode).\r\nWhen searching by x-japh1234 (the Glottocode encapsulated in BCP-47 syntax), one gets the message \r\n\r\n> \"'x-japh1234' parses meaningfully as a language tag according to IANA\"\r\n\r\nbut there is paradoxically no link provided to Glottolog: the 'Glottolog' part of the results page is empty.\r\n![image](https://user-images.githubusercontent.com/6072524/188605698-91a39982-ae70-4c48-94ae-cceeb06c25f5.png)\r\n\r\n- Yongning Na: the correct code is identified (yong1288), but instead of foregrounding this exact match, the first result that comes up is a completely different language, called 'Yong'. \r\n\r\nTrying to formulate a conclusion (admittedly, this note is not based on intensive testing, it is just feedback on initial contact): from a user perspective, it seems that the tool could make more extensive use of Glottolog. `langcode-search` does a great job querying Glottolog; why not make more extensive use of that information? (including: to arrive at the nearest ISO 639-3 code)" ]
https://api.github.com/repos/huggingface/datasets/issues/3387
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3387/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3387/comments
https://api.github.com/repos/huggingface/datasets/issues/3387/events
https://github.com/huggingface/datasets/pull/3387
1,071,836,456
PR_kwDODunzps4vbAyC
3,387
Create Language Modeling task
[]
closed
false
null
0
2021-12-06T07:56:07Z
2021-12-17T17:18:28Z
2021-12-17T17:18:27Z
null
Create Language Modeling task to be able to specify the input "text" column in a dataset. This can be useful for datasets which are not exclusively used for language modeling and have more than one column:\r\n- for text classification datasets (with columns "review" and "rating", for example), the Language Modeling task can be used to specify the "text" column ("review" in this case).\r\n\r\nTODO:\r\n- [ ] Add the LanguageModeling task to all dataset scripts which can be used for language modeling
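A rough sketch of the intended usage, assuming the task template ends up in `datasets.tasks` like the existing ones (the dataset name below is hypothetical, and the names follow this PR's description, so they may differ from the merged code):

```python
# Hypothetical usage sketch for the task template proposed in this PR:
# a dataset script declares which of its columns holds the raw text.
from datasets import load_dataset
from datasets.tasks import LanguageModeling

# For a review dataset whose text lives in a "review" column, the script's
# DatasetInfo would carry:
#   task_templates=[LanguageModeling(text_column="review")]
# so that users can then do:
dataset = load_dataset("some_reviews_dataset", split="train")  # hypothetical name
lm_dataset = dataset.prepare_for_task("language-modeling")
# lm_dataset now exposes a single "text" column, renamed from "review".
```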
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3387/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3387/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3387.diff", "html_url": "https://github.com/huggingface/datasets/pull/3387", "merged_at": "2021-12-17T17:18:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/3387.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3387" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/6082
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6082/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6082/comments
https://api.github.com/repos/huggingface/datasets/issues/6082/events
https://github.com/huggingface/datasets/pull/6082
1,824,819,672
PR_kwDODunzps5WkdIn
6,082
Release: 2.14.1
[]
closed
false
null
4
2023-07-27T17:05:54Z
2023-07-27T17:18:17Z
2023-07-27T17:08:38Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6082/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6082/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/6082.diff", "html_url": "https://github.com/huggingface/datasets/pull/6082", "merged_at": "2023-07-27T17:08:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/6082.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6082" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6082). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007215 / 0.011353 (-0.004138) | 0.004101 / 0.011008 (-0.006907) | 0.085884 / 0.038508 (0.047376) | 0.085375 / 0.023109 (0.062266) | 0.351610 / 0.275898 (0.075712) | 0.399284 / 0.323480 (0.075804) | 0.005598 / 0.007986 (-0.002388) | 0.003405 / 0.004328 (-0.000923) | 0.064906 / 0.004250 (0.060656) | 0.059000 / 0.037052 (0.021948) | 0.354589 / 0.258489 (0.096100) | 0.406070 / 0.293841 (0.112229) | 0.031627 / 0.128546 (-0.096919) | 0.008597 / 0.075646 (-0.067049) | 0.291050 / 0.419271 (-0.128221) | 0.054120 / 0.043533 (0.010587) | 0.366242 / 0.255139 (0.111103) | 0.375975 / 0.283200 (0.092776) | 0.025608 / 0.141683 (-0.116074) | 1.473514 / 1.452155 (0.021359) | 1.543226 / 1.492716 (0.050510) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198068 / 0.018006 (0.180062) | 0.450583 / 0.000490 (0.450093) | 0.005368 / 0.000200 (0.005168) | 0.000102 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028323 / 0.037411 (-0.009089) | 0.089058 / 0.014526 (0.074533) | 0.097718 / 0.176557 (-0.078839) | 0.154546 / 0.737135 (-0.582590) | 0.098224 / 0.296338 (-0.198115) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.386292 / 0.215209 (0.171083) | 3.846222 / 2.077655 (1.768567) | 1.858695 / 1.504120 (0.354575) | 1.685885 / 1.541195 (0.144690) | 1.790727 / 1.468490 (0.322237) | 0.486771 / 4.584777 (-4.098006) | 3.658363 / 3.745712 (-0.087349) | 5.345236 / 5.269862 (0.075374) | 3.215942 / 4.565676 (-1.349734) | 0.057580 / 0.424275 (-0.366695) | 0.007382 / 0.007607 (-0.000225) | 0.464174 / 0.226044 (0.238129) | 4.640848 / 2.268929 (2.371920) | 2.383152 / 55.444624 (-53.061472) | 2.013288 / 6.876477 (-4.863188) | 2.244142 / 2.142072 (0.102069) | 0.585408 / 4.805227 (-4.219819) | 0.134698 / 6.500664 (-6.365966) | 0.060641 / 0.075469 (-0.014828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.258414 / 1.841788 (-0.583374) | 19.825848 / 8.074308 (11.751540) | 14.644025 / 10.191392 (4.452633) | 0.169198 / 0.680424 (-0.511226) | 0.018180 / 0.534201 (-0.516021) | 0.395100 / 0.579283 (-0.184183) | 0.411543 / 0.434364 (-0.022821) | 0.463364 / 0.540337 (-0.076973) | 0.628613 / 1.386936 (-0.758323) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006860 / 0.011353 (-0.004493) | 0.003981 / 0.011008 (-0.007027) | 0.065589 / 0.038508 (0.027081) | 0.082460 / 0.023109 (0.059350) | 0.362980 / 0.275898 (0.087082) | 0.394837 / 0.323480 (0.071357) | 0.005298 / 0.007986 (-0.002688) | 0.003372 / 0.004328 (-0.000957) | 0.064918 / 0.004250 (0.060667) | 0.058033 / 0.037052 (0.020981) | 0.367259 / 0.258489 (0.108770) | 0.403122 / 0.293841 (0.109281) | 0.031566 / 0.128546 (-0.096980) | 0.008583 / 0.075646 (-0.067063) | 0.071287 / 0.419271 (-0.347984) | 0.049586 / 0.043533 (0.006053) | 0.359252 / 0.255139 (0.104113) | 0.378519 / 0.283200 (0.095319) | 0.023412 / 0.141683 (-0.118271) | 1.494522 / 1.452155 (0.042367) | 1.559176 / 1.492716 (0.066460) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228396 / 0.018006 (0.210390) | 0.441865 / 0.000490 (0.441375) | 0.000395 / 0.000200 
(0.000195) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031169 / 0.037411 (-0.006242) | 0.093427 / 0.014526 (0.078901) | 0.100673 / 0.176557 (-0.075883) | 0.152817 / 0.737135 (-0.584319) | 0.102226 / 0.296338 (-0.194112) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437032 / 0.215209 (0.221823) | 4.376078 / 2.077655 (2.298423) | 2.346928 / 1.504120 (0.842808) | 2.168573 / 1.541195 (0.627378) | 2.261024 / 1.468490 (0.792534) | 0.497080 / 4.584777 (-4.087697) | 3.594402 / 3.745712 (-0.151310) | 5.090361 / 5.269862 (-0.179501) | 3.034750 / 4.565676 (-1.530927) | 0.058538 / 0.424275 (-0.365737) | 0.007892 / 0.007607 (0.000285) | 0.517643 / 0.226044 (0.291598) | 5.173174 / 2.268929 (2.904246) | 2.825917 / 55.444624 (-52.618708) | 2.542593 / 6.876477 (-4.333884) | 2.716290 / 2.142072 (0.574218) | 0.598253 / 4.805227 (-4.206974) | 0.135610 / 6.500664 (-6.365054) | 0.062113 / 0.075469 (-0.013356) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.389554 / 1.841788 (-0.452233) | 20.412868 / 8.074308 (12.338560) | 14.539988 / 10.191392 (4.348596) | 0.162046 / 0.680424 (-0.518378) | 0.018508 / 0.534201 (-0.515693) | 0.398840 / 0.579283 (-0.180443) | 0.400902 / 0.434364 (-0.033462) | 0.463647 / 0.540337 (-0.076691) | 0.612921 / 1.386936 (-0.774015) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#45bef1810d9341ba4cb27547d748fddb97843792 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated 
after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005943 / 0.011353 (-0.005410) | 0.003582 / 0.011008 (-0.007426) | 0.080030 / 0.038508 (0.041522) | 0.057458 / 0.023109 (0.034349) | 0.390783 / 0.275898 (0.114885) | 0.430926 / 0.323480 (0.107446) | 0.003207 / 0.007986 (-0.004778) | 0.003592 / 0.004328 (-0.000737) | 0.062468 / 0.004250 (0.058217) | 0.046739 / 0.037052 (0.009687) | 0.394343 / 0.258489 (0.135854) | 0.435912 / 0.293841 (0.142071) | 0.026812 / 0.128546 (-0.101734) | 0.007954 / 0.075646 (-0.067692) | 0.261415 / 0.419271 (-0.157857) | 0.044665 / 0.043533 (0.001132) | 0.403454 / 0.255139 (0.148315) | 0.418946 / 0.283200 (0.135747) | 0.022247 / 0.141683 (-0.119436) | 1.456387 / 1.452155 (0.004232) | 1.508234 / 1.492716 (0.015518) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.182487 / 0.018006 (0.164480) | 0.416343 / 0.000490 (0.415854) | 0.001404 / 0.000200 (0.001204) | 0.000062 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023643 / 0.037411 (-0.013768) | 0.071798 / 0.014526 (0.057272) | 0.083623 / 0.176557 (-0.092933) | 0.146023 / 0.737135 (-0.591112) | 0.083094 / 0.296338 (-0.213245) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417484 / 0.215209 (0.202275) | 4.157393 / 2.077655 (2.079738) | 1.950438 / 1.504120 (0.446318) | 1.766639 / 1.541195 (0.225444) | 1.807382 / 1.468490 (0.338892) | 0.496061 / 4.584777 (-4.088716) | 2.975001 / 3.745712 (-0.770711) | 3.340608 / 5.269862 (-1.929254) | 2.236293 / 4.565676 (-2.329384) | 0.056946 / 0.424275 (-0.367329) | 0.006506 / 0.007607 (-0.001101) | 0.480377 / 0.226044 (0.254332) | 4.788525 / 2.268929 (2.519597) | 2.430139 / 55.444624 (-53.014485) | 2.154145 / 6.876477 (-4.722332) | 2.321623 / 2.142072 (0.179551) | 0.584040 / 4.805227 (-4.221188) | 0.124508 / 6.500664 (-6.376156) | 0.060828 / 0.075469 (-0.014641) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.201641 / 1.841788 (-0.640146) | 18.066232 / 8.074308 (9.991924) | 14.022304 / 10.191392 (3.830912) | 0.146573 / 0.680424 (-0.533850) | 0.016892 / 0.534201 (-0.517308) | 0.333259 / 0.579283 (-0.246024) | 
0.357795 / 0.434364 (-0.076568) | 0.391265 / 0.540337 (-0.149072) | 0.551378 / 1.386936 (-0.835558) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005706 / 0.011353 (-0.005647) | 0.003448 / 0.011008 (-0.007560) | 0.063146 / 0.038508 (0.024638) | 0.056292 / 0.023109 (0.033183) | 0.355533 / 0.275898 (0.079635) | 0.394996 / 0.323480 (0.071517) | 0.004270 / 0.007986 (-0.003716) | 0.002790 / 0.004328 (-0.001538) | 0.063033 / 0.004250 (0.058783) | 0.044684 / 0.037052 (0.007631) | 0.370621 / 0.258489 (0.112132) | 0.401074 / 0.293841 (0.107233) | 0.026737 / 0.128546 (-0.101809) | 0.007872 / 0.075646 (-0.067774) | 0.068815 / 0.419271 (-0.350457) | 0.040976 / 0.043533 (-0.002557) | 0.370733 / 0.255139 (0.115594) | 0.387418 / 0.283200 (0.104218) | 0.018854 / 0.141683 (-0.122829) | 1.479834 / 1.452155 (0.027680) | 1.536388 / 1.492716 (0.043672) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222125 / 0.018006 (0.204119) | 0.408007 / 0.000490 (0.407517) | 0.000367 / 0.000200 (0.000167) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025100 / 0.037411 (-0.012311) | 0.076617 / 0.014526 (0.062091) | 0.088311 / 0.176557 (-0.088246) | 0.143785 / 0.737135 (-0.593350) | 0.088349 / 0.296338 (-0.207989) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419246 / 0.215209 (0.204037) | 4.172413 / 2.077655 (2.094759) | 2.199355 / 1.504120 
(0.695235) | 2.025158 / 1.541195 (0.483963) | 2.074491 / 1.468490 (0.606001) | 0.495893 / 4.584777 (-4.088884) | 2.998858 / 3.745712 (-0.746854) | 2.770531 / 5.269862 (-2.499331) | 1.817497 / 4.565676 (-2.748179) | 0.057317 / 0.424275 (-0.366958) | 0.006723 / 0.007607 (-0.000884) | 0.491062 / 0.226044 (0.265017) | 4.906155 / 2.268929 (2.637226) | 2.654916 / 55.444624 (-52.789708) | 2.299873 / 6.876477 (-4.576604) | 2.451438 / 2.142072 (0.309366) | 0.585048 / 4.805227 (-4.220179) | 0.124778 / 6.500664 (-6.375886) | 0.062067 / 0.075469 (-0.013402) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.298239 / 1.841788 (-0.543549) | 18.090238 / 8.074308 (10.015930) | 13.822568 / 10.191392 (3.631176) | 0.130560 / 0.680424 (-0.549864) | 0.016662 / 0.534201 (-0.517539) | 0.333337 / 0.579283 (-0.245946) | 0.348493 / 0.434364 (-0.085871) | 0.386049 / 0.540337 (-0.154289) | 0.511156 / 1.386936 (-0.875780) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#029956a347b0306cd27f693e12cf9a82acf4ef80 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006014 / 0.011353 (-0.005339) | 0.003623 / 0.011008 (-0.007385) | 0.080500 / 0.038508 (0.041992) | 0.057713 / 0.023109 (0.034603) | 0.325976 / 0.275898 (0.050078) | 0.359986 / 0.323480 (0.036506) | 0.004709 / 0.007986 (-0.003277) | 0.002933 / 0.004328 (-0.001395) | 0.063457 / 0.004250 (0.059207) | 0.047514 / 0.037052 (0.010462) | 0.331629 / 0.258489 (0.073140) | 0.382048 / 0.293841 (0.088207) | 0.026949 / 0.128546 (-0.101597) | 0.008043 / 0.075646 (-0.067604) | 0.262152 / 0.419271 (-0.157119) | 0.045271 / 0.043533 (0.001738) | 0.333355 / 0.255139 (0.078216) | 0.347996 / 0.283200 (0.064796) | 0.020814 / 0.141683 (-0.120868) | 1.460723 / 1.452155 (0.008568) | 1.488845 / 1.492716 (-0.003872) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193735 / 0.018006 
(0.175728) | 0.431433 / 0.000490 (0.430943) | 0.002494 / 0.000200 (0.002294) | 0.000066 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023762 / 0.037411 (-0.013650) | 0.072680 / 0.014526 (0.058154) | 0.081687 / 0.176557 (-0.094869) | 0.143224 / 0.737135 (-0.593911) | 0.083083 / 0.296338 (-0.213255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397393 / 0.215209 (0.182184) | 3.954643 / 2.077655 (1.876989) | 1.950038 / 1.504120 (0.445919) | 1.760551 / 1.541195 (0.219357) | 1.871165 / 1.468490 (0.402675) | 0.508645 / 4.584777 (-4.076132) | 3.114379 / 3.745712 (-0.631333) | 3.474554 / 5.269862 (-1.795307) | 2.090126 / 4.565676 (-2.475551) | 0.058008 / 0.424275 (-0.366267) | 0.006465 / 0.007607 (-0.001142) | 0.475009 / 0.226044 (0.248965) | 4.767981 / 2.268929 (2.499052) | 2.372050 / 55.444624 (-53.072574) | 2.038094 / 6.876477 (-4.838383) | 2.072819 / 2.142072 (-0.069253) | 0.591913 / 4.805227 (-4.213314) | 0.125002 / 6.500664 (-6.375662) | 0.060055 / 0.075469 (-0.015414) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.234171 / 1.841788 (-0.607617) | 18.121476 / 8.074308 (10.047168) | 13.727313 / 10.191392 (3.535921) | 0.136021 / 0.680424 (-0.544402) | 0.016505 / 0.534201 (-0.517696) | 0.331400 / 0.579283 (-0.247883) | 0.346019 / 0.434364 (-0.088345) | 0.378985 / 0.540337 (-0.161353) | 0.522606 / 1.386936 (-0.864330) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | 
write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006035 / 0.011353 (-0.005318) | 0.003584 / 0.011008 (-0.007425) | 0.061953 / 0.038508 (0.023445) | 0.059416 / 0.023109 (0.036307) | 0.359380 / 0.275898 (0.083482) | 0.396842 / 0.323480 (0.073363) | 0.004716 / 0.007986 (-0.003269) | 0.002825 / 0.004328 (-0.001504) | 0.061697 / 0.004250 (0.057447) | 0.049009 / 0.037052 (0.011956) | 0.363099 / 0.258489 (0.104610) | 0.403672 / 0.293841 (0.109831) | 0.027722 / 0.128546 (-0.100824) | 0.007966 / 0.075646 (-0.067680) | 0.067455 / 0.419271 (-0.351816) | 0.042530 / 0.043533 (-0.001003) | 0.361257 / 0.255139 (0.106118) | 0.388957 / 0.283200 (0.105758) | 0.021845 / 0.141683 (-0.119838) | 1.431989 / 1.452155 (-0.020166) | 1.503131 / 1.492716 (0.010415) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241493 / 0.018006 (0.223487) | 0.429319 / 0.000490 (0.428829) | 0.002604 / 0.000200 (0.002404) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026227 / 0.037411 (-0.011184) | 0.077177 / 0.014526 (0.062651) | 0.085840 / 0.176557 (-0.090717) | 0.142280 / 0.737135 (-0.594855) | 0.088465 / 0.296338 (-0.207873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434912 / 0.215209 (0.219703) | 4.339664 / 2.077655 (2.262009) | 2.242495 / 1.504120 (0.738375) | 2.091353 / 1.541195 (0.550159) | 2.161425 / 1.468490 (0.692935) | 0.501647 / 4.584777 (-4.083130) | 3.075326 / 3.745712 (-0.670386) | 4.091557 / 5.269862 (-1.178304) | 2.776425 / 4.565676 (-1.789251) | 0.057338 / 0.424275 (-0.366937) | 0.006767 / 0.007607 (-0.000840) | 0.506882 / 0.226044 (0.280837) | 5.059074 / 2.268929 (2.790146) | 2.706665 / 55.444624 (-52.737959) | 2.370253 / 6.876477 (-4.506224) | 2.505421 / 2.142072 (0.363348) | 0.590289 / 4.805227 (-4.214938) | 0.125990 / 6.500664 (-6.374674) | 0.062778 / 0.075469 (-0.012691) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.361287 / 1.841788 (-0.480501) | 18.500726 / 8.074308 (10.426418) | 13.844459 / 10.191392 (3.653067) | 0.144416 / 0.680424 (-0.536008) | 0.016987 / 0.534201 (-0.517214) | 0.336237 / 0.579283 (-0.243046) | 0.357116 / 0.434364 (-0.077248) | 0.402062 / 0.540337 (-0.138275) | 0.543066 / 1.386936 
(-0.843870) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#029956a347b0306cd27f693e12cf9a82acf4ef80 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007559 / 0.011353 (-0.003794) | 0.004379 / 0.011008 (-0.006629) | 0.089702 / 0.038508 (0.051194) | 0.065104 / 0.023109 (0.041995) | 0.362016 / 0.275898 (0.086118) | 0.376768 / 0.323480 (0.053288) | 0.006538 / 0.007986 (-0.001447) | 0.004167 / 0.004328 (-0.000161) | 0.074138 / 0.004250 (0.069888) | 0.052753 / 0.037052 (0.015701) | 0.366367 / 0.258489 (0.107878) | 0.389121 / 0.293841 (0.095280) | 0.042820 / 0.128546 (-0.085727) | 0.012560 / 0.075646 (-0.063086) | 0.359235 / 0.419271 (-0.060037) | 0.074250 / 0.043533 (0.030718) | 0.384051 / 0.255139 (0.128912) | 0.385450 / 0.283200 (0.102250) | 0.046270 / 0.141683 (-0.095413) | 1.593275 / 1.452155 (0.141120) | 1.704207 / 1.492716 (0.211490) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249390 / 0.018006 (0.231384) | 0.614347 / 0.000490 (0.613857) | 0.012641 / 0.000200 (0.012441) | 0.000126 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029099 / 0.037411 (-0.008312) | 0.090966 / 0.014526 (0.076440) | 0.102273 / 0.176557 (-0.074284) | 0.167564 / 0.737135 (-0.569571) | 0.106118 / 0.296338 (-0.190220) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.536122 / 
0.215209 (0.320913) | 5.448464 / 2.077655 (3.370809) | 2.461977 / 1.504120 (0.957857) | 2.081506 / 1.541195 (0.540311) | 2.091509 / 1.468490 (0.623019) | 0.810307 / 4.584777 (-3.774470) | 5.161304 / 3.745712 (1.415592) | 4.525070 / 5.269862 (-0.744792) | 2.886313 / 4.565676 (-1.679363) | 0.093992 / 0.424275 (-0.330283) | 0.008516 / 0.007607 (0.000909) | 0.691978 / 0.226044 (0.465934) | 6.834665 / 2.268929 (4.565737) | 3.284355 / 55.444624 (-52.160270) | 2.496803 / 6.876477 (-4.379674) | 2.814387 / 2.142072 (0.672315) | 0.985300 / 4.805227 (-3.819928) | 0.210343 / 6.500664 (-6.290321) | 0.075459 / 0.075469 (-0.000010) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.436073 / 1.841788 (-0.405714) | 22.722401 / 8.074308 (14.648093) | 19.988521 / 10.191392 (9.797129) | 0.229757 / 0.680424 (-0.450667) | 0.029672 / 0.534201 (-0.504529) | 0.479914 / 0.579283 (-0.099369) | 0.605106 / 0.434364 (0.170743) | 0.511668 / 0.540337 (-0.028670) | 0.800281 / 1.386936 (-0.586655) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008665 / 0.011353 (-0.002688) | 0.006009 / 0.011008 (-0.004999) | 0.073377 / 0.038508 (0.034869) | 0.077188 / 0.023109 (0.054079) | 0.451422 / 0.275898 (0.175524) | 0.484640 / 0.323480 (0.161160) | 0.006266 / 0.007986 (-0.001719) | 0.004129 / 0.004328 (-0.000200) | 0.063102 / 0.004250 (0.058851) | 0.064653 / 0.037052 (0.027601) | 0.439521 / 0.258489 (0.181032) | 0.458964 / 0.293841 (0.165123) | 0.046018 / 0.128546 (-0.082528) | 0.014109 / 0.075646 (-0.061537) | 0.095727 / 0.419271 (-0.323544) | 0.070133 / 0.043533 (0.026600) | 0.440143 / 0.255139 (0.185004) | 0.502468 / 0.283200 (0.219269) | 0.034582 / 0.141683 (-0.107101) | 1.656282 / 1.452155 (0.204127) | 1.784641 / 1.492716 (0.291925) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.303111 / 0.018006 (0.285105) | 0.599194 / 0.000490 (0.598705) | 0.000411 / 0.000200 (0.000211) | 0.000073 / 
0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033061 / 0.037411 (-0.004350) | 0.096073 / 0.014526 (0.081548) | 0.095347 / 0.176557 (-0.081209) | 0.161004 / 0.737135 (-0.576131) | 0.111544 / 0.296338 (-0.184794) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.615695 / 0.215209 (0.400486) | 5.794243 / 2.077655 (3.716588) | 2.594720 / 1.504120 (1.090600) | 2.566255 / 1.541195 (1.025060) | 2.573653 / 1.468490 (1.105163) | 0.873653 / 4.584777 (-3.711124) | 5.353323 / 3.745712 (1.607611) | 4.604974 / 5.269862 (-0.664887) | 2.901282 / 4.565676 (-1.664394) | 0.099614 / 0.424275 (-0.324661) | 0.010368 / 0.007607 (0.002761) | 0.775490 / 0.226044 (0.549446) | 7.245449 / 2.268929 (4.976520) | 3.740165 / 55.444624 (-51.704459) | 2.986132 / 6.876477 (-3.890345) | 3.092510 / 2.142072 (0.950438) | 1.022461 / 4.805227 (-3.782766) | 0.212137 / 6.500664 (-6.288527) | 0.084534 / 0.075469 (0.009065) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.687983 / 1.841788 (-0.153805) | 23.491808 / 8.074308 (15.417500) | 20.722165 / 10.191392 (10.530773) | 0.231011 / 0.680424 (-0.449413) | 0.028309 / 0.534201 (-0.505892) | 0.436911 / 0.579283 (-0.142372) | 0.583126 / 0.434364 (0.148762) | 0.559712 / 0.540337 (0.019374) | 0.820645 / 1.386936 (-0.566291) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#029956a347b0306cd27f693e12cf9a82acf4ef80 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence 
| read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006538 / 0.011353 (-0.004815) | 0.003952 / 0.011008 (-0.007056) | 0.084183 / 0.038508 (0.045675) | 0.070616 / 0.023109 (0.047507) | 0.320491 / 0.275898 (0.044593) | 0.352021 / 0.323480 (0.028541) | 0.005330 / 0.007986 (-0.002656) | 0.003400 / 0.004328 (-0.000928) | 0.066392 / 0.004250 (0.062141) | 0.052529 / 0.037052 (0.015477) | 0.329581 / 0.258489 (0.071092) | 0.374437 / 0.293841 (0.080596) | 0.031379 / 0.128546 (-0.097167) | 0.008576 / 0.075646 (-0.067070) | 0.288621 / 0.419271 (-0.130650) | 0.052748 / 0.043533 (0.009215) | 0.319911 / 0.255139 (0.064772) | 0.358169 / 0.283200 (0.074970) | 0.023128 / 0.141683 (-0.118555) | 1.479578 / 1.452155 (0.027424) | 1.566351 / 1.492716 (0.073635) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217616 / 0.018006 (0.199610) | 0.471546 / 0.000490 (0.471056) | 0.003880 / 0.000200 (0.003680) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027716 / 0.037411 (-0.009696) | 0.081718 / 0.014526 (0.067192) | 0.095457 / 0.176557 (-0.081100) | 0.150746 / 0.737135 (-0.586389) | 0.096061 / 0.296338 (-0.200277) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406811 / 0.215209 (0.191602) | 4.062757 / 2.077655 (1.985103) | 2.060658 / 1.504120 (0.556538) | 1.870944 / 1.541195 (0.329749) | 1.908984 / 1.468490 (0.440493) | 0.489053 / 4.584777 (-4.095724) | 3.571038 / 3.745712 (-0.174674) | 3.255351 / 5.269862 (-2.014511) | 2.007078 / 4.565676 (-2.558599) | 0.057078 / 0.424275 (-0.367197) | 0.007240 / 0.007607 (-0.000367) | 0.485641 / 0.226044 (0.259596) | 4.841657 / 2.268929 (2.572729) | 2.569676 / 55.444624 (-52.874949) | 2.151119 / 6.876477 (-4.725357) | 2.330337 / 2.142072 (0.188265) | 0.581721 / 4.805227 (-4.223506) | 0.132591 / 6.500664 (-6.368073) | 0.060491 / 0.075469 (-0.014978) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.237699 / 1.841788 (-0.604089) | 19.460306 / 8.074308 (11.385998) | 14.123006 / 10.191392 (3.931614) | 0.155669 / 0.680424 (-0.524754) | 0.018385 / 0.534201 (-0.515816) | 0.393330 / 0.579283 (-0.185953) | 0.408890 / 0.434364 (-0.025474) | 
0.457348 / 0.540337 (-0.082989) | 0.640293 / 1.386936 (-0.746643) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006582 / 0.011353 (-0.004771) | 0.003950 / 0.011008 (-0.007059) | 0.064636 / 0.038508 (0.026128) | 0.077651 / 0.023109 (0.054541) | 0.365505 / 0.275898 (0.089607) | 0.393370 / 0.323480 (0.069890) | 0.005466 / 0.007986 (-0.002520) | 0.003314 / 0.004328 (-0.001014) | 0.064960 / 0.004250 (0.060710) | 0.057355 / 0.037052 (0.020302) | 0.377773 / 0.258489 (0.119284) | 0.408394 / 0.293841 (0.114553) | 0.031698 / 0.128546 (-0.096848) | 0.008575 / 0.075646 (-0.067071) | 0.070390 / 0.419271 (-0.348881) | 0.050035 / 0.043533 (0.006502) | 0.360461 / 0.255139 (0.105323) | 0.384862 / 0.283200 (0.101662) | 0.025380 / 0.141683 (-0.116303) | 1.484429 / 1.452155 (0.032275) | 1.542944 / 1.492716 (0.050227) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190193 / 0.018006 (0.172187) | 0.468996 / 0.000490 (0.468506) | 0.003012 / 0.000200 (0.002812) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031488 / 0.037411 (-0.005923) | 0.088673 / 0.014526 (0.074147) | 0.101886 / 0.176557 (-0.074670) | 0.156774 / 0.737135 (-0.580361) | 0.102818 / 0.296338 (-0.193520) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428019 / 0.215209 (0.212810) | 4.271369 / 2.077655 (2.193714) | 2.271530 / 1.504120 (0.767410) | 2.085172 / 1.541195 
(0.543977) | 2.143439 / 1.468490 (0.674949) | 0.493468 / 4.584777 (-4.091309) | 3.569030 / 3.745712 (-0.176683) | 4.777962 / 5.269862 (-0.491900) | 2.872115 / 4.565676 (-1.693562) | 0.058200 / 0.424275 (-0.366075) | 0.007657 / 0.007607 (0.000050) | 0.502874 / 0.226044 (0.276830) | 5.026721 / 2.268929 (2.757792) | 2.734301 / 55.444624 (-52.710324) | 2.396072 / 6.876477 (-4.480405) | 2.574322 / 2.142072 (0.432249) | 0.593855 / 4.805227 (-4.211373) | 0.135134 / 6.500664 (-6.365530) | 0.061491 / 0.075469 (-0.013978) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.320522 / 1.841788 (-0.521265) | 19.933221 / 8.074308 (11.858912) | 14.055921 / 10.191392 (3.864529) | 0.149620 / 0.680424 (-0.530804) | 0.018590 / 0.534201 (-0.515611) | 0.399550 / 0.579283 (-0.179733) | 0.410463 / 0.434364 (-0.023901) | 0.469872 / 0.540337 (-0.070465) | 0.616481 / 1.386936 (-0.770455) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#029956a347b0306cd27f693e12cf9a82acf4ef80 \"CML watermark\")\n" ]
https://api.github.com/repos/huggingface/datasets/issues/5648
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5648/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5648/comments
https://api.github.com/repos/huggingface/datasets/issues/5648/events
https://github.com/huggingface/datasets/issues/5648
1,629,253,719
I_kwDODunzps5hHHBX
5,648
flatten_indices doesn't work with pandas format
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
1
2023-03-17T12:44:25Z
2023-03-21T13:12:03Z
null
null
### Describe the bug Hi, I noticed that `flatten_indices` throws an error when the batch format is `pandas`. This is probably because `flatten_indices` uses `map` internally, and `map` doesn't accept dataframes as the transformation function output. ### Steps to reproduce the bug ```python import numpy as np import pandas as pd import datasets tabular_data = pd.DataFrame(np.random.randn(10, 10)) tabular_data = datasets.arrow_dataset.Dataset.from_pandas(tabular_data) tabular_data.with_format("pandas").select([0, 1, 2, 3]).flatten_indices() ``` ### Expected behavior No error thrown ### Environment info - `datasets` version: 2.10.1 - Python version: 3.9.5 - PyArrow version: 11.0.0 - Pandas version: 1.4.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5648/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5648/timeline
null
null
null
null
false
[ "Thanks for reporting! This can be fixed by setting the format to `arrow` in `flatten_indices` and restoring the original format after the flattening. I'm working on a PR that reduces the number of the `flatten_indices` calls in our codebase and makes `flatten_indices` a no-op when a dataset does not have an indices mapping, so I'll incorporate the fix in that PR." ]
https://api.github.com/repos/huggingface/datasets/issues/5537
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5537/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5537/comments
https://api.github.com/repos/huggingface/datasets/issues/5537/events
https://github.com/huggingface/datasets/issues/5537
1,587,567,464
I_kwDODunzps5eoFto
5,537
Increase speed of data files resolution
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues", "id": 3761482852, "name": "good second issue", "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue" } ]
open
false
null
5
2023-02-16T12:11:45Z
2023-04-07T17:32:45Z
null
null
Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever, starting right at the data files resolution step. `datasets` uses file patterns to check the structure of the repository, but iterating over all the data files again and again for each pattern takes too much time. This comes from `resolve_patterns_in_dataset_repository`, which calls `_resolve_single_pattern_in_dataset_repository`, which iterates over all the files at ```python glob_iter = [PurePath(filepath) for filepath in fs.glob(PurePath(pattern).as_posix()) if fs.isfile(filepath)] ``` but calling `glob` on such a dataset is too expensive. Indeed, it calls `ls()` in `hffilesystem.py` too many times. Maybe `glob` can be optimized further in `hffilesystem.py`, or the data files resolution can be implemented directly in the filesystem by checking its `dir_cache`?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5537/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5537/timeline
null
null
null
null
false
[ "#self-assign", "You were right, if `self.dir_cache` is not None in glob, it is exactly the same as what is returned by find, at least for all the tests we have, and some extended evaluation I did across a random sample of about 1000 datasets. \r\n\r\nThanks for the nice hints, and let me know if this is not exactly what we want here!\r\n\r\nsee PR: https://github.com/huggingface/datasets/pull/5704\r\n\r\n", "I think we can make the data files resolution (significantly) faster in 2 steps:\r\n\r\n1. `glob` calls `find` (which in turn calls `ls`), so we need `find` to be fast, and this can be achieved by fetching all the entries in a single API call and avoiding calls to `ls`. Implementing this for `HfFileSystem.find` (the one in `huggingface_hub`) is on my TO-DO list.\r\n2. caching the repeated `find` calls in `_get_data_files_patterns` when the `data_files` patterns are not provided in `load_dataset`. To address this, we can introduce a `_resolve_single_pattern` function that would accept a filesystem object and a list of regex patterns to resolve. Then we can wrap this filesystem object in `_get_data_files_patterns` with an object that would cache the find calls before resolving the patterns with `_resolve_single_pattern`. (Feel free to suggest a cleaner implementation)\r\n\r\nWDYT?", "Good idea :) \r\n\r\nFor 2:\r\n\r\nThat would work ! It's also possible to have a FileSystem with a cache on `.find` and use it inside the resolver passed to `_get_data_files_patterns`. Right now they're pretty simple:\r\n\r\n```python\r\n# for remote repositories\r\nresolver = partial(_resolve_single_pattern_in_dataset_repository, dataset_info, base_path=base_path)\r\n# for local\r\nresolver = partial(_resolve_single_pattern_locally, base_path)\r\n```", "something like this maybe (with Quentin's reimplementation of `HfFilesystem.find`)?\r\n\r\n ```\r\n @lru_cache(max_size=None)\r\n def _find(self, path, maxdepth=None, withdirs=False, detail=False, **kwargs):\r\n```\r\n\r\nIn any case please let me know if I can help in any way!" ]
https://api.github.com/repos/huggingface/datasets/issues/1102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1102/comments
https://api.github.com/repos/huggingface/datasets/issues/1102/events
https://github.com/huggingface/datasets/issues/1102
757,016,515
MDU6SXNzdWU3NTcwMTY1MTU=
1,102
Add retries to download manager
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
0
2020-12-04T11:08:11Z
2020-12-22T15:34:06Z
2020-12-22T15:34:06Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1102/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1102/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/3727
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3727/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3727/comments
https://api.github.com/repos/huggingface/datasets/issues/3727/events
https://github.com/huggingface/datasets/pull/3727
1,138,979,732
PR_kwDODunzps4y34JN
3,727
Patch all module attributes in its namespace
[]
closed
false
null
0
2022-02-15T17:12:27Z
2022-02-17T17:06:18Z
2022-02-17T17:06:17Z
null
When patching module attributes, only those defined in the module's `__all__` variable were considered by default (falling back to `__dict__` only if `__all__` was None). However, those are only a subset of all the module attributes in its namespace (the `__dict__` variable). This PR fixes the problem for modules that have a non-None `__all__` variable but expose an attribute present in `__dict__` (and not in `__all__`). For example, `pandas` has the attribute `__version__` present only in `__dict__`. - Before version 1.4, pandas `__all__` was None, thus all attributes in `__dict__` were patched - From version 1.4, pandas `__all__` is not None, thus attributes in `__dict__` not present in `__all__` are ignored Fix #3724. CC: @severo @lvwerra
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3727/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3727/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3727.diff", "html_url": "https://github.com/huggingface/datasets/pull/3727", "merged_at": "2022-02-17T17:06:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/3727.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3727" }
true
[]
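A small illustration of the `__all__` vs. `__dict__` behavior described in the PR body above; the helper below is a sketch with an assumed name, not the actual `datasets` patching code, and the printed values assume pandas >= 1.4:

```python
import pandas

# pandas >= 1.4 defines a non-None __all__, but __version__ lives only in __dict__:
print("__version__" in (getattr(pandas, "__all__", None) or []))  # False
print("__version__" in vars(pandas))                              # True

def attributes_to_patch(module):
    # Consider every attribute in the module namespace (__dict__),
    # not only the subset exported through __all__.
    return list(vars(module))

print(len(attributes_to_patch(pandas)))  # includes __version__, __doc__, ...
```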
https://api.github.com/repos/huggingface/datasets/issues/538
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/538/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/538/comments
https://api.github.com/repos/huggingface/datasets/issues/538/events
https://github.com/huggingface/datasets/pull/538
688,015,912
MDExOlB1bGxSZXF1ZXN0NDc1MzU3MjY2
538
[logging] Add centralized logging - Bump-up cache loads to warnings
[]
closed
false
null
0
2020-08-28T11:42:29Z
2020-08-31T11:42:51Z
2020-08-31T11:42:51Z
null
Add a `nlp.logging` module to set the global logging level easily. The verbosity level also controls the tqdm bars (disabled when set higher than INFO). You can use: ``` nlp.logging.set_verbosity(verbosity: int) nlp.logging.set_verbosity_info() nlp.logging.set_verbosity_warning() nlp.logging.set_verbosity_debug() nlp.logging.set_verbosity_error() nlp.logging.get_verbosity() -> int ``` And use the levels: ``` nlp.logging.CRITICAL nlp.logging.DEBUG nlp.logging.ERROR nlp.logging.FATAL nlp.logging.INFO nlp.logging.NOTSET nlp.logging.WARN nlp.logging.WARNING ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/538/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/538/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/538.diff", "html_url": "https://github.com/huggingface/datasets/pull/538", "merged_at": "2020-08-31T11:42:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/538.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/538" }
true
[]
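A quick usage sketch of the `nlp.logging` API listed above, using only the functions and levels named in the PR description:

```python
import nlp

# Raising the level above INFO also disables the tqdm bars.
nlp.logging.set_verbosity_warning()
assert nlp.logging.get_verbosity() == nlp.logging.WARNING

# Switch back to the most verbose output.
nlp.logging.set_verbosity(nlp.logging.DEBUG)
```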
https://api.github.com/repos/huggingface/datasets/issues/5762
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5762/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5762/comments
https://api.github.com/repos/huggingface/datasets/issues/5762/events
https://github.com/huggingface/datasets/issues/5762
1,670,326,470
I_kwDODunzps5jjyjG
5,762
Not able to load the pile
[]
closed
false
null
1
2023-04-17T03:09:10Z
2023-04-17T09:37:27Z
2023-04-17T09:37:27Z
null
### Describe the bug Got this error when trying to load The Pile dataset ``` TypeError: Couldn't cast array of type struct<file: string, id: string> to {'id': Value(dtype='string', id=None)} ``` ### Steps to reproduce the bug Please visit the following sample notebook https://colab.research.google.com/drive/1JHcjawcHL6QHhi5VcqYd07W2QCEj2nWK#scrollTo=ulJP3eJCI-tB ### Expected behavior The Pile should load without errors ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5762/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5762/timeline
null
completed
null
null
false
[ "Thanks for reporting, @surya-narayanan.\r\n\r\nI see you already started a discussion about this on the Community tab of the corresponding dataset: https://huggingface.co/datasets/EleutherAI/the_pile/discussions/10\r\nLet's continue the discussion there!" ]
https://api.github.com/repos/huggingface/datasets/issues/3549
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3549/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3549/comments
https://api.github.com/repos/huggingface/datasets/issues/3549/events
https://github.com/huggingface/datasets/pull/3549
1,096,426,996
PR_kwDODunzps4wqkGt
3,549
Fix sem_eval_2018_task_1 download location
[]
closed
false
null
2
2022-01-07T15:37:52Z
2022-01-27T15:52:03Z
2022-01-27T15:52:03Z
null
This changes the download location of the sem_eval_2018_task_1 files to include the test set labels, as discussed in https://github.com/huggingface/datasets/issues/2745#issuecomment-954588500 with @lhoestq.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3549/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3549/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3549.diff", "html_url": "https://github.com/huggingface/datasets/pull/3549", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3549.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3549" }
true
[ "Hi ! Thanks for pushing this :)\r\n\r\nIt seems that you created this PR from an old version of `datasets` that didn't have the sem_eval_2018_task_1.py file.\r\n\r\nCan you try merging `master` into your branch ? Or re-create your PR from a branch that comes from a more recent version of `datasets` ?\r\n\r\nAnd sorry for the late response !", "Hi! No problem! I made the new branch like you said and opened https://github.com/huggingface/datasets/pull/3643 for it. I will close this one." ]
https://api.github.com/repos/huggingface/datasets/issues/3125
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3125/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3125/comments
https://api.github.com/repos/huggingface/datasets/issues/3125/events
https://github.com/huggingface/datasets/pull/3125
1,032,046,666
PR_kwDODunzps4teNPC
3,125
Add SLR83 to OpenSLR
[]
closed
false
null
0
2021-10-21T04:26:00Z
2021-10-22T20:10:05Z
2021-10-22T08:30:22Z
null
The PR resolves #3119, adding SLR83 (UK and Ireland dialects) to the previously created OpenSLR dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3125/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3125/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3125.diff", "html_url": "https://github.com/huggingface/datasets/pull/3125", "merged_at": "2021-10-22T08:30:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/3125.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3125" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/310/comments
https://api.github.com/repos/huggingface/datasets/issues/310/events
https://github.com/huggingface/datasets/pull/310
644,806,720
MDExOlB1bGxSZXF1ZXN0NDM5MzY1MDg5
310
add wikisql
[]
closed
false
null
1
2020-06-24T18:00:35Z
2020-06-25T12:32:25Z
2020-06-25T12:32:25Z
null
Adding the [WikiSQL](https://github.com/salesforce/WikiSQL) dataset. Interesting things to note: - I have copied the function (`_convert_to_human_readable`) which converts the SQL query to a human-readable (string) format, as this is what most people will want when actually using this dataset for NLP applications. - `conds` was originally a tuple but is converted to a dictionary to support differing types. It would be nice to add the logical_form metrics too at some point.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/310/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/310/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/310.diff", "html_url": "https://github.com/huggingface/datasets/pull/310", "merged_at": "2020-06-25T12:32:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/310.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/310" }
true
[ "That's great work @ghomasHudson !" ]
https://api.github.com/repos/huggingface/datasets/issues/5046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5046/comments
https://api.github.com/repos/huggingface/datasets/issues/5046/events
https://github.com/huggingface/datasets/issues/5046
1,391,372,519
I_kwDODunzps5S7qjn
5,046
Audiofolder creates empty Dataset if files same level as metadata
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" }, { "color": "DF8D62", "default": false, "description": "", "id": 4614514401, "name": "hacktoberfest", "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest" } ]
closed
false
null
5
2022-09-29T19:17:23Z
2022-10-28T13:05:07Z
2022-10-28T13:05:07Z
null
## Describe the bug When audio files are at the same level as the metadata (`metadata.csv` or `metadata.jsonl`), `load_dataset` returns a `DatasetDict` with no rows but the correct columns. https://github.com/huggingface/datasets/blob/1ea4d091b7a4b83a85b2eeb8df65115d39af3766/docs/source/audio_dataset.mdx?plain=1#L88 ## Steps to reproduce the bug `metadata.csv`: ```csv file_name,duration,transcription ./2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav,10.768,hello ``` ```python >>> audio_dataset = load_dataset(\"audiofolder\", data_dir=\"/audio-data/\") >>> audio_dataset DatasetDict({ train: Dataset({ features: ['audio', 'duration', 'transcription'], num_rows: 0 }) validation: Dataset({ features: ['audio', 'duration', 'transcription'], num_rows: 0 }) }) ``` I've tried the following, with no success: - setting `split` to something else so I don't get a `DatasetDict`, - removing the `./`, - using `.jsonl`. ## Expected results ``` Dataset({ features: ['audio', 'duration', 'transcription'], num_rows: 1 }) ``` ## Actual results ``` DatasetDict({ train: Dataset({ features: ['audio', 'duration', 'transcription'], num_rows: 0 }) validation: Dataset({ features: ['audio', 'duration', 'transcription'], num_rows: 0 }) }) ``` ## Environment info - `datasets` version: 2.5.1 - Platform: Linux-5.13.0-1025-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 9.0.0 - Pandas version: 1.5.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5046/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5046/timeline
null
completed
null
null
false
[ "Hi! Unfortunately, I can't reproduce this behavior. Instead, I get `ValueError: audio at 2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav doesn't have metadata in /audio-data/metadata.csv`, which can be fixed by removing the `./` from the file name.\r\n\r\n(Link to a Colab that tries to reproduce this behavior: https://colab.research.google.com/drive/1IhQzULYi0Van1xLrN_SddBX1JF7mLZZK?usp=sharing)", "I think we can make the file name matching part more robust by replacing `file_name` with `os.path.normpath(file_name)`, to ignore \"./\" among other things, in these two places:\r\n* https://github.com/huggingface/datasets/blob/85cd129bde605cd9acacdff0d065fc02e39e09b1/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L319\r\n* https://github.com/huggingface/datasets/blob/85cd129bde605cd9acacdff0d065fc02e39e09b1/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L388", "@mariosasko Some tests failed (see my PR). Any thoughts on that?", "Yes, I mentioned the solution in my review.", "I realized what I was doing wrong.\r\n\r\nThe documentation puts the files in a subfolder.\r\nOnce I have done that, it worked.\r\n\r\nBut l agree that this should be handled better if possible." ]
https://api.github.com/repos/huggingface/datasets/issues/3690
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3690/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3690/comments
https://api.github.com/repos/huggingface/datasets/issues/3690/events
https://github.com/huggingface/datasets/pull/3690
1,127,493,538
PR_kwDODunzps4yP2p5
3,690
Update docs to new frontend/UI
[]
closed
false
null
17
2022-02-08T16:38:09Z
2022-03-03T20:04:21Z
2022-03-03T20:04:20Z
null
### TLDR: Update `datasets` `docs` to the new syntax (markdown and mdx files) & frontend (as it looks on [hf.co/transformers](https://huggingface.co/docs/transformers/index)) | Light mode | Dark mode | |-----------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------| | <img width="400" alt="Screenshot 2022-02-17 at 14 15 34" src="https://user-images.githubusercontent.com/11827707/154489358-e2fb3708-8d72-4fb6-93f0-51d4880321c0.png"> | <img width="400" alt="Screenshot 2022-02-17 at 14 16 27" src="https://user-images.githubusercontent.com/11827707/154489596-c5a1311b-181c-4341-adb3-d60a7d3abe85.png"> | ## Checklist - [x] update datasets docs to new syntax (should call `doc-builder convert`) (this PR) - [x] discuss `@property` methods frontend https://github.com/huggingface/doc-builder/pull/87 - [x] discuss `inject_arrow_table_documentation` (this PR) https://github.com/huggingface/datasets/pull/3690#discussion_r801847860 - [x] update datasets docs path on moon-landing https://github.com/huggingface/moon-landing/pull/2089 - [x] convert pyarrow docstring from Numpydoc style to groups style https://github.com/huggingface/doc-builder/pull/89 (https://stackoverflow.com/a/24385103/6558628) - [x] handle `Raises` section on frontend and doc-builder https://github.com/huggingface/doc-builder/pull/86 - [x] check imgs path (this PR) (nothing to update here) - [x] doc examples block has to follow the format `Examples::` https://github.com/huggingface/datasets/pull/3693 - [x] fix [this docstring](https://github.com/huggingface/datasets/blob/6ed6ac9448311930557810383d2cfd4fe6aae269/src/datasets/arrow_dataset.py#L3339) (causing svelte compilation error) - [x] Delete sphinx related files - [x] Delete sphinx CI - [x] Update docs config in setup.py - [x] add `versions.yml` in doc-build https://github.com/huggingface/doc-build/pull/1 - [x] add `versions.yml` in doc-build-dev https://github.com/huggingface/doc-build-dev/pull/1 - [x] https://github.com/huggingface/moon-landing/pull/2089 - [x] format docstrings, for example the `datasets.DatasetBuilder.download_and_prepare` args format looks wrong - [x] create new github actions. (can probably be in a separate PR) (see the transformers equivalents below) 1. [build_dev_documentation.yml](https://github.com/huggingface/transformers/blob/master/.github/workflows/build_dev_documentation.yml) 2. [build_documentation.yml](https://github.com/huggingface/transformers/blob/master/.github/workflows/build_documentation.yml) 3. [delete_dev_documentation.yml](https://github.com/huggingface/transformers/blob/master/.github/workflows/delete_dev_documentation.yml) ## Note to reviewers The number of changed files is a lot (100+) because I've converted all `.rst` files to `.mdx` files & they are compiling fine on the svelte side (also, moved all the imgs to the [doc-imgs repo](https://huggingface.co/datasets/huggingface/documentation-images/tree/main/datasets)). Moreover, you should just review them on preprod and see if the rendering looks fine. _Therefore, I'd suggest focusing on the changed_ **`.py`** and **CI files** (github workflows, etc. you can use [this filter here](https://github.com/huggingface/datasets/pull/3690/files?file-filters%5B%5D=.py&file-filters%5B%5D=.yml&show-deleted-files=true&show-viewed-files=true)) during the review & ignore `.mdx` files. (if there's a bug in `.mdx` files, we can always handle it in a separate PR afterwards).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 4, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/3690/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3690/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3690.diff", "html_url": "https://github.com/huggingface/datasets/pull/3690", "merged_at": "2022-03-03T20:04:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/3690.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3690" }
true
[ "We can have the docstrings of the properties that are missing docstrings (from discussion [here](https://github.com/huggingface/doc-builder/pull/96)) here by using your new `inject_arrow_table_documentation` onthem as well ?", "@sgugger & @lhoestq could you help me with what should the `docs` section in setup.py be changed to [here](https://github.com/huggingface/datasets/blob/master/setup.py#L212-L227) ?\r\n\r\nas a reference, here is a transformers setup.py docs [section](https://github.com/huggingface/transformers/blob/master/setup.py#L304-L308)", "For now, you can put an empty list. Once the `doc-builder` is in a PyPi package (with the bug we fixed on Datasets but still waiting on the standing PR with the code switch) we can put it there.", "None of those dependencies are needed from this list?\r\n\r\n```py\r\n \"docs\": [\r\n \"docutils==0.16.0\",\r\n \"recommonmark\",\r\n \"sphinx==3.1.2\",\r\n \"sphinx-markdown-tables\",\r\n \"sphinx-rtd-theme==0.4.3\",\r\n \"sphinxext-opengraph==0.4.1\",\r\n \"sphinx-copybutton\",\r\n \"fsspec<2021.9.0\",\r\n \"s3fs\",\r\n \"sphinx-panels\",\r\n \"sphinx-inline-tabs\",\r\n \"myst-parser\",\r\n \"Markdown!=3.3.5\",\r\n ],\r\n```", "No, that was all for sphinx. The only thing needed to build the doc is a pip install of `doc-builder` (only from git right now).", "@lhoestq feel free to request reviews from other maintainers 😊", "Thanks ! @mariosasko and @albertvillanova feel free to take a look :)\r\nI can do a thorough review this afternoon", "Cool thanks ! Feel free to merge master into this branch and run `make style` to fix the python code formatting", "Love the colorful vibes here!\r\n![Screen Shot 2022-02-22 at 9 54 17 AM](https://user-images.githubusercontent.com/59462357/155193444-45e639dc-79cd-463c-98ad-1d44a6d6d385.png) ", "I just fixed the conflicts with the `master` branch :)\r\n\r\nCould you update preprod please ? Or is there a preview somewhere I can check to make sure everything is ok ?", "> Could you update preprod please ? Or is there a preview somewhere I can check to make sure everything is ok ?\r\n\r\nI'll let you know once preprod gets updated", "@lhoestq @stevhliu updated [preprod](https://moon-preprod.huggingface.co/docs/datasets/index) with the latest; please let e know if you see any errors", "One more tiny error that doesn't seem specific to Datasets (Transformers example [here](https://huggingface.co/docs/transformers/multilingual#xlm-language-embeddings)), but apostrophes and symbols aren't properly displayed in the right navbar:\r\n\r\n![Screen Shot 2022-03-02 at 8 39 10 AM](https://user-images.githubusercontent.com/59462357/156406988-27e79533-b02a-4fc2-af32-8ad84657488f.png)", "In the latest commit https://github.com/huggingface/datasets/pull/3690/commits/20bddf28b22798c309e6eb1198a716f055889e1b, I tried to reflect changes from https://github.com/huggingface/transformers/pull/15903 , however, the gh workflow is not being triggered. 
@lhoestq do you know why it might be the case?\r\n\r\neven though we have \r\nhttps://github.com/huggingface/datasets/blob/20bddf28b22798c309e6eb1198a716f055889e1b/.github/workflows/build_dev_documentation.yml#L3-L7", "I removed this line to trigger the job\r\n```\r\n pull_request:\r\n```\r\n\r\nbut got this error\r\n```\r\n[Error: .github#L1](https://github.com/huggingface/datasets/commit/033fe623c556b9dbc964708b672ff9bb4896c906#annotation_2897984435)\r\na step cannot have both the `uses` and `run` keys\r\n```", "It seems to be running again, and I re-added the line I removed.\r\n\r\nNow the error is\r\n```\r\n> Run cd doc-build-dev && ...\r\nREADME.md\r\ndatasets\r\ntransformers\r\nOn branch main\r\nYour branch is up to date with 'origin/main'.\r\n\r\nnothing to commit, working tree clean\r\nError: Process completed with exit code 1.\r\n```", "@lhoestq if the CI passes, I'm gonna merge this PR\r\nplease let me know if that sounds good" ]
https://api.github.com/repos/huggingface/datasets/issues/3936
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3936/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3936/comments
https://api.github.com/repos/huggingface/datasets/issues/3936/events
https://github.com/huggingface/datasets/pull/3936
1,170,713,473
PR_kwDODunzps40hE-P
3,936
Fix Wikipedia version and re-add tests
[]
closed
false
null
1
2022-03-16T08:48:04Z
2022-03-16T17:04:07Z
2022-03-16T17:04:05Z
null
To keep backward compatibility when loading with the "wikipedia" dataset ID (https://huggingface.co/datasets/wikipedia), we have created the pre-processed data for the same languages we were offering before, but with the updated dump date "20220301": - de - en - fr - frr - it - simple This pre-processed data can be accessed, e.g.: ```python ds = load_dataset(\"wikipedia\", \"20220301.frr\", split=\"train\") ``` The next step will be to offer the pre-processed data for many other languages, loaded with "wikimedia/wikipedia" instead: https://huggingface.co/datasets/wikimedia/wikipedia
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3936/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3936/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3936.diff", "html_url": "https://github.com/huggingface/datasets/pull/3936", "merged_at": "2022-03-16T17:04:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/3936.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3936" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3936). All of your documentation changes will be reflected on that endpoint." ]
https://api.github.com/repos/huggingface/datasets/issues/2659
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2659/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2659/comments
https://api.github.com/repos/huggingface/datasets/issues/2659/events
https://github.com/huggingface/datasets/pull/2659
946,155,407
MDExOlB1bGxSZXF1ZXN0NjkxMzcwNzU3
2,659
Allow dataset config kwargs to be None
[]
closed
false
null
0
2021-07-16T10:25:38Z
2021-07-16T12:46:07Z
2021-07-16T12:46:07Z
null
Close https://github.com/huggingface/datasets/issues/2658 The dataset config kwargs that were set to None were simply ignored. This was an issue because None has a meaning for certain parameters of certain builders, like the `sep` parameter of the \"csv\" builder, which allows the separator to be inferred. cc @SBrandeis
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2659/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2659/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2659.diff", "html_url": "https://github.com/huggingface/datasets/pull/2659", "merged_at": "2021-07-16T12:46:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/2659.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2659" }
true
[]
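A minimal usage sketch of what this change enables (the data file path is a placeholder):

```python
from datasets import load_dataset

# Before the fix, sep=None was silently dropped from the config kwargs;
# after it, None is forwarded so the csv builder can infer the separator.
ds = load_dataset("csv", data_files="my_file.csv", sep=None)
```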
https://api.github.com/repos/huggingface/datasets/issues/1992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1992/comments
https://api.github.com/repos/huggingface/datasets/issues/1992/events
https://github.com/huggingface/datasets/issues/1992
822,672,238
MDU6SXNzdWU4MjI2NzIyMzg=
1,992
`datasets.map` multi processing much slower than single processing
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
13
2021-03-05T02:10:02Z
2023-06-08T12:31:55Z
null
null
Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G. My data preparation step is roughly two steps: `load_dataset`, which splits corpora into a table of sentences, and `map`, which converts each sentence into a list of integers using a tokenizer. I noticed that the `map` function with `num_proc=mp.cpu_count() // 2` takes more than 20 hours to finish the job, whereas `num_proc=1` gets the job done in about 5 hours. The machine I used has 40 cores, with 126G of RAM. There were no other jobs when the `map` function was running. What could be the reason? I would be happy to provide any information necessary to spot the reason. p.s. I was experiencing the imbalance issue mentioned [here](https://github.com/huggingface/datasets/issues/610#issuecomment-705177036) when using multiprocessing. p.s.2 When I run `map` with `num_proc=1`, I see one tqdm bar but all the cores are working. When `num_proc=20`, only 20 cores work. ![Screen Shot 2021-03-05 at 11 04 59](https://user-images.githubusercontent.com/29157715/110056895-ef6cf000-7da2-11eb-8307-6698e9fb1ad4.png)
{ "+1": 4, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/1992/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1992/timeline
null
null
null
null
false
[ "Hi @hwijeen, you might want to look at issues #1796 and #1949. I think it could be something related to the I/O operations being performed.", "I see that many people are experiencing the same issue. Is this problem considered an \"official\" bug that is worth a closer look? @lhoestq", "Yes this looks like a bug. On my side I haven't managed to reproduce it but @theo-m has. We'll investigate this !", "Thank you for the reply! I would be happy to follow the discussions related to the issue.\r\nIf you do not mind, could you also give a little more explanation on my p.s.2? I am having a hard time figuring out why the single processing `map` uses all of my cores.\r\n@lhoestq @theo-m ", "Regarding your ps2: It depends what function you pass to `map`.\r\nFor example, fast tokenizers from `transformers` in Rust tokenize texts and parallelize the tokenization over all the cores.", "I am still experiencing this issue with datasets 1.9.0..\r\nHas there been a further investigation? \r\n<img width=\"442\" alt=\"image\" src=\"https://user-images.githubusercontent.com/29157715/126143387-8b5ddca2-a896-4e18-abf7-4fbf62a48b41.png\">\r\n", "Hi. Is there any update on this issue? I am desperately trying to decrease my times, and multiprocessing \"should\" be the solution, but it literally takes 5 times longer.", "Which version of `datasets` are you using ?", "Hi,\r\n\r\nI’m running into the same issue and trying to come up with a simple benchmark. \r\n\r\n# environment info\r\nI have a total of 80 CPUs.\r\n\r\n- `datasets` version: 2.4.0\r\n- Platform: Linux-4.18.0-305.65.1.el8_4.x86_64-x86_64-with-glibc2.28\r\n- Python version: 3.10.4\r\n- PyArrow version: 8.0.0\r\n- Pandas version: 1.4.3\r\n\r\n# How to reproduce\r\n\r\n```py\r\nIn [1]: from datasets import Dataset, set_caching_enabled \r\nIn [2]: import numpy as np \r\nIn [3]: set_caching_enabled(False) \r\nIn [4]: d = Dataset.from_dict({'foo': np.random.randn(1000,256)}) \r\nIn [9]: d.set_format('np')\r\nIn [14]: def sort(array): \r\n ...: np.sort(array) \r\n# multiprocessing disabled\r\nIn [19]: %%timeit \r\n ...: d.map(sort, input_columns='foo') \r\n78.8 ms ± 1.22 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) \r\n# multiprocessing enabled \r\nIn [27]: %%timeit \r\n ...: d.map(sort, input_columns='foo',num_proc=10) \r\n858 ms ± 45.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) \r\n```", "Spawning multiple processes has an overhead. For small datasets the processing is likely to be faster than spawning the processes and passing the data to them.\r\n\r\nEspecially since your dataset is in memory: the data has to be copied to the subprocesses.\r\nOn the other hand, datasets loaded from disk are much faster to reload from a subprocess thanks to memory mapping.", "Thanks for the clarifications! \r\n\r\nIndeed, when saving then loading the above dataset to disk, and increasing the number of rows to 10K or 100K, the performance gap narrows.\r\n\r\n```py\r\n# with 10000 rows\r\nIn [3]: %%timeit\r\n ...: d.map(sort, input_columns='foo')\r\n578 ms ± 5.89 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\r\nIn [4]: %%timeit \r\n ...: d.map(sort, input_columns='foo',num_proc=10) \r\n1.06 s ± 47.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\r\n\r\n# with 100000 rows\r\nIn [6]: %%timeit\r\n ...: d.map(sort, input_columns='foo')\r\n5.8 s ± 25.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\r\nIn [7]: %%timeit\r\n ...: d.map(sort, input_columns='foo',num_proc=10)\r\n7.23 s ± 154 ms per loop (mean ± std. dev. 
of 7 runs, 1 loop each\r\n```", "any updates on this issue? \r\nI'm using `datasets=2.12.0`. Adding `num_proc` to the mapping function makes it at least 5x slower than using a single process.", "What kind of function are you passing to `map` ? How many CPUs do you have and what did you set for `num_proc` ?" ]
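Following the explanation in this thread, a rough timing sketch comparing single-process and multi-process `map` on an on-disk (memory-mapped) dataset; absolute numbers vary widely by machine, and the no-op function is only there to isolate the process-spawning overhead:

```python
import time
import numpy as np
from datasets import Dataset, load_from_disk

# Save to disk first: memory-mapped datasets are cheap to reopen in worker
# processes, while in-memory tables must be copied to each subprocess.
Dataset.from_dict({"foo": np.random.randn(100_000, 256)}).save_to_disk("bench_ds")
d = load_from_disk("bench_ds")

for num_proc in (None, 10):
    start = time.perf_counter()
    d.map(lambda batch: batch, batched=True, num_proc=num_proc)
    print(f"num_proc={num_proc}: {time.perf_counter() - start:.2f}s")
```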
https://api.github.com/repos/huggingface/datasets/issues/1330
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1330/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1330/comments
https://api.github.com/repos/huggingface/datasets/issues/1330/events
https://github.com/huggingface/datasets/pull/1330
759,657,324
MDExOlB1bGxSZXF1ZXN0NTM0NjI0MzMx
1,330
added un_ga dataset
[]
closed
false
null
2
2020-12-08T17:58:38Z
2020-12-14T17:52:34Z
2020-12-14T17:52:34Z
null
Hi :hugs:, This is a PR for [United nations general assembly resolutions: A six-language parallel corpus](http://opus.nlpl.eu/UN.php) dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1330/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1330/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1330.diff", "html_url": "https://github.com/huggingface/datasets/pull/1330", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1330.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1330" }
true
[ "Looks like this PR includes changes about many other files than the ones for un_ga\r\n\r\nCan you create another branch an another PR please ?", "@lhoestq, Thank you for suggestions. I have made the changes and raised the new PR https://github.com/huggingface/datasets/pull/1569. " ]
https://api.github.com/repos/huggingface/datasets/issues/1786
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1786/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1786/comments
https://api.github.com/repos/huggingface/datasets/issues/1786/events
https://github.com/huggingface/datasets/issues/1786
795,462,816
MDU6SXNzdWU3OTU0NjI4MTY=
1,786
How to use split dataset
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
closed
false
null
2
2021-01-27T21:37:47Z
2021-04-23T15:17:39Z
2021-04-23T15:17:39Z
null
![Capture1](https://user-images.githubusercontent.com/78090287/106057436-cb6a1f00-6111-11eb-8c9c-3658065b1fdf.PNG) Hey, I want to split the lambada dataset into corpus, test, train and valid txt files (like Penn Treebank) but I am not able to achieve this. What I am doing is executing the lambada.py file in my project, but it's not giving the desired results. Any help will be appreciated!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1786/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1786/timeline
null
completed
null
null
false
[ "By default, all 3 splits will be loaded if you run the following:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"lambada\")\r\nprint(dataset[\"train\"])\r\nprint(dataset[\"valid\"])\r\n\r\n```\r\n\r\nIf you wanted to do load this manually, you could do this:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndata_files = {\r\n \"train\": \"data/lambada/train.txt\",\r\n \"valid\": \"data/lambada/valid.txt\",\r\n \"test\": \"data/lambada/test.txt\",\r\n}\r\nds = load_dataset(\"text\", data_files=data_files)\r\n```", "Thank you for the quick response! " ]
https://api.github.com/repos/huggingface/datasets/issues/5313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5313/comments
https://api.github.com/repos/huggingface/datasets/issues/5313/events
https://github.com/huggingface/datasets/pull/5313
1,468,484,136
PR_kwDODunzps5D6Qfb
5,313
Fix description of streaming in the docs
[]
closed
false
null
1
2022-11-29T18:00:28Z
2022-12-01T14:55:30Z
2022-12-01T14:00:34Z
null
We say that "the data is being downloaded progressively" which is not true, it's just streamed, so I fixed it. Probably I missed some other places where it is written? Also changed docstrings for `StreamingDownloadManager`'s `download` and `extract` to reflect the same, as these docstrings are displayed in the documentation cc @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5313/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5313/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5313.diff", "html_url": "https://github.com/huggingface/datasets/pull/5313", "merged_at": "2022-12-01T14:00:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/5313.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5313" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/1814
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1814/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1814/comments
https://api.github.com/repos/huggingface/datasets/issues/1814/events
https://github.com/huggingface/datasets/pull/1814
800,516,236
MDExOlB1bGxSZXF1ZXN0NTY2OTg4NTI1
1,814
Add Freebase QA Dataset
[]
closed
false
null
1
2021-02-03T16:57:49Z
2021-02-04T19:47:51Z
2021-02-04T16:21:48Z
null
Closes PR #1435. Fixed issues with PR #1809. Requesting @lhoestq to review.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1814/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1814/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1814.diff", "html_url": "https://github.com/huggingface/datasets/pull/1814", "merged_at": "2021-02-04T16:21:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/1814.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1814" }
true
[ "Hi @lhoestq \r\n\r\nThanks for approving. Request you to close PR #1435 as well." ]
https://api.github.com/repos/huggingface/datasets/issues/4587
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4587/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4587/comments
https://api.github.com/repos/huggingface/datasets/issues/4587/events
https://github.com/huggingface/datasets/pull/4587
1,287,291,494
PR_kwDODunzps46flzR
4,587
Validate new_fingerprint passed by user
[]
closed
false
null
1
2022-06-28T12:46:21Z
2022-06-28T14:11:57Z
2022-06-28T14:00:44Z
null
Users can pass the dataset fingerprint they want in `map` and other dataset transforms. However, the fingerprint is used to name cache files, so we need to make sure it doesn't contain bad characters, as mentioned in https://github.com/huggingface/datasets/issues/1718, and that it's not too long.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4587/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4587/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4587.diff", "html_url": "https://github.com/huggingface/datasets/pull/4587", "merged_at": "2022-06-28T14:00:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/4587.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4587" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/2183
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2183/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2183/comments
https://api.github.com/repos/huggingface/datasets/issues/2183/events
https://github.com/huggingface/datasets/pull/2183
852,518,411
MDExOlB1bGxSZXF1ZXN0NjEwNzU3MjUz
2,183
Fix s3fs tests for py36 and py37+
[]
closed
false
null
0
2021-04-07T15:17:11Z
2021-04-08T08:54:45Z
2021-04-08T08:54:44Z
null
Recently several changes happened: 1. latest versions of `fsspec` require python>=3.7 for async features 2. `s3fs` added a dependency on `aiobotocore`, which is not compatible with the `moto` s3 mock context manager This PR fixes both issues, by pinning `fsspec` and `s3fs` for python 3.6, and by using `moto` in server mode to support running the tests on python>=3.7 with the latest version of `fsspec` and `s3fs`. cc @philschmid
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2183/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2183/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2183.diff", "html_url": "https://github.com/huggingface/datasets/pull/2183", "merged_at": "2021-04-08T08:54:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/2183.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2183" }
true
[]
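A sketch of the server-mode approach the PR describes; `ThreadedMotoServer` ships with recent `moto` releases (the original CI may have launched the server differently), and the port and credentials are arbitrary:

```python
import s3fs
from moto.server import ThreadedMotoServer

# Run moto as a real HTTP server so aiobotocore-based clients like s3fs
# can talk to it; the in-process mock cannot intercept their async HTTP calls.
server = ThreadedMotoServer(port=5555)
server.start()

fs = s3fs.S3FileSystem(
    key="testing",
    secret="testing",
    client_kwargs={"endpoint_url": "http://127.0.0.1:5555"},
)
fs.mkdir("test-bucket")  # a top-level mkdir creates a bucket
print(fs.ls(""))         # ['test-bucket']

server.stop()
```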
https://api.github.com/repos/huggingface/datasets/issues/3966
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3966/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3966/comments
https://api.github.com/repos/huggingface/datasets/issues/3966/events
https://github.com/huggingface/datasets/pull/3966
1,173,883,084
PR_kwDODunzps40rBNE
3,966
Create metric card for BERTScore
[]
closed
false
null
1
2022-03-18T18:21:56Z
2022-03-22T13:35:28Z
2022-03-22T13:30:56Z
null
Proposing a metric card for BERTScore
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3966/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3966/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3966.diff", "html_url": "https://github.com/huggingface/datasets/pull/3966", "merged_at": "2022-03-22T13:30:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/3966.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3966" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/1012
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1012/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1012/comments
https://api.github.com/repos/huggingface/datasets/issues/1012/events
https://github.com/huggingface/datasets/pull/1012
755,485,658
MDExOlB1bGxSZXF1ZXN0NTMxMTg3MTI2
1,012
Adding Evidence Inference Data:
[]
closed
false
null
0
2020-12-02T17:51:35Z
2020-12-03T15:04:46Z
2020-12-03T15:04:46Z
null
http://evidence-inference.ebm-nlp.com/download/ https://arxiv.org/pdf/2005.04177.pdf
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1012/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1012/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1012.diff", "html_url": "https://github.com/huggingface/datasets/pull/1012", "merged_at": "2020-12-03T15:04:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/1012.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1012" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/3220
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3220/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3220/comments
https://api.github.com/repos/huggingface/datasets/issues/3220/events
https://github.com/huggingface/datasets/issues/3220
1,045,549,029
I_kwDODunzps4-Uc_l
3,220
Add documentation about dataset viewer feature
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
0
2021-11-05T08:11:19Z
2021-11-05T08:11:19Z
null
null
Add to the docs more details about the dataset viewer feature in the Hub. CC: @julien-c
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3220/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3220/timeline
null
null
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/3252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3252/comments
https://api.github.com/repos/huggingface/datasets/issues/3252/events
https://github.com/huggingface/datasets/pull/3252
1,051,124,749
PR_kwDODunzps4uagoy
3,252
Fix failing CER metric test in CI after update
[]
closed
false
null
0
2021-11-11T15:57:16Z
2021-11-12T14:06:44Z
2021-11-12T14:06:43Z
null
Fixes the [failing CER metric test](https://app.circleci.com/pipelines/github/huggingface/datasets/8644/workflows/79816553-fa2f-4756-b022-d5937f00bf7b/jobs/53298) in CI by adding support for `jiwer==2.3.0`, which was released yesterday. Also, I verified that all the tests in `metrics/cer/test_cer.py` pass after the change, so the results should be the same irrespective of the `jiwer` version.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3252/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3252/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3252.diff", "html_url": "https://github.com/huggingface/datasets/pull/3252", "merged_at": "2021-11-12T14:06:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/3252.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3252" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5658/comments
https://api.github.com/repos/huggingface/datasets/issues/5658/events
https://github.com/huggingface/datasets/pull/5658
1,634,867,204
PR_kwDODunzps5MmJe0
5,658
docs: Update num_shards docs to mention num_proc on Dataset and DatasetDict
[]
closed
false
null
2
2023-03-22T00:12:18Z
2023-03-24T16:43:34Z
2023-03-24T16:36:21Z
null
Closes #5653 @mariosasko
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5658/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5658/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5658.diff", "html_url": "https://github.com/huggingface/datasets/pull/5658", "merged_at": "2023-03-24T16:36:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/5658.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5658" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007351 / 0.011353 (-0.004002) | 0.005025 / 0.011008 (-0.005983) | 0.095978 / 0.038508 (0.057470) | 0.033486 / 0.023109 (0.010377) | 0.294427 / 0.275898 (0.018529) | 0.325157 / 0.323480 (0.001677) | 0.005671 / 0.007986 (-0.002315) | 0.005284 / 0.004328 (0.000955) | 0.073159 / 0.004250 (0.068909) | 0.045162 / 0.037052 (0.008110) | 0.294004 / 0.258489 (0.035515) | 0.343545 / 0.293841 (0.049704) | 0.036857 / 0.128546 (-0.091689) | 0.012245 / 0.075646 (-0.063401) | 0.332258 / 0.419271 (-0.087014) | 0.051909 / 0.043533 (0.008377) | 0.295701 / 0.255139 (0.040562) | 0.315247 / 0.283200 (0.032048) | 0.102363 / 0.141683 (-0.039320) | 1.441944 / 1.452155 (-0.010211) | 1.527161 / 1.492716 (0.034445) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211769 / 0.018006 (0.193763) | 0.452015 / 0.000490 (0.451525) | 0.004041 / 0.000200 (0.003841) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027396 / 0.037411 (-0.010015) | 0.108318 / 0.014526 (0.093793) | 0.116851 / 0.176557 (-0.059706) | 0.172658 / 0.737135 (-0.564478) | 0.122876 / 0.296338 (-0.173462) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406484 / 0.215209 (0.191275) | 4.053849 / 2.077655 (1.976194) | 
1.842947 / 1.504120 (0.338827) | 1.649473 / 1.541195 (0.108278) | 1.728629 / 1.468490 (0.260139) | 0.699519 / 4.584777 (-3.885258) | 3.730823 / 3.745712 (-0.014889) | 2.139624 / 5.269862 (-3.130237) | 1.487839 / 4.565676 (-3.077837) | 0.086699 / 0.424275 (-0.337576) | 0.012815 / 0.007607 (0.005208) | 0.514014 / 0.226044 (0.287969) | 5.153315 / 2.268929 (2.884387) | 2.324431 / 55.444624 (-53.120193) | 1.971533 / 6.876477 (-4.904944) | 2.074480 / 2.142072 (-0.067592) | 0.842419 / 4.805227 (-3.962808) | 0.169140 / 6.500664 (-6.331524) | 0.065206 / 0.075469 (-0.010263) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180887 / 1.841788 (-0.660901) | 14.627401 / 8.074308 (6.553093) | 14.382699 / 10.191392 (4.191307) | 0.143986 / 0.680424 (-0.536438) | 0.017460 / 0.534201 (-0.516741) | 0.422100 / 0.579283 (-0.157183) | 0.417474 / 0.434364 (-0.016890) | 0.493712 / 0.540337 (-0.046625) | 0.589744 / 1.386936 (-0.797193) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007538 / 0.011353 (-0.003815) | 0.005122 / 0.011008 (-0.005887) | 0.073858 / 0.038508 (0.035350) | 0.034561 / 0.023109 (0.011451) | 0.341250 / 0.275898 (0.065352) | 0.373063 / 0.323480 (0.049583) | 0.005785 / 0.007986 (-0.002200) | 0.005393 / 0.004328 (0.001065) | 0.072354 / 0.004250 (0.068104) | 0.047005 / 0.037052 (0.009953) | 0.341179 / 0.258489 (0.082690) | 0.386299 / 0.293841 (0.092458) | 0.038315 / 0.128546 (-0.090231) | 0.012200 / 0.075646 (-0.063446) | 0.086132 / 0.419271 (-0.333140) | 0.049873 / 0.043533 (0.006340) | 0.337985 / 0.255139 (0.082846) | 0.354806 / 0.283200 (0.071607) | 0.103557 / 0.141683 (-0.038126) | 1.445682 / 1.452155 (-0.006473) | 1.551008 / 1.492716 (0.058291) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235873 / 0.018006 (0.217867) | 0.448445 / 0.000490 (0.447955) | 0.001307 / 0.000200 (0.001108) | 0.000087 / 0.000054 (0.000032) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029809 / 0.037411 (-0.007603) | 0.108833 / 0.014526 (0.094307) | 0.123289 / 0.176557 (-0.053268) | 0.176516 / 0.737135 (-0.560620) | 0.127186 / 0.296338 (-0.169153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422037 / 0.215209 (0.206828) | 4.188073 / 2.077655 (2.110418) | 1.999295 / 1.504120 (0.495175) | 1.809229 / 1.541195 (0.268034) | 1.930798 / 1.468490 (0.462308) | 0.694371 / 4.584777 (-3.890406) | 3.833432 / 3.745712 (0.087719) | 3.235600 / 5.269862 (-2.034262) | 1.867822 / 4.565676 (-2.697854) | 0.085734 / 0.424275 (-0.338541) | 0.012727 / 0.007607 (0.005120) | 0.542261 / 0.226044 (0.316217) | 5.289366 / 2.268929 (3.020437) | 2.469636 / 55.444624 (-52.974988) | 2.139392 / 6.876477 (-4.737084) | 2.193305 / 2.142072 (0.051233) | 0.846747 / 4.805227 (-3.958481) | 0.168965 / 6.500664 (-6.331699) | 0.064463 / 0.075469 (-0.011006) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263818 / 1.841788 (-0.577970) | 15.254642 / 8.074308 (7.180334) | 14.428111 / 10.191392 (4.236719) | 0.164770 / 0.680424 (-0.515654) | 0.017476 / 0.534201 (-0.516725) | 0.420198 / 0.579283 (-0.159085) | 0.443250 / 0.434364 (0.008886) | 0.496904 / 0.540337 (-0.043434) | 0.596541 / 1.386936 (-0.790395) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4db8e33eb9cf6cd4453cdfa246c065e0eedf170c \"CML watermark\")\n" ]
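A small sketch of the behavior the updated docs describe (the output directory is a placeholder):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1000))})

# When num_proc is set and num_shards is not, the dataset is written as
# num_proc shards, one per worker process.
ds.save_to_disk("out_dir", num_proc=4)
```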
https://api.github.com/repos/huggingface/datasets/issues/3324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3324/comments
https://api.github.com/repos/huggingface/datasets/issues/3324/events
https://github.com/huggingface/datasets/issues/3324
1,064,661,212
I_kwDODunzps4_dXDc
3,324
Can't import `datasets` in python 3.10
[]
closed
false
null
0
2021-11-26T16:06:14Z
2021-11-26T16:31:23Z
2021-11-26T16:31:23Z
null
When importing `datasets` I'm getting this error in python 3.10: ```python Traceback (most recent call last): File "<string>", line 1, in <module> File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/__init__.py", line 34, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_dataset.py", line 47, in <module> from .arrow_reader import ArrowReader File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_reader.py", line 33, in <module> from .table import InMemoryTable, MemoryMappedTable, Table, concat_tables File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 334, in <module> class InMemoryTable(TableBlock): File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 361, in InMemoryTable def from_pandas(cls, *args, **kwargs): File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 24, in wrapper out = wraps(arrow_table_method)(method) File "/Users/quentinlhoest/.pyenv/versions/3.10.0/lib/python3.10/functools.py", line 61, in update_wrapper wrapper.__wrapped__ = wrapped AttributeError: readonly attribute ``` This makes the conda build fail. I'm opening a PR to fix this and do a patch release 1.16.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3324/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3324/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/3982
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3982/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3982/comments
https://api.github.com/repos/huggingface/datasets/issues/3982/events
https://github.com/huggingface/datasets/pull/3982
1,175,478,099
PR_kwDODunzps40vrR_
3,982
Exclude Google Drive tests of the CI
[]
closed
false
null
2
2022-03-21T14:34:16Z
2022-03-31T16:38:02Z
2022-03-21T14:51:35Z
null
These tests make the CI spam the Google Drive API, so the CI now gets banned by Google Drive very often. I think we can just skip these tests in the CI for now. In the future we could have a CI job that runs only once a day or once a week for such cases. cc @albertvillanova @mariosasko @severo Close #3415 ![image](https://user-images.githubusercontent.com/42851186/159283608-fdeca1ac-b57f-4fa3-bf09-6fa5361c494f.png)
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/3982/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3982/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3982.diff", "html_url": "https://github.com/huggingface/datasets/pull/3982", "merged_at": "2022-03-21T14:51:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/3982.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3982" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "I was thinking exactly the same: running unit tests that request continuously a third-party API is not a good idea." ]
https://api.github.com/repos/huggingface/datasets/issues/3648
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3648/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3648/comments
https://api.github.com/repos/huggingface/datasets/issues/3648/events
https://github.com/huggingface/datasets/pull/3648
1,117,465,505
PR_kwDODunzps4xvXig
3,648
Fix Windows CI: bump python to 3.7
[]
closed
false
null
0
2022-01-28T14:24:54Z
2022-01-28T14:40:39Z
2022-01-28T14:40:39Z
null
Python>=3.7 is needed to install `tokenizers` 0.11
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3648/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3648/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3648.diff", "html_url": "https://github.com/huggingface/datasets/pull/3648", "merged_at": "2022-01-28T14:40:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/3648.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3648" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1724
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1724/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1724/comments
https://api.github.com/repos/huggingface/datasets/issues/1724/events
https://github.com/huggingface/datasets/issues/1724
784,023,338
MDU6SXNzdWU3ODQwMjMzMzg=
1,724
could not run models on a offline server successfully
[]
closed
false
null
6
2021-01-12T06:08:06Z
2022-10-05T12:39:07Z
2022-10-05T12:39:07Z
null
Hi, I really need your help with this. I am trying to fine-tune a RoBERTa model on a remote server, which strictly bans internet access. I installed all the packages by hand and tried to run run_mlm.py on the server. It works well on Colab, but when I try to run it on this offline server, it shows: ![image](https://user-images.githubusercontent.com/49967236/104276256-25a88600-546a-11eb-9776-8ec695dfa24e.png) Is there anything I can do? Is it possible to download everything to a cache and upload it to the server? Please help me out...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1724/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1724/timeline
null
completed
null
null
false
[ "Transferred to `datasets` based on the stack trace.", "Hi @lkcao !\r\nYour issue is indeed related to `datasets`. In addition to installing the package manually, you will need to download the `text.py` script on your server. You'll find it (under `datasets/datasets/text`: https://github.com/huggingface/datasets/blob/master/datasets/text/text.py.\r\nThen you can change the line 221 of `run_mlm_new.py` into:\r\n```python\r\n datasets = load_dataset('/path/to/text.py', data_files=data_files)\r\n```\r\nWhere `/path/to/text.py` is the path on the server where you saved the `text.py` script.", "We're working on including the local dataset builders (csv, text, json etc.) directly in the `datasets` package so that they can be used offline", "The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :)\r\nYou can now use them offline\r\n```python\r\ndatasets = load_dataset('text', data_files=data_files)\r\n```\r\n\r\nWe'll do a new release soon", "> The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :)\r\n> You can now use them offline\r\n> \r\n> ```python\r\n> datasets = load_dataset('text', data_files=data_files)\r\n> ```\r\n> \r\n> We'll do a new release soon\r\n\r\nso the new version release now?", "Yes it's been available since datasets 1.3.0 !" ]
https://api.github.com/repos/huggingface/datasets/issues/383
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/383/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/383/comments
https://api.github.com/repos/huggingface/datasets/issues/383/events
https://github.com/huggingface/datasets/pull/383
655,291,201
MDExOlB1bGxSZXF1ZXN0NDQ3ODI0OTky
383
Adding the Linguistic Code-switching Evaluation (LinCE) benchmark
[]
closed
false
null
5
2020-07-11T22:35:20Z
2020-07-16T16:19:46Z
2020-07-16T16:19:46Z
null
Hi, First of all, this library is really cool! Thanks for putting all of this together! This PR contains the [Linguistic Code-switching Evaluation (LinCE) benchmark](https://ritual.uh.edu/lince). As described in the official website (FAQ): > 1. Why do we need LinCE? >LinCE brings 10 code-switching datasets together for 4 tasks and 4 language pairs with 5 leaderboards in a single evaluation platform. We examined each dataset and fixed major issues on the partitions (or even define official partitions) with a comprehensive stratification method (see our paper for more details). >Besides, we believe that online benchmarks like LinCE bring steady research progress and allow to compare state-of-the-art models at the pace of the progress in NLP. We expect to benefit greatly the code-switching community with this benchmark. The data comes from social media and here's the summary table of tasks per language pair: | Language Pairs | LID | POS | NER | SA | |----------------------------------------|-----|-----|-----|----| | Spanish-English | ✅ | ✅ | ✅ | ✅ | | Hindi-English | ✅ | ✅ | ✅ | | | Modern Standard Arabic-Egyptian Arabic | ✅ | | ✅ | | | Nepali-English | ✅ | | | | The tasks are as follows: * LID: token-level language identification * POS: part-of-speech tagging * NER: named entity recognition * SA: sentiment analysis With the exception of MSA-EA, the rest of the datasets contain token-level LID labels. ## Usage For Spanish-English LID, we can load the data as follows: ``` import nlp data = nlp.load_dataset('./datasets/lince/lince.py', 'lid_spaeng') for split in data: print(data[split]) ``` Here's the output: ``` Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 21030) Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 3332) Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 8289) ``` Here's the list of shortcut names for every dataset available in LinCE: * `lid_spaeng` * `lid_hineng` * `lid_nepeng` * `lid_msaea` * `pos_spaeng` * `pos_hineng` * `ner_spaeng` * `ner_hineng` * `ner_msaea` * `sa_spaeng` All the numbers match with Table 3 in the LinCE [paper](https://www.aclweb.org/anthology/2020.lrec-1.223.pdf). Also, note that the MSA-EA datasets use the Persian script while the other datasets use the Roman script. 
## Features Here is how the features look in the case of language identification (LID) tasks: | LID Feature | Type | Description | |----------------------|---------------|-------------------------------------------| | `idx` | `int` | Dataset index of current sentence | | `tokens` | `list<str>` | List of tokens (string) of a sentence | | `lid` | `list<str>` | List of LID labels (string) of a sentence | For part-of-speech (POS) tagging: | POS Feature | Type | Description | |----------------------|---------------|-------------------------------------------| | `idx` | `int` | Dataset index of current sentence | | `tokens` | `list<str>` | List of tokens (string) of a sentence | | `lid` | `list<str>` | List of LID labels (string) of a sentence | | `pos` | `list<str>` | List of POS tags (string) of a sentence | For named entity recognition (NER): | NER Feature | Type | Description | |----------------------|---------------|-------------------------------------------| | `idx` | `int` | Dataset index of current sentence | | `tokens` | `list<str>` | List of tokens (string) of a sentence | | `lid` | `list<str>` | List of LID labels (string) of a sentence | | `ner` | `list<str>` | List of NER labels (string) of a sentence | **NOTE**: the MSA-EA NER dataset does not contain the `lid` feature. For sentiment analysis (SA): | SA Feature | Type | Description | |---------------------|-------------|-------------------------------------------| | `idx` | `int` | Dataset index of current sentence | | `tokens` | `list<str>` | List of tokens (string) of a sentence | | `lid` | `list<str>` | List of LID labels (string) of a sentence | | `sa` | `str` | Sentiment label (string) of a sentence |
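A minimal access sketch to complement the usage example above; the script path assumes the repo is checked out locally, exactly as in the PR description.

```python
# Pair each token with its token-level language identification (LID) label.
import nlp

data = nlp.load_dataset('./datasets/lince/lince.py', 'lid_spaeng')
example = data['train'][0]
for token, lid in zip(example['tokens'], example['lid']):
    print(f"{token}\t{lid}")
```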
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/383/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/383/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/383.diff", "html_url": "https://github.com/huggingface/datasets/pull/383", "merged_at": "2020-07-16T16:19:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/383.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/383" }
true
[ "I am checking the details of the CI log for the failed test, but I don't see how the error relates to the code I added; the error is coming from a config builder different than the `LinceConfig`, and it crashes when `self.config.data_files` because is self.config is None. I would appreciate if someone could help me find out where I could have messed things up :)\r\n\r\nAlso, the real and dummy data tests passed before committing and pushing my changes.\r\n\r\nThanks a lot in advance!\r\n\r\n```\r\n=================================== FAILURES ===================================\r\n____________________ AWSDatasetTest.test_load_dataset_text _____________________\r\n\r\nself = <tests.test_dataset_common.AWSDatasetTest testMethod=test_load_dataset_text>\r\ndataset_name = 'text'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs)\r\n\r\ntests/test_dataset_common.py:243: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:137: in check_load_dataset\r\n try_from_hf_gcs=False,\r\n../.local/lib/python3.6/site-packages/nlp/builder.py:432: in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n../.local/lib/python3.6/site-packages/nlp/builder.py:466: in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <nlp.datasets.text.bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b.text.Text object at 0x7efa744ffb70>\r\ndl_manager = <nlp.utils.mock_download_manager.MockDownloadManager object at 0x7efb304c52b0>\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" The `datafiles` kwarg in load_dataset() can be a str, List[str], Dict[str,str], or Dict[str,List[str]].\r\n \r\n If str or List[str], then the dataset returns only the 'train' split.\r\n If dict, then keys should be from the `nlp.Split` enum.\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n # Handle case with only one split\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"files\": files})]\r\n else:\r\n # Handle case with several splits and a dict mapping\r\n splits = []\r\n for split_name in [nlp.Split.TRAIN, nlp.Split.VALIDATION, nlp.Split.TEST]:\r\n> if split_name in self.config.data_files:\r\nE TypeError: argument of type 'NoneType' is not iterable\r\n\r\n../.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.py:24: TypeError\r\n=============================== warnings summary ===============================\r\n... \r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_text\r\n====== 1 failed, 963 passed, 532 skipped, 5 warnings in 166.33s (0:02:46) ======\r\n\r\nExited with code exit status 1\r\n```", "@lhoestq Hi Quentin, I was wondering if you could give some feedback on this error from the `run_dataset_script_tests` script. It seems that's coming from a different config builder than the one I added, so I am not sure why this error would occur. Thanks in advance!", "Awesome! Thank you for all your comments! 
👌 I will update the PR in a bit with all the required changes 🙂 \r\n\r\nLet me just provide a bit of context for my changes:\r\n\r\nI was referring to the GLUE, XTREME and WNUT_17 dataset scripts to build mine (not sure if the new documentation was available last week). This is where I took the naming convention for the citation and description variables. Also, these scripts didn't have the `BUILDER_CONFIG_CLASS = LinceConfig` line so I commented this out thinking I didn't need that; I tried this line in my attempts to make the real and dummy data tests pass but it was not helping. \r\n\r\nThe problem I was facing was that the tests were passing a default `BuilderConfig` (i.e., `self.config.name` property was set to `'default'` and my custom properties were not available). This means, for example, that within the `def _info(...)` method, I was not able to access the specific fields of my `LinceConfig` class (which is why I have now a global variable `_LINCE_CITATIONS`, to detach the individual citations from the corresponding LinceConfig objects, as well as I am constructing manually the feature infos). This default `BuilderConfig` is why I added the `if not isinstance(self.config, LinceConfig): return []` statement. Otherwise, accessing custom properties like `self.config.colnames` was failing the test because such properties did not exist in the default config (i.e., it was not a `LinceConfig`).\r\n\r\nI will update the PR and see if these problems happen in the CI tests.\r\n\r\nThanks again for the follow-up! @lhoestq ", "Ok I see !\r\n\r\nTo give you more details: the line `BUILDER_CONFIG_CLASS = LinceConfig` tells the tests how to instantiate a config for this dataset. Therefore if you have this line you should have all the fields of your config available.\r\n\r\nTo fix the errors you get you'll have to, first, have the `BUILDER_CONFIG_CLASS = LinceConfig` line, and second, add default values for the parameters of your config (or the tests functions will be unable to instantiate it by calling `LinceConfig()`.\r\n\r\nAn example of dataset with a custom config with additional filed like this one is [biomrc](https://github.com/huggingface/nlp/blob/master/datasets/biomrc/biomrc.py).\r\nFeel free to give a look at it if you want.", "Thanks for the reference!\r\n\r\nI just updated the PR with the suggested changes. It seems the CI failed on the same test you said we could ignore, so I guess it's okay :) \r\n\r\nPlease let me know if there is something else I may need to change." ]
https://api.github.com/repos/huggingface/datasets/issues/5324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5324/comments
https://api.github.com/repos/huggingface/datasets/issues/5324/events
https://github.com/huggingface/datasets/issues/5324
1,471,524,512
I_kwDODunzps5Xta6g
5,324
Fix docstrings and types in documentation that appears on the website
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
open
false
null
2
2022-12-01T15:34:53Z
2022-12-13T19:03:55Z
null
null
While I was working on https://github.com/huggingface/datasets/pull/5313 I noticed that we have a mess in how we annotate types and format args and return values in the code, and some of it is displayed in the [Reference section](https://huggingface.co/docs/datasets/package_reference/builder_classes) of the documentation on the website. It would be nice someday, maybe before releasing datasets 3.0.0, to unify it.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5324/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5324/timeline
null
null
null
null
false
[ "I agree we have a mess with docstrings...", "Ok, I believe we've cleaned up most of the old syntax we were using for the user-facing docs! There are still a couple of `:obj:`'s and `:class:` floating around in the docstrings we don't expose that I'll track down :)" ]
https://api.github.com/repos/huggingface/datasets/issues/3376
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3376/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3376/comments
https://api.github.com/repos/huggingface/datasets/issues/3376/events
https://github.com/huggingface/datasets/pull/3376
1,070,522,979
PR_kwDODunzps4vW5sB
3,376
Update clue benchmark
[]
closed
false
null
1
2021-12-03T12:06:01Z
2021-12-08T14:14:42Z
2021-12-08T14:14:41Z
null
Fix #3374
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3376/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3376/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3376.diff", "html_url": "https://github.com/huggingface/datasets/pull/3376", "merged_at": "2021-12-08T14:14:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/3376.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3376" }
true
[ "The CI error is due to missing tags in the CLUE dataset card - merging !" ]
https://api.github.com/repos/huggingface/datasets/issues/4887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4887/comments
https://api.github.com/repos/huggingface/datasets/issues/4887/events
https://github.com/huggingface/datasets/pull/4887
1,349,426,693
PR_kwDODunzps49t_PM
4,887
Add "cc-by-nc-sa-2.0" to list of licenses
[]
closed
false
null
2
2022-08-24T13:11:49Z
2022-08-26T10:31:32Z
2022-08-26T10:29:20Z
null
Datasets side of https://github.com/huggingface/hub-docs/pull/285
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4887/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4887/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4887.diff", "html_url": "https://github.com/huggingface/datasets/pull/4887", "merged_at": "2022-08-26T10:29:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/4887.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4887" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Sorry for the issue @albertvillanova! I think it's now fixed! :heart: " ]
https://api.github.com/repos/huggingface/datasets/issues/976
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/976/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/976/comments
https://api.github.com/repos/huggingface/datasets/issues/976/events
https://github.com/huggingface/datasets/pull/976
754,826,146
MDExOlB1bGxSZXF1ZXN0NTMwNjU1NzM5
976
Arabic pos dialect
[]
closed
false
null
2
2020-12-02T00:21:13Z
2020-12-09T17:30:32Z
2020-12-09T17:30:32Z
null
A README.md and loading script for the Arabic POS Dialect dataset. The README is missing the sections on personal information, biases, and limitations, as it would probably be better for those to be filled by someone who can read the contents of the dataset and is familiar with Arabic NLP.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/976/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/976/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/976.diff", "html_url": "https://github.com/huggingface/datasets/pull/976", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/976.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/976" }
true
[ "looks like this PR includes changes about many other files than the oens for Araboc POS Dialect\r\n\r\nCan you create a another branch and another PR please ?", "Sorry! I'm not sure how I managed to do that. I'll make a new branch." ]
https://api.github.com/repos/huggingface/datasets/issues/5563
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5563/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5563/comments
https://api.github.com/repos/huggingface/datasets/issues/5563/events
https://github.com/huggingface/datasets/pull/5563
1,595,049,025
PR_kwDODunzps5KgtbL
5,563
Release: 2.10.0
[]
closed
false
null
4
2023-02-22T12:48:52Z
2023-02-22T13:05:55Z
2023-02-22T12:56:48Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5563/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5563/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5563.diff", "html_url": "https://github.com/huggingface/datasets/pull/5563", "merged_at": "2023-02-22T12:56:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/5563.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5563" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009437 / 0.011353 (-0.001916) | 0.004999 / 0.011008 (-0.006010) | 0.098839 / 0.038508 (0.060331) | 0.035496 / 0.023109 (0.012386) | 0.300726 / 0.275898 (0.024828) | 0.359793 / 0.323480 (0.036313) | 0.007694 / 0.007986 (-0.000292) | 0.003980 / 0.004328 (-0.000348) | 0.075240 / 0.004250 (0.070989) | 0.041149 / 0.037052 (0.004097) | 0.313185 / 0.258489 (0.054696) | 0.344111 / 0.293841 (0.050270) | 0.037775 / 0.128546 (-0.090772) | 0.011901 / 0.075646 (-0.063745) | 0.332631 / 0.419271 (-0.086641) | 0.047194 / 0.043533 (0.003661) | 0.306902 / 0.255139 (0.051763) | 0.321725 / 0.283200 (0.038525) | 0.101031 / 0.141683 (-0.040652) | 1.458778 / 1.452155 (0.006623) | 1.530196 / 1.492716 (0.037480) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203241 / 0.018006 (0.185235) | 0.447147 / 0.000490 (0.446657) | 0.004159 / 0.000200 (0.003959) | 0.000131 / 0.000054 (0.000076) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025845 / 0.037411 (-0.011566) | 0.106966 / 0.014526 (0.092440) | 0.115876 / 0.176557 (-0.060681) | 0.179052 / 0.737135 (-0.558084) | 0.123012 / 0.296338 (-0.173327) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408766 / 0.215209 (0.193557) | 4.080400 / 2.077655 (2.002745) | 
1.893747 / 1.504120 (0.389627) | 1.709389 / 1.541195 (0.168194) | 1.768071 / 1.468490 (0.299581) | 0.689717 / 4.584777 (-3.895059) | 3.760897 / 3.745712 (0.015185) | 2.017050 / 5.269862 (-3.252811) | 1.333027 / 4.565676 (-3.232650) | 0.083559 / 0.424275 (-0.340716) | 0.011951 / 0.007607 (0.004344) | 0.512313 / 0.226044 (0.286268) | 5.162696 / 2.268929 (2.893767) | 2.418559 / 55.444624 (-53.026065) | 2.110178 / 6.876477 (-4.766299) | 2.113635 / 2.142072 (-0.028437) | 0.835171 / 4.805227 (-3.970056) | 0.164222 / 6.500664 (-6.336442) | 0.061955 / 0.075469 (-0.013515) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198336 / 1.841788 (-0.643452) | 14.531468 / 8.074308 (6.457160) | 13.882133 / 10.191392 (3.690741) | 0.154524 / 0.680424 (-0.525900) | 0.028782 / 0.534201 (-0.505419) | 0.441808 / 0.579283 (-0.137475) | 0.433096 / 0.434364 (-0.001268) | 0.518229 / 0.540337 (-0.022108) | 0.603201 / 1.386936 (-0.783735) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007385 / 0.011353 (-0.003967) | 0.005193 / 0.011008 (-0.005815) | 0.075517 / 0.038508 (0.037009) | 0.033192 / 0.023109 (0.010083) | 0.332299 / 0.275898 (0.056401) | 0.363043 / 0.323480 (0.039563) | 0.006368 / 0.007986 (-0.001617) | 0.004003 / 0.004328 (-0.000326) | 0.073710 / 0.004250 (0.069460) | 0.046916 / 0.037052 (0.009863) | 0.336307 / 0.258489 (0.077818) | 0.384910 / 0.293841 (0.091069) | 0.038132 / 0.128546 (-0.090414) | 0.012283 / 0.075646 (-0.063364) | 0.088036 / 0.419271 (-0.331235) | 0.049699 / 0.043533 (0.006166) | 0.333953 / 0.255139 (0.078814) | 0.352961 / 0.283200 (0.069762) | 0.101905 / 0.141683 (-0.039778) | 1.470480 / 1.452155 (0.018325) | 1.498212 / 1.492716 (0.005496) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275067 / 0.018006 (0.257061) | 0.452589 / 0.000490 (0.452099) | 0.047067 / 0.000200 (0.046867) | 0.000983 / 0.000054 (0.000929) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028649 / 0.037411 (-0.008762) | 0.108385 / 0.014526 (0.093859) | 0.121213 / 0.176557 (-0.055343) | 0.192236 / 0.737135 (-0.544899) | 0.124620 / 0.296338 (-0.171719) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428742 / 0.215209 (0.213533) | 4.264893 / 2.077655 (2.187238) | 2.061650 / 1.504120 (0.557530) | 1.873267 / 1.541195 (0.332072) | 1.961012 / 1.468490 (0.492522) | 0.708904 / 4.584777 (-3.875873) | 3.821289 / 3.745712 (0.075577) | 3.287231 / 5.269862 (-1.982631) | 1.903539 / 4.565676 (-2.662137) | 0.086474 / 0.424275 (-0.337801) | 0.012101 / 0.007607 (0.004494) | 0.531411 / 0.226044 (0.305367) | 5.216785 / 2.268929 (2.947857) | 2.575209 / 55.444624 (-52.869416) | 2.264902 / 6.876477 (-4.611574) | 2.291225 / 2.142072 (0.149153) | 0.853486 / 4.805227 (-3.951741) | 0.168550 / 6.500664 (-6.332114) | 0.064158 / 0.075469 (-0.011311) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295830 / 1.841788 (-0.545958) | 14.419524 / 8.074308 (6.345216) | 13.397985 / 10.191392 (3.206593) | 0.181367 / 0.680424 (-0.499057) | 0.017666 / 0.534201 (-0.516535) | 0.420645 / 0.579283 (-0.158638) | 0.421025 / 0.434364 (-0.013339) | 0.527369 / 0.540337 (-0.012969) | 0.627175 / 1.386936 (-0.759761) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#589b49dfaffa729bc9997a38d4cedafb107ea2e4 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008717 / 0.011353 (-0.002635) | 0.004573 / 0.011008 (-0.006435) | 0.103660 / 0.038508 (0.065151) | 0.035274 / 0.023109 (0.012165) | 0.298563 / 0.275898 (0.022665) | 0.384397 / 0.323480 (0.060917) | 0.006932 / 0.007986 (-0.001053) | 0.003422 / 0.004328 (-0.000907) | 0.080193 / 0.004250 (0.075943) | 0.039767 / 0.037052 (0.002714) | 0.310296 / 0.258489 (0.051807) | 0.351361 / 0.293841 (0.057520) | 0.033532 / 0.128546 (-0.095014) | 0.011543 / 0.075646 (-0.064104) | 0.374816 / 0.419271 (-0.044456) | 0.046046 / 0.043533 (0.002513) | 0.306918 / 0.255139 (0.051779) | 0.382242 / 0.283200 (0.099042) | 0.098945 / 0.141683 (-0.042738) | 1.456929 / 1.452155 (0.004775) | 1.535763 / 1.492716 (0.043046) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011759 / 0.018006 (-0.006247) | 0.405345 / 0.000490 (0.404855) | 0.002667 / 0.000200 (0.002467) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023924 / 0.037411 (-0.013487) | 0.095537 / 0.014526 (0.081011) | 0.106959 / 0.176557 (-0.069598) | 0.170782 / 0.737135 (-0.566353) | 0.109169 / 0.296338 (-0.187170) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437521 / 0.215209 (0.222312) | 4.383556 / 2.077655 (2.305902) | 2.092055 / 1.504120 (0.587935) | 1.889316 / 1.541195 (0.348121) | 1.937436 / 1.468490 (0.468946) | 0.700175 / 4.584777 (-3.884602) | 3.358107 / 3.745712 (-0.387605) | 3.243226 / 5.269862 (-2.026636) | 1.620497 / 4.565676 (-2.945180) | 0.083063 / 0.424275 (-0.341212) | 0.012970 / 0.007607 (0.005363) | 0.544226 / 0.226044 (0.318181) | 5.483315 / 2.268929 (3.214386) | 2.555183 / 55.444624 (-52.889441) | 2.204230 / 6.876477 (-4.672247) | 2.230551 / 2.142072 (0.088478) | 0.816121 / 4.805227 (-3.989106) | 0.151356 / 6.500664 (-6.349308) | 0.068564 / 0.075469 (-0.006905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208420 / 1.841788 (-0.633367) | 13.652597 / 8.074308 (5.578289) | 14.096318 / 10.191392 (3.904926) | 0.154473 / 0.680424 (-0.525951) | 0.028436 / 0.534201 (-0.505765) | 0.399949 / 0.579283 (-0.179334) | 0.398961 / 0.434364 (-0.035403) | 0.488703 / 0.540337 
(-0.051634) | 0.572640 / 1.386936 (-0.814296) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006373 / 0.011353 (-0.004979) | 0.004368 / 0.011008 (-0.006640) | 0.076410 / 0.038508 (0.037902) | 0.027055 / 0.023109 (0.003945) | 0.336969 / 0.275898 (0.061071) | 0.374533 / 0.323480 (0.051053) | 0.004781 / 0.007986 (-0.003204) | 0.003317 / 0.004328 (-0.001011) | 0.076099 / 0.004250 (0.071849) | 0.038414 / 0.037052 (0.001361) | 0.339578 / 0.258489 (0.081089) | 0.384138 / 0.293841 (0.090297) | 0.031581 / 0.128546 (-0.096965) | 0.011666 / 0.075646 (-0.063981) | 0.085690 / 0.419271 (-0.333582) | 0.042277 / 0.043533 (-0.001256) | 0.337931 / 0.255139 (0.082792) | 0.365827 / 0.283200 (0.082628) | 0.088713 / 0.141683 (-0.052970) | 1.519789 / 1.452155 (0.067635) | 1.583097 / 1.492716 (0.090381) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223472 / 0.018006 (0.205466) | 0.392474 / 0.000490 (0.391984) | 0.002739 / 0.000200 (0.002539) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024373 / 0.037411 (-0.013038) | 0.099822 / 0.014526 (0.085296) | 0.106128 / 0.176557 (-0.070428) | 0.174688 / 0.737135 (-0.562447) | 0.112660 / 0.296338 (-0.183678) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436317 / 0.215209 (0.221108) | 4.358277 / 2.077655 (2.280622) | 2.089746 / 1.504120 (0.585626) | 1.881040 / 1.541195 (0.339845) | 1.923653 
/ 1.468490 (0.455163) | 0.698176 / 4.584777 (-3.886601) | 3.346460 / 3.745712 (-0.399252) | 3.301429 / 5.269862 (-1.968433) | 1.391042 / 4.565676 (-3.174634) | 0.083025 / 0.424275 (-0.341250) | 0.012459 / 0.007607 (0.004851) | 0.533011 / 0.226044 (0.306967) | 5.334984 / 2.268929 (3.066056) | 2.534105 / 55.444624 (-52.910520) | 2.206295 / 6.876477 (-4.670181) | 2.231752 / 2.142072 (0.089680) | 0.798650 / 4.805227 (-4.006577) | 0.150070 / 6.500664 (-6.350594) | 0.066898 / 0.075469 (-0.008571) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.310527 / 1.841788 (-0.531261) | 13.920492 / 8.074308 (5.846184) | 13.359382 / 10.191392 (3.167990) | 0.154561 / 0.680424 (-0.525863) | 0.016387 / 0.534201 (-0.517814) | 0.379892 / 0.579283 (-0.199391) | 0.376746 / 0.434364 (-0.057618) | 0.462606 / 0.540337 (-0.077732) | 0.550895 / 1.386936 (-0.836041) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cac733fdaef84cfee92856bd259ce024ec157c91 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009373 / 0.011353 (-0.001980) | 0.005212 / 0.011008 (-0.005797) | 0.099287 / 0.038508 (0.060779) | 0.035175 / 0.023109 (0.012066) | 0.307012 / 0.275898 (0.031114) | 0.335105 / 0.323480 (0.011625) | 0.008006 / 0.007986 (0.000020) | 0.004017 / 0.004328 (-0.000311) | 0.075519 / 0.004250 (0.071269) | 0.040276 / 0.037052 (0.003223) | 0.302615 / 0.258489 (0.044126) | 0.361742 / 0.293841 (0.067901) | 0.038773 / 0.128546 (-0.089773) | 0.011892 / 0.075646 (-0.063754) | 0.334199 / 0.419271 (-0.085073) | 0.048035 / 0.043533 (0.004503) | 0.301361 / 0.255139 (0.046222) | 0.321996 / 0.283200 (0.038796) | 0.101818 / 0.141683 (-0.039865) | 1.442601 / 1.452155 (-0.009554) | 1.530669 / 1.492716 (0.037953) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201470 / 0.018006 (0.183464) | 0.496305 / 0.000490 (0.495815) | 0.003794 / 
0.000200 (0.003594) | 0.000149 / 0.000054 (0.000094) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028401 / 0.037411 (-0.009010) | 0.107924 / 0.014526 (0.093398) | 0.121716 / 0.176557 (-0.054840) | 0.187407 / 0.737135 (-0.549728) | 0.124755 / 0.296338 (-0.171583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395667 / 0.215209 (0.180457) | 3.939079 / 2.077655 (1.861424) | 1.776308 / 1.504120 (0.272188) | 1.583487 / 1.541195 (0.042292) | 1.682957 / 1.468490 (0.214467) | 0.677322 / 4.584777 (-3.907455) | 3.796987 / 3.745712 (0.051275) | 3.406199 / 5.269862 (-1.863663) | 1.905467 / 4.565676 (-2.660210) | 0.083189 / 0.424275 (-0.341086) | 0.012156 / 0.007607 (0.004549) | 0.507078 / 0.226044 (0.281033) | 5.031293 / 2.268929 (2.762365) | 2.228403 / 55.444624 (-53.216221) | 1.885760 / 6.876477 (-4.990717) | 1.962340 / 2.142072 (-0.179732) | 0.824979 / 4.805227 (-3.980248) | 0.162107 / 6.500664 (-6.338557) | 0.062324 / 0.075469 (-0.013145) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.205104 / 1.841788 (-0.636683) | 15.368896 / 8.074308 (7.294588) | 14.757540 / 10.191392 (4.566148) | 0.177544 / 0.680424 (-0.502880) | 0.029097 / 0.534201 (-0.505104) | 0.445252 / 0.579283 (-0.134031) | 0.456521 / 0.434364 (0.022157) | 0.544166 / 0.540337 (0.003829) | 0.640675 / 1.386936 (-0.746261) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007438 / 0.011353 (-0.003914) | 0.005236 / 0.011008 (-0.005772) | 0.075379 / 0.038508 (0.036871) | 0.033274 / 0.023109 (0.010165) | 0.344584 / 0.275898 (0.068686) | 0.372161 / 0.323480 (0.048681) | 0.005914 / 0.007986 (-0.002071) | 0.004176 / 0.004328 (-0.000152) | 0.073311 / 0.004250 (0.069061) | 0.050845 / 0.037052 (0.013793) | 0.338978 / 0.258489 (0.080489) | 0.391563 / 0.293841 (0.097722) | 0.037559 / 0.128546 (-0.090987) | 0.012455 / 0.075646 (-0.063192) | 0.086224 / 0.419271 (-0.333047) | 0.052956 / 0.043533 (0.009423) | 0.338529 / 0.255139 (0.083390) | 0.356752 / 0.283200 (0.073553) | 0.105864 / 0.141683 (-0.035819) | 1.467727 / 1.452155 (0.015572) | 1.588727 / 1.492716 (0.096010) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215959 / 0.018006 (0.197953) | 0.440619 / 0.000490 (0.440129) | 0.000397 / 0.000200 (0.000197) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028855 / 0.037411 (-0.008556) | 0.114239 / 0.014526 (0.099713) | 0.121726 / 0.176557 (-0.054830) | 0.190377 / 0.737135 (-0.546759) | 0.127858 / 0.296338 (-0.168480) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415399 / 0.215209 (0.200190) | 4.159012 / 2.077655 (2.081357) | 1.987593 / 1.504120 (0.483474) | 1.794785 / 1.541195 (0.253591) | 1.924819 / 1.468490 (0.456329) | 0.696082 / 4.584777 (-3.888694) | 3.820461 / 3.745712 (0.074749) | 2.139236 / 5.269862 (-3.130626) | 1.348593 / 4.565676 (-3.217084) | 0.086536 / 0.424275 (-0.337739) | 0.012510 / 0.007607 (0.004902) | 0.518804 / 0.226044 (0.292760) | 5.188659 / 2.268929 (2.919730) | 2.501303 / 55.444624 (-52.943322) | 2.138831 / 6.876477 (-4.737646) | 2.220451 / 2.142072 (0.078378) | 0.836277 / 4.805227 (-3.968950) | 0.170940 / 6.500664 (-6.329724) | 0.067326 / 0.075469 (-0.008143) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307848 / 1.841788 (-0.533940) | 15.995785 / 8.074308 (7.921477) | 13.646285 / 10.191392 (3.454893) | 0.181120 / 0.680424 (-0.499304) | 0.017500 / 0.534201 (-0.516701) | 0.426697 / 0.579283 (-0.152586) | 0.436702 / 0.434364 (0.002338) | 0.518060 / 0.540337 (-0.022278) | 0.632577 / 1.386936 (-0.754359) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cac733fdaef84cfee92856bd259ce024ec157c91 \"CML watermark\")\n" ]
https://api.github.com/repos/huggingface/datasets/issues/1726
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1726/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1726/comments
https://api.github.com/repos/huggingface/datasets/issues/1726/events
https://github.com/huggingface/datasets/pull/1726
784,336,370
MDExOlB1bGxSZXF1ZXN0NTUzNTQ0ODg4
1,726
Offline loading
[]
closed
false
null
6
2021-01-12T15:21:57Z
2022-02-15T10:32:10Z
2021-01-19T16:42:32Z
null
As discussed in #824 it would be cool to make the library work in offline mode. Currently, if there's no internet connection, modules (datasets or metrics) that have already been loaded in the past can't be loaded, and a ConnectionError is raised. This is because `prepare_module` fetches online for the latest version of the module. To make it work in offline mode, one suggestion was to reload the latest local version of the module. I implemented that, and I also raise a warning saying that the module that is loaded is the latest local version. ```python logger.warning( f"Using the latest cached version of the module from {cached_module_path} since it " f"couldn't be found locally at {input_path} or remotely ({error_type_that_prevented_reaching_out_remote_stuff})." ) ``` I added tests to make sure it works as expected, and I needed to make a few changes in the code to be able to test things properly. In particular, I added a parameter `hf_modules_cache` to `init_dynamic_modules` for testing purposes. It makes it possible to have temporary module caches for testing. I also added an `offline` context utility that allows testing parts of the code by making all the requests fail as if there were no internet. Close #824, close #761.
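The `offline` context utility mentioned above lives in the library's test helpers; a rough, hypothetical approximation of the idea (not the actual implementation) looks like this:

```python
# Hypothetical sketch: make every outgoing HTTP request fail so the
# cached-module fallback path can be exercised in tests.
from contextlib import contextmanager
from unittest.mock import patch

import requests


def _refuse(*args, **kwargs):
    raise requests.ConnectionError("Offline mode is enabled (simulated).")


@contextmanager
def offline():
    # Patch the low-level entry point used by `requests` so every call raises.
    with patch("requests.Session.request", side_effect=_refuse):
        yield
```

Inside `with offline(): ...`, loading a previously cached module would then be expected to emit the warning above instead of raising.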
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1726/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1726/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1726.diff", "html_url": "https://github.com/huggingface/datasets/pull/1726", "merged_at": "2021-01-19T16:42:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/1726.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1726" }
true
[ "It's maybe a bit annoying to add but could we maybe have as well a version of the local data loading scripts in the package?\r\nThe `text`, `json`, `csv`. Thinking about people like in #1725 who are expecting to be able to work with local data without downloading anything.\r\n\r\nMaybe we can add them to package_data or something?", "Yes I mentioned this in #824 as well. I'm looking into it", "Alright now `csv`, `json`, `text` and `pandas` are \"packaged datasets\", i.e. they're part of the `datasets` package, which makes them available in offline mode without any change in terms of API:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"csv\", data_files=[\"path/to/data.csv\"])\r\n```\r\n\r\nInstead of loading the dataset script from the module cache, it's loaded from inside the `datasets` package.\r\n\r\nI updated the test to still be able to fetch the dummy data files for those datasets from `datasets/{text|csv|pandas|json}/dummy` in the repo.", "Alright now all test pass :)\r\n(I don't thank you windows)", "LGTM! Since you're getting the local script's last modification date anyways do you think it might be a good idea to show it in the warning?", "> LGTM! Since you're getting the local script's last modification date anyways do you think it might be a good idea to show it in the warning?\r\n\r\nYep good idea. I added the date in the warning. For example `(last modified on Mon Nov 30 11:01:56 2020)`" ]
https://api.github.com/repos/huggingface/datasets/issues/301
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/301/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/301/comments
https://api.github.com/repos/huggingface/datasets/issues/301/events
https://github.com/huggingface/datasets/issues/301
643,763,525
MDU6SXNzdWU2NDM3NjM1MjU=
301
Setting cache_dir gives error on wikipedia download
[]
closed
false
null
2
2020-06-23T11:31:44Z
2020-06-24T07:05:07Z
2020-06-24T07:05:07Z
null
First of all thank you for a super handy library! I'd like to download large files to a specific drive so I set `cache_dir=my_path`. This works fine with e.g. imdb and squad. But on wikipedia I get an error: ``` nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=my_path) ``` ``` OSError Traceback (most recent call last) <ipython-input-2-23551344d7bc> in <module> 1 import nlp ----> 2 nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=path) ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 522 download_mode=download_mode, 523 ignore_verifications=ignore_verifications, --> 524 save_infos=save_infos, 525 ) 526 ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 385 with utils.temporary_assignment(self, "_cache_dir", tmp_data_dir): 386 reader = ArrowReader(self._cache_dir, self.info) --> 387 reader.download_from_hf_gcs(self._cache_dir, self._relative_data_dir(with_version=True)) 388 downloaded_info = DatasetInfo.from_directory(self._cache_dir) 389 self.info.update(downloaded_info) ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/arrow_reader.py in download_from_hf_gcs(self, cache_dir, relative_data_dir) 231 remote_dataset_info = os.path.join(remote_cache_dir, "dataset_info.json") 232 downloaded_dataset_info = cached_path(remote_dataset_info) --> 233 os.rename(downloaded_dataset_info, os.path.join(cache_dir, "dataset_info.json")) 234 if self._info is not None: 235 self._info.update(self._info.from_directory(cache_dir)) OSError: [Errno 18] Invalid cross-device link: '/home/local/NTU/nn/.cache/huggingface/datasets/025fa4fd4f04aaafc9e939260fbc8f0bb190ce14c61310c8ae1ddd1dcb31f88c.9637f367b6711a79ca478be55fe6989b8aea4941b7ef7adc67b89ff403020947' -> '/data/nn/nlp/wikipedia/20200501.de/1.0.0.incomplete/dataset_info.json' ```
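The traceback boils down to `os.rename` being called across two filesystems (the default cache on the home partition and the target drive), which Linux rejects with `EXDEV` ("Invalid cross-device link"). Independent of the patch that landed on master, the portable pattern is sketched below (paths are placeholders):

```python
# os.rename() cannot move a file between two different mounts; shutil.move()
# falls back to copy-then-delete, which works across devices.
import os
import shutil

src = "/home/user/.cache/huggingface/some_tmp_file"  # hypothetical paths on
dst = "/data/datasets/dataset_info.json"             # two different mounts

try:
    os.rename(src, dst)    # fast, but only valid within one filesystem
except OSError:
    shutil.move(src, dst)  # copies then deletes, so it works anywhere
```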
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/301/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/301/timeline
null
completed
null
null
false
[ "Whoops didn't mean to close this one.\r\nI did some changes, could you try to run it from the master branch ?", "Now it works, thanks!" ]
https://api.github.com/repos/huggingface/datasets/issues/3356
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3356/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3356/comments
https://api.github.com/repos/huggingface/datasets/issues/3356/events
https://github.com/huggingface/datasets/pull/3356
1,068,503,932
PR_kwDODunzps4vQQLD
3,356
to_tf_dataset() refactor
[]
closed
false
null
5
2021-12-01T14:54:30Z
2021-12-09T10:26:53Z
2021-12-09T10:26:53Z
null
This is the promised cleanup to `to_tf_dataset()` now that the course is out of the way! The main changes are: - A collator is always required (there was way too much hackiness making things like labels work without it) - Lots of cleanup and a lot of code moved to `_get_output_signature` - Should now handle it gracefully when the data collator adds unexpected columns
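A rough usage sketch of the refactored method with the now-required collator; the checkpoint, dataset, and column names are illustrative, not taken from this PR:

```python
# Sketch: after this refactor a collate_fn must always be supplied.
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")

dataset = load_dataset("glue", "sst2", split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["sentence"], truncation=True))

tf_dataset = dataset.to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    label_cols=["label"],
    batch_size=16,
    shuffle=True,
    collate_fn=collator,  # required after this refactor
)
```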
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 3, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/3356/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3356/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3356.diff", "html_url": "https://github.com/huggingface/datasets/pull/3356", "merged_at": "2021-12-09T10:26:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/3356.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3356" }
true
[ "Also, please don't merge yet - I need to make sure all the code samples and notebooks have a collate_fn specified, since we're removing the ability for this method to work without one!", "Hi @lhoestq @mariosasko, the other PRs this was depending on in Transformers and huggingface/notebooks are now merged, so this is ready to go. Do you want to take one more look at it, or are you happy at this point?", "The documentation for the method is fine, it doesn't need to be changed, but the tutorial notebook definitely looks a little out of date. Let me see what I can do!", "@lhoestq I rewrote the last bit of the notebook - let me know what you think!", "Cool thank you ! It's much nicer that what we had :)\r\n\r\nI also spotted other things I'd like to update in the notebook (especially the beginning) but it can be fixed later" ]
https://api.github.com/repos/huggingface/datasets/issues/4124
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4124/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4124/comments
https://api.github.com/repos/huggingface/datasets/issues/4124/events
https://github.com/huggingface/datasets/issues/4124
1,196,469,842
I_kwDODunzps5HUK5S
4,124
Image decoding often fails when transforming Image datasets
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
7
2022-04-07T19:17:25Z
2022-04-13T14:01:16Z
2022-04-13T14:01:16Z
null
## Describe the bug When transforming/modifying images in an image dataset using the `map` function, the PIL images often fail to decode in time for the image transforms, causing errors. Using a debugger, it is easy to see what the problem is: the Image decode invocation does not take place and the resulting image passed around is still raw bytes: ``` [{'bytes': b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00 \x00\x00\x00 \x08\x02\x00\x00\x00\xfc\x18\xed\xa3\x00\x00\x08\x02IDATx\x9cEVIs[\xc7\x11\xeemf\xde\x82\x8d\x80\x08\x89"\xb5V\\\xb6\x94(\xe5\x9f\x90\xca5\x7f$\xa7T\xe5\x9f&9\xd9\x8a\\.\xdb\xa4$J\xa4\x00\x02x\xc0{\xb3t\xe7\x00\xca\x99\xd3\\f\xba\xba\xbf\xa5?|\xfa\xf4\xa2\xeb\xba\xedv\xa3f^\xf8\xd5\x0bY\xb6\x10\xb3\xaaDq\xcd\x83\x87\xdf5\xf3gZ\x1a\x04\x0f\xa0fp\xfa\xe0\xd4\x07?\x9dN\xc4\xb1\x99\xfd\xf2\xcb/\x97\x97\x97H\xa2\xaaf\x16\x82\xaf\xeb\xca{\xbf\xd9l.\xdf\x7f\xfa\xcb_\xff&\x88\x08\x00\x80H\xc0\x80@.;\x0f\x8c@#v\xe3\xe5\xfc\xd1\x9f\xee6q\xbf\xdf\xa6\x14\'\x93\xf1\xc3\xe5\xe3\xd1x\x14c\x8c1\xa5\x1c\x9dsM\xd3\xb4\xed\x08\x89SJ)\xa5\xedv\xbb^\xafNO\x97D\x84Hf .... ``` ## Steps to reproduce the bug ```python from datasets import load_dataset, Dataset import numpy as np # seeded NumPy random number generator for reproducible results. rng = np.random.default_rng(seed=0) test_dataset = load_dataset('cifar100', split="test") def preprocess_data(dataset): """ Helper function to pre-process the HuggingFace CIFAR-100 Dataset to remove the fine_label and coarse_label columns and add an is_flipped column Args: dataset: HuggingFace CIFAR-100 Dataset Object Returns: new_dataset: A Dataset object with "img" and "is_flipped" columns only """ # remove fine_label and coarse_label columns new_dataset = dataset.remove_columns(['fine_label', 'coarse_label']) # add the column for is_flipped new_dataset = new_dataset.add_column(name="is_flipped", column=np.zeros((len(new_dataset)), dtype=np.uint8)) return new_dataset def generate_flipped_data(example, p=0.5): """ A Dataset mapping function that flips some of the images upside-down. If the probability value (p) is 0.5, approximately half the images will be flipped upside-down Args: example: An example from the dataset containing a Python dictionary with "img" and "is_flipped" key-value pairs p: the probability of flipping the image upside-down, Default 0.5 Returns: example: The transformed example """ if rng.random() > p: # flip the image and set the is_flipped column to 1 example['img'] = example['img'].transpose(1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM) example['is_flipped'] = 1 return example my_test = preprocess_data(test_dataset) my_test = my_test.map(generate_flipped_data) ``` ## Expected results The dataset should be transformed without problems. 
## Actual results ``` /home/rafay/anaconda3/envs/pytorch_new/bin/python /home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py Reusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142) Reusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142) 20%|█▉ | 1999/10000 [00:00<00:01, 5560.44ex/s] Traceback (most recent call last): File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2326, in _map_single writer.write(example) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 441, in write self.write_examples_on_file() File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 399, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 230, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 185, in __arrow_array__ out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True)) File "pyarrow/array.pxi", line 316, in pyarrow.lib.array File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=32x32 at 0x7F56AEE61DE0> with type Image: did not recognize Python value type when inferring an Arrow data type During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py", line 55, in <module> my_test = my_test.map(generate_flipped_data) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1953, in map return self._map_single( File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 519, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 486, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/fingerprint.py", line 458, in wrapper out = func(self, *args, **kwargs) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2360, in _map_single writer.finalize() File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 522, in finalize self.write_examples_on_file() File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 399, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write_batch 
arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 230, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 185, in __arrow_array__ out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True)) File "pyarrow/array.pxi", line 316, in pyarrow.lib.array File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=32x32 at 0x7F56AEE61DE0> with type Image: did not recognize Python value type when inferring an Arrow data type Process finished with exit code 1 ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux(Fedora 35) - Python version: 3.10 - PyArrow version: 7.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4124/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4124/timeline
null
completed
null
null
false
[ "A quick hack I have found is that we can call the image first before running the transforms and it makes sure the image is decoded before being passed on.\r\n\r\nFor this I just needed to add `example['img'] = example['img']` to the top of my `generate_flipped_data` function, defined above, so that image decode in invoked.\r\n\r\nAfter this minor change this function works:\r\n```python\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping functions that transforms some of the image up-side-down.\r\n If the probability value (p) is 0.5 approximately half the images will be flipped upside-down\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image up-side-down, Default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n example['img'] = example['img'] # <<< This is the only change\r\n if rng.random() > p: # the flip the image and set is_flipped column to 1\r\n example['img'] = example['img'].transpose(\r\n 1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM)\r\n example['is_flipped'] = 1\r\n\r\n return example\r\n```", "Hi @RafayAK, thanks for reporting.\r\n\r\nCurrent implementation of the Image feature performs the decoding only if the \"img\" field is accessed by the mapped function.\r\n\r\nIn your original `generate_flipped_data` function:\r\n- it only accesses the \"img\" field (and thus performs decoding) if `rng.random() > p`;\r\n- on the other hand, for the cases where `rng.random() <= p`, the \"img\" field is not accessed and thus no decoding is performed for those examples\r\n\r\nBy adding the code line `example['img'] = example['img']`, you make sure the \"img\" field is accessed in all cases, and the decoding is done for all examples.\r\n\r\nAlso note that there is a little bug in your implementation: `p` is not the probability of flipping, but the probability of not-flipping; the larger is `p`, the smaller is the probability of flipping.\r\n\r\nSome refactoring (fixing also `p`):\r\n```python\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping functions that transforms some of the image up-side-down.\r\n If the probability value (p) is 0.5 approximately half the images will be flipped upside-down.\r\n\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image up-side-down, Default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n do_flip = rng.random() < p # Note the \"<\" sign here instead of \">\"\r\n example['img'] = example['img'].transpose(1) if do_flip else example['img'] # Note \"img\" is always accessed\r\n example['is_flipped'] = 1 if do_flip else 0\r\n return example", "@albertvillanova Thanks for letting me know this is intended behavior. The docs are severely lacking on this, if I hadn't posted this here I would have never found out how I'm actually supposed to modify images in a Dataset object.", "@albertvillanova Secondly if you check the error message it shows that around 1999 images were successfully created, I'm pretty sure some of them were also flipped during the process. Back to my main contention, sometimes the decoding takes place other times it fails. \r\n\r\nI suppose to run `map` on any dataset all the examples should be invoked even if on some of them we end up doing nothing, is that right?", "Hi @RafayAK! 
I've opened a PR with the fix, which adds a fallback to reattempt casting to PyArrow format with a more robust (but more expensive) procedure if the first attempt fails. Feel free to test it by installing `datasets` from the PR branch with the following command:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git@fix-4124\r\n```", "@mariosasko I'll try this right away and report back.", "@mariosasko Thanks a lot for looking into this, now the `map` function at least behaves as one would expect a function to behave. \r\n\r\nLooking forward to exploring Hugging Face more and even contributing 😃.\r\n\r\n```bash\r\n $ conda list | grep datasets\r\ndatasets 2.0.1.dev0 pypi_0 pypi\r\n\r\n```\r\n\r\n```python\r\ndef preprocess_data(dataset):\r\n \"\"\"\r\n Helper function to pre-process HuggingFace Cifar-100 Dataset to remove fine_label and coarse_label columns and\r\n add is_flipped column\r\n Args:\r\n dataset: HuggingFace CIFAR-100 Dataset Object\r\n\r\n Returns:\r\n new_dataset: A Dataset object with \"img\" and \"is_flipped\" columns only\r\n\r\n \"\"\"\r\n # remove fine_label and coarse_label columns\r\n new_dataset = dataset.remove_columns(['fine_label', 'coarse_label'])\r\n # add the column for is_flipped\r\n new_dataset = new_dataset.add_column(name=\"is_flipped\", column=np.zeros((len(new_dataset)), dtype=np.uint8))\r\n\r\n return new_dataset\r\n\r\n\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping function that transforms some of the images upside-down.\r\n If the probability value (p) is 0.5 approximately half the images will be flipped upside-down\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image upside-down, Default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n # example['img'] = example['img']\r\n if rng.random() > p: # then flip the image and set is_flipped column to 1\r\n example['img'] = example['img'].transpose(\r\n 1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM)\r\n example['is_flipped'] = 1\r\n\r\n return example\r\n\r\nmy_test = preprocess_data(test_dataset)\r\nmy_test = my_test.map(generate_flipped_data)\r\n```\r\n\r\nThe output now shows the function was applied successfully:\r\n``` bash\r\n/home/rafay/anaconda3/envs/pytorch_new/bin/python /home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py\r\nDownloading builder script: 5.61kB [00:00, 3.16MB/s] \r\nDownloading metadata: 4.21kB [00:00, 2.56MB/s] \r\nReusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)\r\nReusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)\r\n100%|██████████| 10000/10000 [00:01<00:00, 5149.15ex/s]\r\n```\r\n" ]
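A minimal sketch of the workaround discussed in the comments above, assuming a CIFAR-100-style dataset with an `img` image column: reading the field inside the mapped function forces the lazy `Image` feature to decode, so every branch sees a `PIL.Image.Image` rather than raw bytes.

```python
from datasets import load_dataset
import numpy as np

rng = np.random.default_rng(seed=0)

def flip_some_images(example, p=0.5):
    # Accessing "img" triggers decoding of the lazy Image feature,
    # so `img` is a PIL.Image.Image in both branches below.
    img = example["img"]
    do_flip = rng.random() < p  # p is the probability of flipping
    example["img"] = img.transpose(1) if do_flip else img  # 1 == Image.FLIP_TOP_BOTTOM
    example["is_flipped"] = 1 if do_flip else 0
    return example

ds = load_dataset("cifar100", split="test")
ds = ds.remove_columns(["fine_label", "coarse_label"])
ds = ds.map(flip_some_images)  # `map` adds the new "is_flipped" column itself
```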
https://api.github.com/repos/huggingface/datasets/issues/1240
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1240/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1240/comments
https://api.github.com/repos/huggingface/datasets/issues/1240/events
https://github.com/huggingface/datasets/pull/1240
758,355,523
MDExOlB1bGxSZXF1ZXN0NTMzNTQxNjk5
1,240
Multi Domain Sentiment Analysis Dataset (MDSA)
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
9
2020-12-07T09:57:15Z
2022-10-03T09:39:43Z
2022-10-03T09:39:43Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1240/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1240/timeline
null
null
true
{ "diff_url": "https://github.com/huggingface/datasets/pull/1240.diff", "html_url": "https://github.com/huggingface/datasets/pull/1240", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1240.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1240" }
true
[ "can you also run `make style` to format the code ?", "I'll come back to this one in sometime :) @lhoestq ", "Also if you would use `xml.etree.ElementTree` to parse the XML it would be awesome, because right now you're using an external dependency `xmltodict `", "> Also if you would use xml.etree.ElementTree to parse the XML it would be awesome, because right now you're using an external dependency xmltodict\r\n\r\nIts pseudo xml so elementtree fails. xmltodict seems to be working quite good for this. do we have examples of pseudo xml datasets?", "for the other pseudo xml the text is parsed manually", "Can you add `xmltodict` to the test dependencies in setup.py please to fix the CI please ?", "Also can you add the dataset card with the tags and run `make style` ?", "Hi :) have you had a chance to fix the test dependency and apply `make style` ?\r\n\r\nFeel fee to ping me when it's ready for a review", "Thanks for your contribution, @abhishekkrthakur. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
https://api.github.com/repos/huggingface/datasets/issues/206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/206/comments
https://api.github.com/repos/huggingface/datasets/issues/206/events
https://github.com/huggingface/datasets/issues/206
625,842,989
MDU6SXNzdWU2MjU4NDI5ODk=
206
[Question] Combine 2 datasets which have the same columns
[]
closed
false
null
2
2020-05-27T16:25:52Z
2020-06-10T09:11:14Z
2020-06-10T09:11:14Z
null
Hi, I am using ``nlp`` to load personal datasets. I created summarization datasets in multiple languages based on wikinews. I have one dataset for English and one for German (French is getting ready as well). I want to keep these datasets independent because they need different pre-processing (adding different task-specific prefixes for T5: *summarize:* for English and *zusammenfassen:* for German). My issue is that I want to train T5 on the combined English and German datasets to see if it improves results. So I would like to combine 2 datasets (which have the same columns) into one and train T5 on it. I was wondering if there is a proper way to do it? I assume it can be done by combining all examples of each dataset, but maybe you have a better solution. Hoping this is clear enough, thanks a lot 😊 Best
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/206/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/206/timeline
null
completed
null
null
false
[ "We are thinking about ways to combine datasets for T5 in #217, feel free to share your thoughts about this.", "Ok great! I will look at it. Thanks" ]
https://api.github.com/repos/huggingface/datasets/issues/4700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4700/comments
https://api.github.com/repos/huggingface/datasets/issues/4700/events
https://github.com/huggingface/datasets/pull/4700
1,307,599,161
PR_kwDODunzps47jKNx
4,700
Support extract lz4 compressed data files
[]
closed
false
null
1
2022-07-18T08:41:31Z
2022-07-18T14:43:59Z
2022-07-18T14:31:47Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4700/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4700/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4700.diff", "html_url": "https://github.com/huggingface/datasets/pull/4700", "merged_at": "2022-07-18T14:31:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/4700.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4700" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/964
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/964/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/964/comments
https://api.github.com/repos/huggingface/datasets/issues/964/events
https://github.com/huggingface/datasets/pull/964
754,474,660
MDExOlB1bGxSZXF1ZXN0NTMwMzY4OTAy
964
Adding the WebNLG dataset
[]
closed
false
null
1
2020-12-01T15:05:23Z
2020-12-02T17:34:05Z
2020-12-02T17:34:05Z
null
This PR adds data from the WebNLG challenge, with one configuration per release and challenge iteration. More information can be found [here](https://webnlg-challenge.loria.fr/) Unfortunately, the data itself comes from a pretty large number of small XML files, so the dummy data ends up being quite large (8.4 MB even keeping only one example per file).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/964/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/964/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/964.diff", "html_url": "https://github.com/huggingface/datasets/pull/964", "merged_at": "2020-12-02T17:34:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/964.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/964" }
true
[ "This is task is part of the GEM suite so will actually need a more complete dataset card. I'm taking a break for now though and will get back to it before merging :) " ]
https://api.github.com/repos/huggingface/datasets/issues/716
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/716/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/716/comments
https://api.github.com/repos/huggingface/datasets/issues/716/events
https://github.com/huggingface/datasets/pull/716
714,952,888
MDExOlB1bGxSZXF1ZXN0NDk3OTQ1ODAw
716
Fixes #712 Attribute error in cell 3 of the overview notebook
[]
closed
false
null
1
2020-10-05T15:42:09Z
2020-10-05T15:46:38Z
2020-10-05T15:46:32Z
null
Fixes the Attribute error in cell 3 of the overview notebook
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/716/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/716/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/716.diff", "html_url": "https://github.com/huggingface/datasets/pull/716", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/716.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/716" }
true
[ "Referencing the wrong issue # in the commit message. Closing this to fix it again." ]
https://api.github.com/repos/huggingface/datasets/issues/1595
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1595/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1595/comments
https://api.github.com/repos/huggingface/datasets/issues/1595/events
https://github.com/huggingface/datasets/pull/1595
770,153,693
MDExOlB1bGxSZXF1ZXN0NTQxOTUwNDk4
1,595
Logiqa en
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
8
2020-12-17T15:42:00Z
2022-10-03T09:38:30Z
2022-10-03T09:38:30Z
null
LogiQA in English.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1595/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1595/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1595.diff", "html_url": "https://github.com/huggingface/datasets/pull/1595", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1595.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1595" }
true
[ "I'm getting an error when I try to create the dummy data:\r\n```python\r\naclifton@pop-os:~/data/hf_datasets_sprint/datasets$ python datasets-cli dummy_data ./datasets/logiqa_en/ --auto_generate \r\n2021-01-07 10:50:12.024791: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory\r\n2021-01-07 10:50:12.024814: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\nUsing custom data configuration default\r\nCouldn't generate dummy file 'datasets/dummy/1.1.0/dummy_data/master.zip/LogiQA-dataset-master/README.md'. Ignore that if this file is not useful for dummy data.\r\nDummy data generation done but dummy data test failed since splits ['train', 'test', 'validation'] have 0 examples for config 'default''.\r\nAutomatic dummy data generation failed for some configs of './datasets/logiqa_en/'\r\n```", "Hi ! Sorry for the delay\r\n\r\nTo fix your issue for the dummy data you must increase the number of lines that will be kept to generate the dummy files. By default it's 5, and as you need at least 8 lines here to yield one example you must increase this.\r\n\r\nYou can increase the number of lines to 32 for example by doing\r\n```\r\ndatasets-cli dummy_data ./datasets/logica_en --auto_generate --n_lines 32\r\n```\r\n\r\nAlso it looks like there are changes about other datasets in this PR (imppres). Can you fix that ? You may need to create another branch and another PR.", "To fix the branch issue, I went ahead and made a backup of the dataset then deleted my local copy of my fork of `datasets`. I then followed the [detailed guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md) from the beginning to reclone the fork and start a new branch. \r\n\r\nHowever, when it came time to create the dummy data I got the following error:\r\n```python\r\naclifton@pop-os:~/data/hf_datasets_sprint/datasets$ datasets-cli dummy_data ./datasets/logiqa_en --auto_generate --n_lines 32\r\n2021-02-03 11:23:23.145885: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory\r\n2021-02-03 11:23:23.145914: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\nUsing custom data configuration default\r\nCouldn't generate dummy file 'datasets/logiqa_en/dummy/1.1.0/dummy_data/master.zip/LogiQA-dataset-master/README.md'. 
Ignore that if this file is not useful for dummy data.\r\nTraceback (most recent call last):\r\n File \"/home/aclifton/anaconda3/bin/datasets-cli\", line 36, in <module>\r\n service.run()\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/commands/dummy_data.py\", line 317, in run\r\n keep_uncompressed=self._keep_uncompressed,\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/commands/dummy_data.py\", line 355, in _autogenerate_dummy_data\r\n dataset_builder._prepare_split(split_generator)\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/builder.py\", line 905, in _prepare_split\r\n example = self.info.features.encode_example(record)\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/features.py\", line 799, in encode_example\r\n return encode_nested_example(self, example)\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/features.py\", line 710, in encode_nested_example\r\n (k, encode_nested_example(sub_schema, sub_obj)) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/features.py\", line 710, in <genexpr>\r\n (k, encode_nested_example(sub_schema, sub_obj)) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/features.py\", line 737, in encode_nested_example\r\n return schema.encode_example(obj)\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/features.py\", line 522, in encode_example\r\n example_data = self.str2int(example_data)\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/features.py\", line 481, in str2int\r\n output.append(self._str2int[str(value)])\r\nKeyError: \"Some Cantonese don't like chili, so some southerners don't like chili.\"\r\n```", "Hi ! The error happens when the script is verifying that the generated dummy data work fine with the dataset script.\r\nApparently it fails because the text `\"Some Cantonese don't like chili, so some southerners don't like chili.\"` was given in a field that is a ClassLabel feature (probably the `answer` field), while it actually expects \"a\", \"b\", \"c\" or \"d\". Can you fix the script so that it returns the expected labels for this field instead of the text ?\r\n\r\nAlso it would be awesome to rename this field `answerKey` instead of `answer` to have the same column names as the other multiple-choice-QA datasets in the library :) ", "Ok getting closer! I got the dummy data to work. 
However I am now getting the following error:\r\n```python\r\naclifton@pop-os:~/data/hf_datasets_sprint/datasets$ RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_logiqa_en\r\n===================================================================== test session starts ======================================================================\r\nplatform linux -- Python 3.7.6, pytest-5.3.5, py-1.8.1, pluggy-0.13.1\r\nrootdir: /home/aclifton/data/hf_datasets_sprint/datasets\r\nplugins: astropy-header-0.1.2, xdist-2.1.0, doctestplus-0.5.0, forked-1.3.0, hypothesis-5.5.4, arraydiff-0.3, remotedata-0.3.2, openfiles-0.4.0\r\ncollected 0 items / 1 error \r\n\r\n============================================================================ ERRORS ============================================================================\r\n________________________________________________________ ERROR collecting tests/test_dataset_common.py _________________________________________________________\r\nImportError while importing test module '/home/aclifton/data/hf_datasets_sprint/datasets/tests/test_dataset_common.py'.\r\nHint: make sure your test modules/packages have valid Python names.\r\nTraceback:\r\ntests/test_dataset_common.py:42: in <module>\r\n from datasets.packaged_modules import _PACKAGED_DATASETS_MODULES\r\nE ModuleNotFoundError: No module named 'datasets.packaged_modules'\r\n----------------------------------------------------------------------- Captured stderr ------------------------------------------------------------------------\r\n2021-02-10 11:06:14.345510: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory\r\n2021-02-10 11:06:14.345551: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n======================================================================= warnings summary =======================================================================\r\n/home/aclifton/anaconda3/lib/python3.7/site-packages/tensorflow/python/autograph/utils/testing.py:21\r\n /home/aclifton/anaconda3/lib/python3.7/site-packages/tensorflow/python/autograph/utils/testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n\r\n/home/aclifton/anaconda3/lib/python3.7/site-packages/apache_beam/typehints/typehints.py:693\r\n /home/aclifton/anaconda3/lib/python3.7/site-packages/apache_beam/typehints/typehints.py:693: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working\r\n if not isinstance(type_params, collections.Iterable):\r\n\r\n/home/aclifton/anaconda3/lib/python3.7/site-packages/apache_beam/typehints/typehints.py:532\r\n /home/aclifton/anaconda3/lib/python3.7/site-packages/apache_beam/typehints/typehints.py:532: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working\r\n if not isinstance(type_params, (collections.Sequence, set)):\r\n\r\n/home/aclifton/anaconda3/lib/python3.7/site-packages/elasticsearch/compat.py:38\r\n /home/aclifton/anaconda3/lib/python3.7/site-packages/elasticsearch/compat.py:38: DeprecationWarning: Using or importing the ABCs 
from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working\r\n from collections import Mapping\r\n\r\n-- Docs: https://docs.pytest.org/en/latest/warnings.html\r\n================================================================= 4 warnings, 1 error in 2.74s =================================================================\r\nERROR: not found: /home/aclifton/data/hf_datasets_sprint/datasets/tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_logiqa_en\r\n(no name '/home/aclifton/data/hf_datasets_sprint/datasets/tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_logiqa_en' in any of [<Module test_dataset_common.py>])\r\n\r\n```", "Hi ! It looks like the version of `datasets` that is installed in your environment doesn't match the version of `datasets` you're using for the tests. Can you try uninstalling datasets and reinstall it again ?\r\n```\r\npip uninstall datasets -y\r\npip install -e .\r\n```", "Closer still!\r\n```python\r\naclifton@pop-os:~/data/hf_datasets_sprint/datasets$ git commit\r\n[logiqa_en 2664fe7f] fixed several issues with logiqa_en.\r\n 4 files changed, 324 insertions(+)\r\n create mode 100644 datasets/logiqa_en/README.md\r\n create mode 100644 datasets/logiqa_en/dataset_infos.json\r\n create mode 100644 datasets/logiqa_en/dummy/1.1.0/dummy_data.zip\r\n create mode 100644 datasets/logiqa_en/logiqa_en.py\r\naclifton@pop-os:~/data/hf_datasets_sprint/datasets$ git fetch upstream\r\nremote: Enumerating objects: 1, done.\r\nremote: Counting objects: 100% (1/1), done.\r\nremote: Total 1 (delta 0), reused 0 (delta 0), pack-reused 0\r\nUnpacking objects: 100% (1/1), 590 bytes | 590.00 KiB/s, done.\r\nFrom https://github.com/huggingface/datasets\r\n 6e114a0c..318b09eb master -> upstream/master\r\naclifton@pop-os:~/data/hf_datasets_sprint/datasets$ git rebase upstream/master \r\nerror: cannot rebase: You have unstaged changes.\r\nerror: Please commit or stash them.\r\naclifton@pop-os:~/data/hf_datasets_sprint/datasets$ git push -u origin logiqa_en\r\nUsername for 'https://github.com': aclifton314\r\nPassword for 'https://aclifton314@github.com': \r\nTo https://github.com/aclifton314/datasets\r\n ! [rejected] logiqa_en -> logiqa_en (non-fast-forward)\r\nerror: failed to push some refs to 'https://github.com/aclifton314/datasets'\r\nhint: Updates were rejected because the tip of your current branch is behind\r\nhint: its remote counterpart. Integrate the remote changes (e.g.\r\nhint: 'git pull ...') before pushing again.\r\nhint: See the 'Note about fast-forwards' in 'git push --help' for details.\r\n```", "Thanks for your contribution, @aclifton314. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
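The `KeyError` in the thread above comes from yielding the full answer text where a `ClassLabel` expects one of the label names; a minimal sketch of the feature definition suggested in the review (the other field names are assumptions):

```python
import datasets

features = datasets.Features(
    {
        "context": datasets.Value("string"),
        "query": datasets.Value("string"),
        "options": datasets.Sequence(datasets.Value("string")),
        # ClassLabel encodes one of the listed names ("a".."d");
        # passing the answer text itself raises the KeyError above.
        "answerKey": datasets.ClassLabel(names=["a", "b", "c", "d"]),
    }
)
```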
https://api.github.com/repos/huggingface/datasets/issues/1493
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1493/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1493/comments
https://api.github.com/repos/huggingface/datasets/issues/1493/events
https://github.com/huggingface/datasets/pull/1493
762,979,415
MDExOlB1bGxSZXF1ZXN0NTM3NDc0MDc1
1,493
Added RONEC dataset.
[]
closed
false
null
4
2020-12-11T22:14:50Z
2020-12-21T14:48:56Z
2020-12-21T14:48:56Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1493/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1493/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1493.diff", "html_url": "https://github.com/huggingface/datasets/pull/1493", "merged_at": "2020-12-21T14:48:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/1493.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1493" }
true
[ "Thanks for the PR @iliemihai . \r\n\r\nFew comments - \r\n\r\nCan you run - \r\n`python datasets-cli dummy_data ./datasets/ronec --auto_generate` to generate dummy data.\r\n\r\nAlso, before committing files run : \r\n`make style`\r\n`flake8 datasets`\r\nthen you can add and commit files.", "> Thanks for the PR @iliemihai .\r\n> \r\n> Few comments -\r\n> \r\n> Can you run -\r\n> `python datasets-cli dummy_data ./datasets/ronec --auto_generate` to generate dummy data.\r\n> \r\n> Also, before committing files run :\r\n> `make style`\r\n> `flake8 datasets`\r\n> then you can add and commit files.\r\n\r\nSorry, forgot to generate dummy data. I will do it now :D", "Awesome, good job @iliemihai !\r\nI think the PR is ready to merge.\r\n@lhoestq would you mind double-checking this ?", "Had to regenerate the dummy data since I just found out they were empty files" ]
https://api.github.com/repos/huggingface/datasets/issues/5686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5686/comments
https://api.github.com/repos/huggingface/datasets/issues/5686/events
https://github.com/huggingface/datasets/pull/5686
1,646,308,228
PR_kwDODunzps5NMXdu
5,686
set dev version
[]
closed
false
null
3
2023-03-29T18:24:13Z
2023-03-29T18:33:49Z
2023-03-29T18:24:22Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5686/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5686/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5686.diff", "html_url": "https://github.com/huggingface/datasets/pull/5686", "merged_at": "2023-03-29T18:24:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/5686.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5686" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5686). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008460 / 0.011353 (-0.002893) | 0.006114 / 0.011008 (-0.004894) | 0.121496 / 0.038508 (0.082987) | 0.035030 / 0.023109 (0.011920) | 0.397778 / 0.275898 (0.121880) | 0.429020 / 0.323480 (0.105540) | 0.007811 / 0.007986 (-0.000174) | 0.006269 / 0.004328 (0.001940) | 0.098895 / 0.004250 (0.094645) | 0.045407 / 0.037052 (0.008355) | 0.413679 / 0.258489 (0.155189) | 0.437491 / 0.293841 (0.143650) | 0.053207 / 0.128546 (-0.075339) | 0.018471 / 0.075646 (-0.057175) | 0.414800 / 0.419271 (-0.004472) | 0.060864 / 0.043533 (0.017332) | 0.398501 / 0.255139 (0.143362) | 0.421142 / 0.283200 (0.137942) | 0.114908 / 0.141683 (-0.026775) | 1.678630 / 1.452155 (0.226475) | 1.782313 / 1.492716 (0.289596) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280783 / 0.018006 (0.262777) | 0.591573 / 0.000490 (0.591083) | 0.005797 / 0.000200 (0.005597) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030431 / 0.037411 (-0.006981) | 0.117342 / 0.014526 (0.102816) | 0.128456 / 0.176557 (-0.048101) | 0.198782 / 0.737135 (-0.538354) | 0.128501 / 0.296338 (-0.167838) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.603073 / 0.215209 (0.387864) | 6.101354 / 2.077655 (4.023699) | 2.527812 / 1.504120 (1.023692) | 2.101468 / 1.541195 (0.560273) | 2.092813 / 1.468490 (0.624323) | 1.182150 / 4.584777 (-3.402627) | 5.389278 / 3.745712 (1.643566) | 5.041001 / 5.269862 (-0.228860) | 2.650581 / 4.565676 (-1.915095) | 0.138761 / 0.424275 (-0.285514) | 0.014209 / 0.007607 (0.006602) | 0.748596 / 0.226044 (0.522552) | 7.373937 / 2.268929 (5.105008) | 3.245882 / 55.444624 (-52.198742) | 2.523569 / 6.876477 (-4.352908) | 2.581343 / 2.142072 (0.439270) | 1.340436 / 4.805227 (-3.464791) | 0.241388 / 6.500664 (-6.259276) | 0.076634 / 0.075469 (0.001164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.480237 / 1.841788 (-0.361551) | 16.781338 / 8.074308 (8.707030) | 19.735028 / 10.191392 (9.543636) | 0.256872 / 0.680424 (-0.423551) | 0.029211 / 0.534201 (-0.504990) | 0.503292 / 0.579283 (-0.075991) | 0.584510 / 0.434364 (0.150146) | 0.580293 / 0.540337 (0.039955) | 0.678863 / 1.386936 (-0.708073) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009972 / 0.011353 (-0.001381) | 0.006107 / 0.011008 (-0.004902) | 0.096188 / 0.038508 (0.057680) | 0.033320 / 0.023109 (0.010210) | 0.420789 / 0.275898 (0.144891) | 0.460488 / 0.323480 (0.137008) | 0.006492 / 0.007986 (-0.001493) | 0.005325 / 0.004328 (0.000997) | 0.094974 / 0.004250 (0.090723) | 0.047708 / 0.037052 (0.010655) | 0.426689 / 0.258489 (0.168200) | 0.476440 / 0.293841 (0.182599) | 0.052776 / 0.128546 (-0.075770) | 0.018779 / 0.075646 (-0.056868) | 0.119598 / 0.419271 (-0.299673) | 0.061800 / 0.043533 (0.018267) | 0.421305 / 0.255139 (0.166166) | 0.441125 / 0.283200 (0.157925) | 0.114221 / 0.141683 (-0.027462) | 1.712681 / 1.452155 (0.260526) | 1.852316 / 1.492716 (0.359600) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.272412 / 0.018006 (0.254405) | 0.583996 / 0.000490 (0.583506) | 0.000505 / 0.000200 
(0.000305) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029553 / 0.037411 (-0.007858) | 0.124921 / 0.014526 (0.110395) | 0.133338 / 0.176557 (-0.043218) | 0.193811 / 0.737135 (-0.543325) | 0.147973 / 0.296338 (-0.148365) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.595241 / 0.215209 (0.380032) | 6.012015 / 2.077655 (3.934360) | 2.611295 / 1.504120 (1.107175) | 2.290127 / 1.541195 (0.748932) | 2.300366 / 1.468490 (0.831876) | 1.197602 / 4.584777 (-3.387175) | 5.439064 / 3.745712 (1.693352) | 2.906088 / 5.269862 (-2.363773) | 1.919183 / 4.565676 (-2.646493) | 0.132166 / 0.424275 (-0.292109) | 0.014544 / 0.007607 (0.006937) | 0.726377 / 0.226044 (0.500333) | 7.361023 / 2.268929 (5.092094) | 3.289266 / 55.444624 (-52.155358) | 2.635570 / 6.876477 (-4.240907) | 2.595691 / 2.142072 (0.453619) | 1.329458 / 4.805227 (-3.475769) | 0.239419 / 6.500664 (-6.261245) | 0.076316 / 0.075469 (0.000847) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.547616 / 1.841788 (-0.294172) | 17.374315 / 8.074308 (9.300007) | 20.216275 / 10.191392 (10.024883) | 0.252102 / 0.680424 (-0.428322) | 0.027535 / 0.534201 (-0.506665) | 0.524618 / 0.579283 (-0.054666) | 0.596803 / 0.434364 (0.162439) | 0.652632 / 0.540337 (0.112294) | 0.762272 / 1.386936 (-0.624664) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8c7d4b2f981f8cf639dcbd80f40a41aa5b1693c6 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008236 / 0.011353 (-0.003117) | 0.006186 / 0.011008 (-0.004822) | 0.117852 / 0.038508 (0.079344) | 0.034711 / 0.023109 (0.011602) | 0.447564 / 0.275898 (0.171666) | 0.438727 / 0.323480 (0.115247) | 0.006576 / 0.007986 (-0.001410) | 0.005903 / 0.004328 (0.001574) | 0.094309 / 0.004250 (0.090059) | 0.042760 / 0.037052 (0.005708) | 0.393269 / 0.258489 (0.134780) | 0.438061 / 0.293841 (0.144220) | 0.059029 / 0.128546 (-0.069517) | 0.020296 / 0.075646 (-0.055350) | 0.412057 / 0.419271 (-0.007215) | 0.059808 / 0.043533 (0.016275) | 0.407243 / 0.255139 (0.152104) | 0.414290 / 0.283200 (0.131090) | 0.107701 / 0.141683 (-0.033981) | 1.671522 / 1.452155 (0.219367) | 1.775055 / 1.492716 (0.282338) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275242 / 0.018006 (0.257236) | 0.599698 / 0.000490 (0.599208) | 0.001289 / 0.000200 (0.001089) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029579 / 0.037411 (-0.007832) | 0.127249 / 0.014526 (0.112723) | 0.137431 / 0.176557 (-0.039126) | 0.220330 / 0.737135 (-0.516805) | 0.133540 / 0.296338 (-0.162798) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.571989 / 0.215209 (0.356780) | 5.931503 / 2.077655 (3.853848) | 2.526646 / 1.504120 (1.022527) | 2.189476 / 1.541195 (0.648281) | 2.151935 / 1.468490 (0.683444) | 1.242440 / 4.584777 (-3.342337) | 5.599675 / 3.745712 (1.853963) | 3.242035 / 5.269862 (-2.027826) | 2.368361 / 4.565676 (-2.197315) | 0.145659 / 0.424275 (-0.278616) | 0.013813 / 0.007607 (0.006206) | 0.782495 / 0.226044 (0.556451) | 7.861619 / 2.268929 (5.592690) | 3.241001 / 55.444624 (-52.203623) | 2.611025 / 6.876477 (-4.265452) | 2.667263 / 2.142072 (0.525191) | 1.429992 / 4.805227 (-3.375235) | 0.243008 / 6.500664 (-6.257656) | 0.083686 / 0.075469 (0.008217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.565526 / 1.841788 (-0.276262) | 18.260815 / 8.074308 (10.186507) | 22.586133 / 10.191392 (12.394741) | 0.231864 / 0.680424 (-0.448559) | 0.030877 / 0.534201 (-0.503324) | 0.569726 / 0.579283 (-0.009557) | 0.678638 / 
0.434364 (0.244274) | 0.611810 / 0.540337 (0.071472) | 0.718771 / 1.386936 (-0.668165) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009398 / 0.011353 (-0.001955) | 0.006452 / 0.011008 (-0.004556) | 0.103352 / 0.038508 (0.064844) | 0.034773 / 0.023109 (0.011664) | 0.523782 / 0.275898 (0.247884) | 0.523554 / 0.323480 (0.200074) | 0.006990 / 0.007986 (-0.000996) | 0.004994 / 0.004328 (0.000666) | 0.102199 / 0.004250 (0.097949) | 0.050087 / 0.037052 (0.013035) | 0.496662 / 0.258489 (0.238173) | 0.563130 / 0.293841 (0.269289) | 0.052851 / 0.128546 (-0.075695) | 0.019824 / 0.075646 (-0.055822) | 0.122657 / 0.419271 (-0.296614) | 0.057714 / 0.043533 (0.014181) | 0.470502 / 0.255139 (0.215363) | 0.518908 / 0.283200 (0.235708) | 0.114374 / 0.141683 (-0.027309) | 1.795918 / 1.452155 (0.343763) | 1.957461 / 1.492716 (0.464744) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.303921 / 0.018006 (0.285915) | 0.584406 / 0.000490 (0.583916) | 0.000444 / 0.000200 (0.000244) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032254 / 0.037411 (-0.005158) | 0.129966 / 0.014526 (0.115440) | 0.151000 / 0.176557 (-0.025557) | 0.234060 / 0.737135 (-0.503076) | 0.149444 / 0.296338 (-0.146895) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.666627 / 0.215209 (0.451418) | 7.054701 / 2.077655 (4.977046) | 2.836895 / 1.504120 (1.332775) | 
2.561994 / 1.541195 (1.020799) | 2.672460 / 1.468490 (1.203970) | 1.411929 / 4.584777 (-3.172848) | 6.026918 / 3.745712 (2.281206) | 3.341745 / 5.269862 (-1.928116) | 2.280317 / 4.565676 (-2.285359) | 0.156635 / 0.424275 (-0.267641) | 0.014256 / 0.007607 (0.006649) | 0.804830 / 0.226044 (0.578786) | 8.106960 / 2.268929 (5.838031) | 3.597452 / 55.444624 (-51.847172) | 3.002847 / 6.876477 (-3.873630) | 2.931160 / 2.142072 (0.789088) | 1.484172 / 4.805227 (-3.321056) | 0.254166 / 6.500664 (-6.246498) | 0.080554 / 0.075469 (0.005085) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.809909 / 1.841788 (-0.031879) | 18.988994 / 8.074308 (10.914686) | 23.153442 / 10.191392 (12.962050) | 0.250554 / 0.680424 (-0.429870) | 0.048677 / 0.534201 (-0.485524) | 0.574109 / 0.579283 (-0.005174) | 0.640917 / 0.434364 (0.206553) | 0.725215 / 0.540337 (0.184878) | 0.878234 / 1.386936 (-0.508702) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e3667d6e17d68503469c8e88ec344b7cccfa2346 \"CML watermark\")\n" ]
https://api.github.com/repos/huggingface/datasets/issues/548
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/548/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/548/comments
https://api.github.com/repos/huggingface/datasets/issues/548/events
https://github.com/huggingface/datasets/pull/548
689,285,996
MDExOlB1bGxSZXF1ZXN0NDc2MzYzMjU1
548
[Breaking] Switch text loading to multi-threaded PyArrow loading
[]
closed
false
null
5
2020-08-31T15:15:41Z
2020-09-08T10:19:58Z
2020-09-08T10:19:57Z
null
Test whether we can get better performance for large-scale text datasets by using multi-threaded text file loading based on the Apache Arrow multi-threaded CSV loader. If it works well, it would fix #546. **Breaking change**: the text lines no longer include final line-breaks.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/548/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/548/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/548.diff", "html_url": "https://github.com/huggingface/datasets/pull/548", "merged_at": "2020-09-08T10:19:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/548.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/548" }
true
[ "Awesome !\r\nAlso I was wondering if we should try to make the hashing of the `data_files` faster (it is used to build the cache directory of datasets like `text` or `json`). Right now it reads each file and hashes all of its data. We could simply hash the path and some metadata including the `time last modified` tag no ? Apparently we can get this tag with `os.path.getmtime(path)`", "I just rebased from master to include the hashing changes from #573 ", "I think this is ready to merge, no?", "Indeed it's ready to merge :)", "Ok added the breaking change info and we can merge indeed.\r\n" ]
https://api.github.com/repos/huggingface/datasets/issues/4088
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4088/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4088/comments
https://api.github.com/repos/huggingface/datasets/issues/4088/events
https://github.com/huggingface/datasets/pull/4088
1,191,901,172
PR_kwDODunzps41l4yE
4,088
Remove unused legacy Beam utils
[]
closed
false
null
1
2022-04-04T14:43:51Z
2022-04-05T15:23:27Z
2022-04-05T15:17:41Z
null
This PR removes unused legacy custom `WriteToParquet`, once official Apache Beam includes the patch since version 2.22.0: - Patch PR: https://github.com/apache/beam/pull/11699 - Issue: https://issues.apache.org/jira/browse/BEAM-10022 In relation with: - #204
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4088/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4088/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4088.diff", "html_url": "https://github.com/huggingface/datasets/pull/4088", "merged_at": "2022-04-05T15:17:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/4088.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4088" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4009/comments
https://api.github.com/repos/huggingface/datasets/issues/4009/events
https://github.com/huggingface/datasets/issues/4009
1,179,658,611
I_kwDODunzps5GUClz
4,009
AMI load_dataset error: sndfile library not found
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
1
2022-03-24T15:13:38Z
2022-03-24T15:46:38Z
2022-03-24T15:17:29Z
null
## Describe the bug Getting an error message when loading the AMI dataset. ## Steps to reproduce the bug `python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])" ` ## Expected results The AMI dataset loads successfully and the first validation example is printed. ## Actual results Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset use_auth_token=use_auth_token, File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare ) from None OSError: Cannot find data file. Original error: sndfile library not found ## Environment info - `datasets` version: 1.18.3 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11 - Python version: 3.7.3 - PyArrow version: 7.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4009/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4009/timeline
null
completed
null
null
false
[ "Issue unresolved, see [4000](https://github.com/huggingface/datasets/issues/4009#issue-1179658611)" ]
https://api.github.com/repos/huggingface/datasets/issues/994
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/994/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/994/comments
https://api.github.com/repos/huggingface/datasets/issues/994/events
https://github.com/huggingface/datasets/pull/994
755,146,834
MDExOlB1bGxSZXF1ZXN0NTMwOTE1MDc2
994
Add Sepedi NER corpus
[]
closed
false
null
2
2020-12-02T10:30:07Z
2020-12-03T10:19:14Z
2020-12-02T18:20:08Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/994/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/994/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/994.diff", "html_url": "https://github.com/huggingface/datasets/pull/994", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/994.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/994" }
true
[ "Looks like the PR includes commits about many other files.\r\nCould you create a clean branch from master, and create another PR ?", "Sorry, will do that. " ]
https://api.github.com/repos/huggingface/datasets/issues/1158
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1158/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1158/comments
https://api.github.com/repos/huggingface/datasets/issues/1158/events
https://github.com/huggingface/datasets/pull/1158
757,658,926
MDExOlB1bGxSZXF1ZXN0NTMzMDAxMjM0
1,158
Add BBC Hindi NLI Dataset
[]
closed
false
null
7
2020-12-05T11:25:34Z
2021-02-05T09:48:31Z
2021-02-05T09:48:31Z
null
# Dataset Card for BBC Hindi NLI Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - Homepage: https://github.com/midas-research/hindi-nli-data - Paper: https://www.aclweb.org/anthology/2020.aacl-main.71 - Point of Contact: https://github.com/midas-research/hindi-nli-data ### Dataset Summary - Dataset for Natural Language Inference in the Hindi language. The BBC Hindi dataset consists of textual-entailment pairs. - Each row of the dataset is made up of 4 columns - Premise, Hypothesis, Label and Topic. - Premise and Hypothesis are written in Hindi while Entailment_Label is in English. - Entailment_label is of 2 types - entailed and not-entailed. - The dataset can be used to train models for Natural Language Inference tasks in the Hindi language. ### Supported Tasks and Leaderboards - Natural Language Inference for Hindi ### Languages The dataset is in Hindi. ## Dataset Structure - Data is structured in TSV format. - Train, validation and test sets are in separate files. ### Data Instances An example of 'train' looks as follows. ``` {'hypothesis': 'यह खबर की सूचना है|', 'label': 'entailed', 'premise': 'गोपनीयता की नीति', 'topic': '1'} ``` ### Data Fields - Each row contains 4 columns - Premise, Hypothesis, Label and Topic. ### Data Splits - Train: 15553 - Valid: 2581 - Test: 2593 ## Dataset Creation - We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available BBC Hindi news text classification datasets in Hindi and pose them as TE problems. - In this recasting process, we build template hypotheses for each class in the label taxonomy. - Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples. - For more information on the recasting process, refer to the paper https://www.aclweb.org/anthology/2020.aacl-main.71 ### Source Data The source dataset for the recasting process is the BBC Hindi Headlines Dataset (https://github.com/NirantK/hindi2vec/releases/tag/bbc-hindi-v0.1) #### Initial Data Collection and Normalization - The BBC Hindi News Classification Dataset contains 4,335 Hindi news headlines tagged across 14 categories: India, Pakistan, news, international, entertainment, sport, science, China, learning english, social, southasia, business, institutional, multimedia. - We processed this dataset to combine two sets of relevant but low-prevalence classes. - Namely, we merged the samples from Pakistan, China, international, and southasia as one class called international. - Likewise, we also merged samples from news, business, social, learning english, and institutional as news. - Lastly, we also removed the class multimedia because there were very few samples. #### Who are the source language producers? Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Annotations #### Annotation process The annotation process has been described in the Dataset Creation section. #### Who are the annotators? Annotation is done automatically. ### Personal and Sensitive Information No personal or sensitive information is mentioned in the dataset. ## Considerations for Using the Data Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Discussion of Biases Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Other Known Limitations No other known limitations. ## Additional Information Please refer to this link: https://github.com/midas-research/hindi-nli-data ### Dataset Curators It is written in the repo https://github.com/avinsit123/hindi-nli-data that: - This corpus can be used freely for research purposes. - The paper listed below provides details of the creation and use of the corpus. If you use the corpus, then please cite the paper. - If interested in commercial use of the corpus, send email to midas@iiitd.ac.in. - If you use the corpus in a product or application, then please credit the authors and the Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus. - The Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications. - Rather than redistributing the corpus, please direct interested parties to this page. - Please feel free to send us an email: - with feedback regarding the corpus. - with information on how you have used the corpus. - if interested in having us analyze your data for natural language inference. - if interested in a collaborative research project. ### Licensing Information Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi). Please contact the authors for any information on the dataset. ### Citation Information ``` @inproceedings{uppal-etal-2020-two, title = "Two-Step Classification using Recasted Data for Low Resource Settings", author = "Uppal, Shagun and Gupta, Vivek and Swaminathan, Avinash and Zhang, Haimin and Mahata, Debanjan and Gosangi, Rakesh and Shah, Rajiv Ratn and Stent, Amanda", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.aacl-main.71", pages = "706--719", abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. 
Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.", } ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1158/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1158/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1158.diff", "html_url": "https://github.com/huggingface/datasets/pull/1158", "merged_at": "2021-02-05T09:48:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/1158.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1158" }
true
[ "Hi @avinsit123 !\r\nDid you manage to rename the dataset and apply the suggestion I mentioned for the data fields ?\r\nFeel free to ping me when you're ready for a review :) ", "Hi @avinsit123 ! Have you had a chance to take a look at my suggestions ?\r\nLet me know if you have questions or if I can help", "@lhoestq sorry I completely forgot about this pr. I will complete it ASAP.", "@lhoestq I have fixed the code to resolve all your comments. Pls do check. I also don't seem to know why the CI tests are failing as I ran all the tests in CONTRIBUTING.md on my local pc and they passed.", "@lhoestq thanks for ur patient review :) . I also wish to add similar 3 more NLI hindi datasets. Hope to do within this week.", "@lhoestq would this be merged to master?", "Yes of course ;)\r\nmerging now !" ]
https://api.github.com/repos/huggingface/datasets/issues/1683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1683/comments
https://api.github.com/repos/huggingface/datasets/issues/1683/events
https://github.com/huggingface/datasets/issues/1683
778,287,612
MDU6SXNzdWU3NzgyODc2MTI=
1,683
`ArrowInvalid` occurs while running `Dataset.map()` function for DPRContext
[]
closed
false
null
2
2021-01-04T18:47:53Z
2021-01-04T19:04:45Z
2021-01-04T19:04:45Z
null
It seems to fail the final batch ): steps to reproduce: ``` from datasets import load_dataset from elasticsearch import Elasticsearch import torch from transformers import file_utils, set_seed from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast MAX_SEQ_LENGTH = 256 ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base", cache_dir="../datasets/") ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained( "facebook/dpr-ctx_encoder-single-nq-base", cache_dir="..datasets/" ) dataset = load_dataset('text', data_files='data/raw/ARC_Corpus.txt', cache_dir='../datasets') torch.set_grad_enabled(False) ds_with_embeddings = dataset.map( lambda example: { 'embeddings': ctx_encoder( **ctx_tokenizer( example["text"], padding='max_length', truncation=True, max_length=MAX_SEQ_LENGTH, return_tensors="pt" ) )[0][0].numpy(), }, batched=True, load_from_cache_file=False, batch_size=1000 ) ``` ARC Corpus can be obtained from [here](https://ai2-datasets.s3-us-west-2.amazonaws.com/arc/ARC-V1-Feb2018.zip) And then the error: ``` --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) <ipython-input-13-67d139bb2ed3> in <module> 14 batched=True, 15 load_from_cache_file=False, ---> 16 batch_size=1000 17 ) ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc) 301 num_proc=num_proc, 302 ) --> 303 for k, dataset in self.items() 304 } 305 ) ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0) 301 num_proc=num_proc, 302 ) --> 303 for k, dataset in self.items() 304 } 305 ) ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1257 fn_kwargs=fn_kwargs, 1258 new_fingerprint=new_fingerprint, -> 1259 update_data=update_data, 1260 ) 1261 else: ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 155 } 156 # apply actual function --> 157 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 158 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 159 # re-apply format to the output ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 161 # Call actual function 162 --> 163 out = func(self, *args, **kwargs) 164 165 # Update fingerprint of in-place transforms + update in-place history of transforms ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data) 1526 if update_data: 1527 batch = cast_to_python_objects(batch) -> 1528 
writer.write_batch(batch) 1529 if update_data: 1530 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 276 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type) 277 typed_sequence_examples[col] = typed_sequence --> 278 pa_table = pa.Table.from_pydict(typed_sequence_examples) 279 self.write_table(pa_table) 280 ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict() ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays() ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.validate() ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Column 1 named text expected length 768 but got length 1000 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1683/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1683/timeline
null
completed
null
null
false
[ "Looks like the mapping function returns a dictionary with a 768-dim array in the `embeddings` field. Since the map is batched, we actually expect the `embeddings` field to be an array of shape (batch_size, 768) to have one embedding per example in the batch.\r\n\r\nTo fix that can you try to remove one of the `[0]` ? In my opinion you only need one of them, not two.", "It makes sense :D\r\n\r\nIt seems to work! Thanks a lot :))\r\n\r\nClosing the issue" ]
https://api.github.com/repos/huggingface/datasets/issues/970
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/970/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/970/comments
https://api.github.com/repos/huggingface/datasets/issues/970/events
https://github.com/huggingface/datasets/pull/970
754,697,489
MDExOlB1bGxSZXF1ZXN0NTMwNTUxNTkz
970
Add SWAG
[]
closed
false
null
0
2020-12-01T20:21:05Z
2020-12-02T09:55:16Z
2020-12-02T09:55:15Z
null
Commonsense NLI -> https://rowanzellers.com/swag/
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/970/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/970/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/970.diff", "html_url": "https://github.com/huggingface/datasets/pull/970", "merged_at": "2020-12-02T09:55:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/970.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/970" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4769
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4769/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4769/comments
https://api.github.com/repos/huggingface/datasets/issues/4769/events
https://github.com/huggingface/datasets/issues/4769
1,322,121,554
I_kwDODunzps5OzflS
4,769
Fail to process SQuADv1.1 datasets with max_seq_length=128, doc_stride=96.
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
0
2022-07-29T11:18:24Z
2022-07-29T11:18:24Z
null
null
## Describe the bug `datasets` fails to process SQuADv1.1 with max_seq_length=128, doc_stride=96 when calling datasets["train"].train_dataset.map(). ## Steps to reproduce the bug I used the Hugging Face [TF2 question-answering examples](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering). My script is as follows: ``` python run_qa.py \ --model_name_or_path $BERT_DIR \ --dataset_name $SQUAD_DIR \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 128 \ --doc_stride 96 \ --output_dir $OUTPUT \ --save_steps 10000 \ --overwrite_cache \ --overwrite_output_dir \ ``` ## Expected results SQuADv1.1 is processed normally with max_seq_length=128, doc_stride=96. ## Actual results ``` INFO:__main__:Padding all batches to max length because argument was set or we're on TPU. WARNING:datasets.fingerprint:Parameter 'function'=<function main.<locals>.prepare_train_features at 0x7f15bc2d07a0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed. 0%| | 0/88 [00:00<?, ?ba/s]thread '<unnamed>' panicked at 'assertion failed: stride < max_len', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/encoding.rs:311:9 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace 0%| | 0/88 [00:00<?, ?ba/s] Traceback (most recent call last): File "run_qa.py", line 743, in <module> main() File "run_qa.py", line 485, in main load_from_cache_file=not data_args.overwrite_cache, File "/anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2394, in map desc=desc, File "/anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 551, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 518, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/anaconda3/envs/py37/lib/python3.7/site-packages/datasets/fingerprint.py", line 458, in wrapper out = func(self, *args, **kwargs) File "anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2768, in _map_single offset=offset, File "anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2644, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) File "anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2336, in decorated result = f(decorated_item, *args, **kwargs) File "run_qa.py", line 410, in prepare_train_features padding=padding, File "anaconda3/envs/py37/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2512, in __call__ **kwargs, File "anaconda3/envs/py37/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2703, in batch_encode_plus **kwargs, File "anaconda3/envs/py37/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 429, in _batch_encode_plus is_pretokenized=is_split_into_words, pyo3_runtime.PanicException: assertion failed: stride < max_len Traceback (most
recent call last): File "./data/SQuADv1.1/evaluate-v1.1.py", line 92, in <module> with open(args.prediction_file) as prediction_file: FileNotFoundError: [Errno 2] No such file or directory: './output/bert_base_squadv1.1_tf2/eval_predictions.json' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Ubuntu, pytorch=1.11.0, tensorflow-gpu=2.9.1 - Python version: 2.7 - PyArrow version: 8.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4769/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4769/timeline
null
null
null
null
false
[]
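A hedged workaround for the record above, not a confirmed fix: the Rust assertion `stride < max_len` suggests the effective stride must stay strictly below the length budget left for the context after the question and special tokens, so capping `doc_stride` well under `max_seq_length` before tokenizing sidesteps the panic:

```python
# Guard sketch: keep the stride comfortably below the sequence budget
# before handing both values to the tokenizer.
max_seq_length = 128
doc_stride = 96
if doc_stride >= max_seq_length // 2:
    doc_stride = max_seq_length // 2  # e.g. 64; leaves room for the question
print(f"using doc_stride={doc_stride} with max_seq_length={max_seq_length}")
```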
https://api.github.com/repos/huggingface/datasets/issues/74
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/74/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/74/comments
https://api.github.com/repos/huggingface/datasets/issues/74/events
https://github.com/huggingface/datasets/pull/74
616,511,101
MDExOlB1bGxSZXF1ZXN0NDE2NjA3MDcy
74
fix overflow check
[]
closed
false
null
0
2020-05-12T09:38:01Z
2020-05-12T10:04:39Z
2020-05-12T10:04:38Z
null
I did some tests and unfortunately the test ``` pa_array.nbytes > MAX_BATCH_BYTES ``` doesn't work. Indeed, for a StructArray, `nbytes` can be less than 2GB even if there is an overflow (it loops...). I don't think we can do a proper overflow test against the 2GB limit... For now I replaced it with a sanity check on the first element.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/74/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/74/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/74.diff", "html_url": "https://github.com/huggingface/datasets/pull/74", "merged_at": "2020-05-12T10:04:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/74.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/74" }
true
[]
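The replacement check described in that PR — size-checking the first element rather than trusting `nbytes` on the whole StructArray — might look like the sketch below; the threshold and helper name are illustrative, not the constants from the PR:

```python
import pyarrow as pa

MAX_FIRST_EXAMPLE_BYTES = 100 << 20  # illustrative threshold

def first_element_sanity_check(examples):
    # `nbytes` on a very large StructArray can wrap past 2GB, so
    # estimate the batch size from the first element instead.
    approx = pa.array(examples[:1]).nbytes
    if approx > MAX_FIRST_EXAMPLE_BYTES:
        raise ValueError(
            f"first element is ~{approx} bytes; batch may overflow 2GB"
        )
```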
https://api.github.com/repos/huggingface/datasets/issues/325
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/325/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/325/comments
https://api.github.com/repos/huggingface/datasets/issues/325/events
https://github.com/huggingface/datasets/pull/325
647,601,592
MDExOlB1bGxSZXF1ZXN0NDQxNTk3NTgw
325
Add SQuADShifts dataset
[]
closed
false
null
1
2020-06-29T19:11:16Z
2020-06-30T17:07:31Z
2020-06-30T17:07:31Z
null
This PR adds the four new variants of the SQuAD dataset used in [The Effect of Natural Distribution Shift on Question Answering Models](https://arxiv.org/abs/2004.14444) to facilitate evaluating model robustness to distribution shift.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/325/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/325/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/325.diff", "html_url": "https://github.com/huggingface/datasets/pull/325", "merged_at": "2020-06-30T17:07:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/325.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/325" }
true
[ "Very cool to have this dataset, thank you for adding it :)" ]
https://api.github.com/repos/huggingface/datasets/issues/176
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/176/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/176/comments
https://api.github.com/repos/huggingface/datasets/issues/176/events
https://github.com/huggingface/datasets/pull/176
621,934,638
MDExOlB1bGxSZXF1ZXN0NDIwODkzNDky
176
[Tests] Refactor MockDownloadManager
[]
closed
false
null
0
2020-05-20T17:07:36Z
2020-05-20T18:17:19Z
2020-05-20T18:17:18Z
null
Clean mock download manager class. The print function was not of much help I think. We should think about adding a command that creates the dummy folder structure for the user.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/176/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/176/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/176.diff", "html_url": "https://github.com/huggingface/datasets/pull/176", "merged_at": "2020-05-20T18:17:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/176.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/176" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5924/comments
https://api.github.com/repos/huggingface/datasets/issues/5924/events
https://github.com/huggingface/datasets/pull/5924
1,738,889,236
PR_kwDODunzps5SCiFv
5,924
Add parallel module using joblib for Spark
[]
closed
false
null
7
2023-06-02T22:25:25Z
2023-06-14T10:25:10Z
2023-06-14T10:15:46Z
null
Discussion in https://github.com/huggingface/datasets/issues/5798
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5924/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5924/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5924.diff", "html_url": "https://github.com/huggingface/datasets/pull/5924", "merged_at": "2023-06-14T10:15:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/5924.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5924" }
true
[ "Hi @lhoestq, I added the `parallel` part according to the discussion we had. Could you take a look to see if this is aligned with your proposal?\r\n\r\nMeanwhile I'm working on adding a `parallel_backend` parameter to `load_datasets` so that it can be used like:\r\n```python\r\nwith parallel_backend('spark', steps=['downloading']) as backend:\r\n ds = load_dataset(..., parallel_backend=backend)\r\n```\r\nwhere `parallel_backend` is a `ParallelBackend` class.", "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq Thanks for the comments!\r\nWith your suggestion, no changes made to `load_dataset` and I validated that downloading with spark is working now with this:\r\n```py\r\nwith parallel_backend('spark', steps=[\"download\"]):\r\n dataset = load_dataset(..., num_proc=2)\r\n```", "@lhoestq Can a maintainer help trigger the tests again?\r\n> One idea is to decorate the download method to set the current global step to \"download\", and then only use joblib if the current step is one of the steps provided in parallel_backend.\r\n\r\nYes I think this is doable in a subsequent PR.\r\nFor throwing `NotImplementedError` I also think it can be done in a subsequent PR, because I'm not sure if `Dataset.map` is the only function that a user would expect to run using `with parallel_backend`.", "Just triggered the tests :)\r\n\r\n> Yes I think this is doable in a subsequent PR.\r\nFor throwing NotImplementedError I also think it can be done in a subsequent PR, because I'm not sure if Dataset.map is the only function that a user would expect to run using with parallel_backend.\r\n\r\nI think any Dataset method that has a `num_proc` argument: Dataset.map (the other methods like filter or cast or based on map), and later we can see for the to_xxx methods (to_csv, to_parquet, etc.)", "Hi maintainers, I've just addressed most of the comments, please take another look, thank you.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008422 / 0.011353 (-0.002931) | 0.005658 / 0.011008 (-0.005350) | 0.135372 / 0.038508 (0.096864) | 0.044766 / 0.023109 (0.021657) | 0.417876 / 0.275898 (0.141978) | 0.462785 / 0.323480 (0.139305) | 0.005485 / 0.007986 (-0.002501) | 0.005640 / 0.004328 (0.001311) | 0.105020 / 0.004250 (0.100770) | 0.049114 / 0.037052 
(0.012062) | 0.490450 / 0.258489 (0.231961) | 0.467693 / 0.293841 (0.173852) | 0.050929 / 0.128546 (-0.077617) | 0.014644 / 0.075646 (-0.061002) | 0.452373 / 0.419271 (0.033101) | 0.074897 / 0.043533 (0.031364) | 0.425816 / 0.255139 (0.170677) | 0.420415 / 0.283200 (0.137215) | 0.134121 / 0.141683 (-0.007561) | 1.927744 / 1.452155 (0.475589) | 2.014417 / 1.492716 (0.521701) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254811 / 0.018006 (0.236805) | 0.550011 / 0.000490 (0.549521) | 0.004913 / 0.000200 (0.004714) | 0.000117 / 0.000054 (0.000062) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032644 / 0.037411 (-0.004768) | 0.135672 / 0.014526 (0.121146) | 0.158984 / 0.176557 (-0.017572) | 0.218267 / 0.737135 (-0.518869) | 0.150348 / 0.296338 (-0.145991) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.625723 / 0.215209 (0.410514) | 6.247559 / 2.077655 (4.169905) | 2.626785 / 1.504120 (1.122666) | 2.195224 / 1.541195 (0.654030) | 2.232140 / 1.468490 (0.763650) | 0.943082 / 4.584777 (-3.641695) | 5.799262 / 3.745712 (2.053550) | 2.849411 / 5.269862 (-2.420450) | 1.744160 / 4.565676 (-2.821516) | 0.119056 / 0.424275 (-0.305219) | 0.014233 / 0.007607 (0.006626) | 0.795238 / 0.226044 (0.569194) | 7.569586 / 2.268929 (5.300657) | 3.179481 / 55.444624 (-52.265143) | 2.519772 / 6.876477 (-4.356704) | 2.714570 / 2.142072 (0.572498) | 1.107197 / 4.805227 (-3.698030) | 0.229986 / 6.500664 (-6.270678) | 0.087993 / 0.075469 (0.012524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.535610 / 1.841788 (-0.306178) | 18.639369 / 8.074308 (10.565061) | 21.081844 / 10.191392 (10.890452) | 0.253247 / 0.680424 (-0.427177) | 0.026711 / 0.534201 (-0.507490) | 0.503790 / 0.579283 (-0.075493) | 0.600124 / 0.434364 (0.165760) | 0.617944 / 0.540337 (0.077607) | 0.766947 / 1.386936 (-0.619989) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | 
read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007885 / 0.011353 (-0.003468) | 0.004761 / 0.011008 (-0.006248) | 0.097995 / 0.038508 (0.059487) | 0.033624 / 0.023109 (0.010515) | 0.504307 / 0.275898 (0.228409) | 0.534803 / 0.323480 (0.211323) | 0.006048 / 0.007986 (-0.001937) | 0.005042 / 0.004328 (0.000714) | 0.102288 / 0.004250 (0.098038) | 0.048695 / 0.037052 (0.011643) | 0.559086 / 0.258489 (0.300597) | 0.553233 / 0.293841 (0.259392) | 0.044596 / 0.128546 (-0.083950) | 0.013696 / 0.075646 (-0.061950) | 0.109875 / 0.419271 (-0.309397) | 0.059993 / 0.043533 (0.016460) | 0.485579 / 0.255139 (0.230440) | 0.519835 / 0.283200 (0.236635) | 0.123504 / 0.141683 (-0.018179) | 1.820506 / 1.452155 (0.368351) | 1.963448 / 1.492716 (0.470732) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292663 / 0.018006 (0.274656) | 0.557783 / 0.000490 (0.557293) | 0.001330 / 0.000200 (0.001130) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036890 / 0.037411 (-0.000522) | 0.140373 / 0.014526 (0.125847) | 0.140176 / 0.176557 (-0.036381) | 0.237378 / 0.737135 (-0.499757) | 0.160186 / 0.296338 (-0.136152) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.673599 / 0.215209 (0.458390) | 6.510280 / 2.077655 (4.432625) | 2.981617 / 1.504120 (1.477497) | 2.684664 / 1.541195 (1.143469) | 2.760471 / 1.468490 (1.291981) | 0.975413 / 4.584777 (-3.609364) | 5.708933 / 3.745712 (1.963220) | 2.772069 / 5.269862 (-2.497793) | 1.763627 / 4.565676 (-2.802049) | 0.111632 / 0.424275 (-0.312643) | 0.013223 / 0.007607 (0.005616) | 0.791545 / 0.226044 (0.565500) | 8.063287 / 2.268929 (5.794359) | 3.671920 / 55.444624 (-51.772704) | 3.057248 / 6.876477 (-3.819229) | 3.083569 / 2.142072 (0.941497) | 1.118136 / 4.805227 (-3.687092) | 0.214655 / 6.500664 (-6.286009) | 0.083074 / 0.075469 (0.007605) |\n\n### Benchmark: 
benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.761731 / 1.841788 (-0.080056) | 18.874200 / 8.074308 (10.799892) | 22.383693 / 10.191392 (12.192301) | 0.240292 / 0.680424 (-0.440132) | 0.028850 / 0.534201 (-0.505351) | 0.557334 / 0.579283 (-0.021949) | 0.627732 / 0.434364 (0.193369) | 0.634484 / 0.540337 (0.094146) | 0.767372 / 1.386936 (-0.619564) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#accaaf2e69fbb5dc5e50229d2eb1591b8ad982b6 \"CML watermark\")\n" ]
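The usage shape validated in the PR comments above, as a sketch; the exact signature at merge time may differ (treat the `steps` argument and module path as provisional):

```python
from datasets import load_dataset
from datasets.parallel import parallel_backend  # module added by this PR

# Run the download step of load_dataset through joblib's Spark backend.
with parallel_backend("spark", steps=["download"]):
    ds = load_dataset("some/dataset", num_proc=2)  # hypothetical dataset id
```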
https://api.github.com/repos/huggingface/datasets/issues/5221
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5221/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5221/comments
https://api.github.com/repos/huggingface/datasets/issues/5221/events
https://github.com/huggingface/datasets/issues/5221
1,442,309,094
I_kwDODunzps5V9-Pm
5,221
Cannot push
[]
closed
false
null
2
2022-11-09T15:32:05Z
2022-11-10T18:11:21Z
2022-11-10T18:11:11Z
null
### Describe the bug I am facing an issue when I try to push a tar.gz file of around 11GB to the Hub. ``` (venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 ‹main●› ╰─$ du -sh * 4.0K README.md 13G data 516K test.jsonl 18M train.jsonl 4.0K ulaanbal_v0.py 11G ulaanbal_v0.tar.gz 452K validation.jsonl (venv) ╭─laptop@laptop~/PersonalProjects/data/ulaanbal_v0 ‹main●› ╰─$ git add ulaanbal_v0.tar.gz && git commit -m 'large version' (venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 ‹main●› ╰─$ git push EOFoading LFS objects: 0% (0/1), 0 B | 0 B/s Uploading LFS objects: 0% (0/1), 0 B | 0 B/s, done. error: failed to push some refs to 'https://huggingface.co/datasets/bayartsogt/ulaanbal_v0' ``` I have already tried pushing a small version of this and it was working fine, so my guess is that it is because of the big file. I ran the following before the commit: ``` ╰─$ git lfs install ╰─$ huggingface-cli lfs-enable-largefiles . ``` ### Steps to reproduce the bug Create a private dataset on Hugging Face and push a ~12GB tar.gz file. ### Expected behavior The push succeeds with no issue. ### Environment info - `datasets` version: 2.6.1 - Platform: Darwin-21.6.0-x86_64-i386-64bit - Python version: 3.7.11 - PyArrow version: 10.0.0 - Pandas version: 1.3.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5221/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5221/timeline
null
completed
null
null
false
[ "Did you run `huggingface-cli lfs-enable-largefiles` before committing or before adding ? Maybe you can try before adding\r\n\r\nAnyway I'd encourage you to split your data into several TAR archives if possible, this way the dataset can loaded faster using multiprocessing (by giving each process a subset of shards to process)", "@lhoestq \r\nThanks for the help!\r\n> Maybe you can try before adding\r\n\r\nIt did not help\r\n\r\nBut I totally got your point about split into multiple TAR archives. It really helped!" ]
https://api.github.com/repos/huggingface/datasets/issues/2310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2310/comments
https://api.github.com/repos/huggingface/datasets/issues/2310/events
https://github.com/huggingface/datasets/pull/2310
875,096,051
MDExOlB1bGxSZXF1ZXN0NjI5NTEwNTg5
2,310
Update README.md
[]
closed
false
null
1
2021-05-04T04:38:01Z
2022-07-06T15:19:58Z
2022-07-06T15:19:58Z
null
Provides a description of the data instances and dataset features.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2310/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2310/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2310.diff", "html_url": "https://github.com/huggingface/datasets/pull/2310", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2310.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2310" }
true
[ "Hi @cryoff, thanks for completing the dataset card.\r\n\r\nNow there is an automatic validation tool to assure that all dataset cards contain all the relevant information. This is the cause of the non-passing test on your Pull Request:\r\n```\r\nFound fields that are not non-empty list of strings: {'annotations_creators': [], 'language_creators': []}\r\n```" ]
https://api.github.com/repos/huggingface/datasets/issues/1980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1980/comments
https://api.github.com/repos/huggingface/datasets/issues/1980/events
https://github.com/huggingface/datasets/pull/1980
821,312,810
MDExOlB1bGxSZXF1ZXN0NTg0MTI1OTUy
1,980
Loading all answers from drop
[]
closed
false
null
2
2021-03-03T17:13:07Z
2021-03-15T11:27:26Z
2021-03-15T11:27:26Z
null
Hello all, I propose this change to the DROP loading script so that all answers are loaded no matter their type. Currently, only "span" answers are loaded, which excludes a significant number of answers from DROP (i.e. "number" and "date"). I updated the script with the version I use for my work. However, I couldn't find a way to verify that everything works when integrated with the datasets repo, since the `load_dataset` method seems to always download the script from GitHub and not local files. Note that 9 items from the train set have no answers, as well as 1 from the validation set. The script I propose simply does not load them. Let me know if there is anything else I can do, Clément
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1980/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1980/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1980.diff", "html_url": "https://github.com/huggingface/datasets/pull/1980", "merged_at": "2021-03-15T11:27:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/1980.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1980" }
true
[ "Nice thanks for the change !\r\nThis looks all good to me\r\n\r\nBefore we merge can you just update the dataset_infos.json file of drop ? You can do it by running\r\n```\r\ndatasets-cli test ./datasets/drop --all_configs --save_infos --ignore_verifications\r\n```", "Done!" ]
https://api.github.com/repos/huggingface/datasets/issues/5540
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5540/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5540/comments
https://api.github.com/repos/huggingface/datasets/issues/5540/events
https://github.com/huggingface/datasets/pull/5540
1,588,438,344
PR_kwDODunzps5KK5qz
5,540
Tutorial for creating a dataset
[]
closed
false
null
2
2023-02-16T22:09:35Z
2023-02-17T18:50:46Z
2023-02-17T18:41:28Z
null
A tutorial for creating datasets based on the folder-based builders and `from_dict` and `from_generator` methods. I've also mentioned loading scripts as a next step, but I think we should keep the tutorial focused on the low-code methods. Let me know what you think! 🙂
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5540/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5540/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5540.diff", "html_url": "https://github.com/huggingface/datasets/pull/5540", "merged_at": "2023-02-17T18:41:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/5540.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5540" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012018 / 0.011353 (0.000665) | 0.006204 / 0.011008 (-0.004804) | 0.134119 / 0.038508 (0.095611) | 0.038436 / 0.023109 (0.015327) | 0.381397 / 0.275898 (0.105499) | 0.456362 / 0.323480 (0.132882) | 0.009826 / 0.007986 (0.001840) | 0.004746 / 0.004328 (0.000417) | 0.103755 / 0.004250 (0.099505) | 0.043867 / 0.037052 (0.006815) | 0.395322 / 0.258489 (0.136833) | 0.475812 / 0.293841 (0.181971) | 0.057865 / 0.128546 (-0.070682) | 0.019919 / 0.075646 (-0.055727) | 0.465343 / 0.419271 (0.046072) | 0.061574 / 0.043533 (0.018041) | 0.371668 / 0.255139 (0.116529) | 0.400375 / 0.283200 (0.117176) | 0.106539 / 0.141683 (-0.035144) | 1.822931 / 1.452155 (0.370776) | 1.875535 / 1.492716 (0.382819) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.013583 / 0.018006 (-0.004423) | 0.535515 / 0.000490 (0.535025) | 0.007920 / 0.000200 (0.007720) | 0.000305 / 0.000054 (0.000250) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030204 / 0.037411 (-0.007207) | 0.131671 / 0.014526 (0.117145) | 0.143977 / 0.176557 (-0.032579) | 0.175498 / 0.737135 (-0.561637) | 0.166134 / 0.296338 (-0.130204) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.630995 / 0.215209 (0.415786) | 6.152275 / 2.077655 (4.074620) | 2.519887 / 
1.504120 (1.015767) | 2.110926 / 1.541195 (0.569732) | 2.207555 / 1.468490 (0.739064) | 1.296197 / 4.584777 (-3.288580) | 5.510619 / 3.745712 (1.764906) | 3.167468 / 5.269862 (-2.102394) | 2.043924 / 4.565676 (-2.521753) | 0.144772 / 0.424275 (-0.279503) | 0.014456 / 0.007607 (0.006848) | 0.783629 / 0.226044 (0.557585) | 7.836962 / 2.268929 (5.568033) | 3.248593 / 55.444624 (-52.196032) | 2.577092 / 6.876477 (-4.299385) | 2.671918 / 2.142072 (0.529846) | 1.471586 / 4.805227 (-3.333641) | 0.251391 / 6.500664 (-6.249273) | 0.091947 / 0.075469 (0.016478) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.594839 / 1.841788 (-0.246949) | 18.250630 / 8.074308 (10.176322) | 23.948781 / 10.191392 (13.757389) | 0.275505 / 0.680424 (-0.404919) | 0.045202 / 0.534201 (-0.488999) | 0.545552 / 0.579283 (-0.033731) | 0.639352 / 0.434364 (0.204989) | 0.666345 / 0.540337 (0.126008) | 0.795614 / 1.386936 (-0.591322) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011234 / 0.011353 (-0.000119) | 0.005983 / 0.011008 (-0.005025) | 0.109144 / 0.038508 (0.070636) | 0.036070 / 0.023109 (0.012961) | 0.429313 / 0.275898 (0.153415) | 0.490615 / 0.323480 (0.167135) | 0.007448 / 0.007986 (-0.000538) | 0.004424 / 0.004328 (0.000095) | 0.097100 / 0.004250 (0.092850) | 0.049719 / 0.037052 (0.012667) | 0.412719 / 0.258489 (0.154230) | 0.485717 / 0.293841 (0.191876) | 0.061168 / 0.128546 (-0.067378) | 0.021510 / 0.075646 (-0.054136) | 0.116598 / 0.419271 (-0.302673) | 0.066116 / 0.043533 (0.022583) | 0.426212 / 0.255139 (0.171073) | 0.448368 / 0.283200 (0.165168) | 0.116003 / 0.141683 (-0.025680) | 1.799329 / 1.452155 (0.347175) | 1.967256 / 1.492716 (0.474540) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214893 / 0.018006 (0.196887) | 0.497843 / 0.000490 (0.497354) | 0.000464 / 0.000200 (0.000264) | 0.000094 / 0.000054 (0.000039) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031758 / 0.037411 (-0.005653) | 0.131182 / 0.014526 (0.116656) | 0.141251 / 0.176557 (-0.035305) | 0.186526 / 0.737135 (-0.550609) | 0.142975 / 0.296338 (-0.153363) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.662094 / 0.215209 (0.446885) | 6.664841 / 2.077655 (4.587186) | 2.690613 / 1.504120 (1.186493) | 2.305399 / 1.541195 (0.764205) | 2.383697 / 1.468490 (0.915207) | 1.280692 / 4.584777 (-3.304085) | 5.629215 / 3.745712 (1.883503) | 5.007083 / 5.269862 (-0.262778) | 2.482163 / 4.565676 (-2.083513) | 0.147662 / 0.424275 (-0.276613) | 0.017770 / 0.007607 (0.010163) | 0.818380 / 0.226044 (0.592335) | 8.006521 / 2.268929 (5.737592) | 3.472262 / 55.444624 (-51.972363) | 2.709550 / 6.876477 (-4.166926) | 2.775138 / 2.142072 (0.633066) | 1.570545 / 4.805227 (-3.234683) | 0.266323 / 6.500664 (-6.234341) | 0.090591 / 0.075469 (0.015122) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.657927 / 1.841788 (-0.183861) | 18.448981 / 8.074308 (10.374673) | 20.336909 / 10.191392 (10.145517) | 0.230322 / 0.680424 (-0.450102) | 0.025972 / 0.534201 (-0.508229) | 0.561361 / 0.579283 (-0.017922) | 0.623758 / 0.434364 (0.189394) | 0.664120 / 0.540337 (0.123783) | 0.763144 / 1.386936 (-0.623792) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#29de6179766418c937fb33b0cc8803ec24a39e9e \"CML watermark\")\n" ]
https://api.github.com/repos/huggingface/datasets/issues/5814
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5814/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5814/comments
https://api.github.com/repos/huggingface/datasets/issues/5814/events
https://github.com/huggingface/datasets/pull/5814
1,693,216,778
PR_kwDODunzps5PoOQ9
5,814
Repro windows crash
[]
open
false
null
1
2023-05-02T23:30:18Z
2023-05-02T23:47:07Z
null
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5814/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5814/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5814.diff", "html_url": "https://github.com/huggingface/datasets/pull/5814", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5814.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5814" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5814). All of your documentation changes will be reflected on that endpoint." ]
https://api.github.com/repos/huggingface/datasets/issues/5642
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5642/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5642/comments
https://api.github.com/repos/huggingface/datasets/issues/5642/events
https://github.com/huggingface/datasets/pull/5642
1,626,043,177
PR_kwDODunzps5MIjw9
5,642
Bump hfh to 0.11.0
[]
closed
false
null
6
2023-03-15T18:26:07Z
2023-03-20T12:34:09Z
2023-03-20T12:26:58Z
null
To fix errors like ``` requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/... ``` (e.g. from this [failing CI](https://github.com/huggingface/datasets/actions/runs/4428956210/jobs/7769160997)). 0.11.0 is the current minimum version in `transformers`. Around 5% of users are currently using versions `<0.11.0`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5642/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5642/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5642.diff", "html_url": "https://github.com/huggingface/datasets/pull/5642", "merged_at": "2023-03-20T12:26:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/5642.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5642" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006334 / 0.011353 (-0.005018) | 0.004447 / 0.011008 (-0.006561) | 0.099287 / 0.038508 (0.060779) | 0.027426 / 0.023109 (0.004317) | 0.322638 / 0.275898 (0.046740) | 0.370501 / 0.323480 (0.047021) | 0.004775 / 0.007986 (-0.003210) | 0.003289 / 0.004328 (-0.001040) | 0.076531 / 0.004250 (0.072280) | 0.037485 / 0.037052 (0.000432) | 0.335634 / 0.258489 (0.077145) | 0.384031 / 0.293841 (0.090190) | 0.031258 / 0.128546 (-0.097288) | 0.011619 / 0.075646 (-0.064027) | 0.326309 / 0.419271 (-0.092963) | 0.042513 / 0.043533 (-0.001020) | 0.340817 / 0.255139 (0.085678) | 0.369846 / 0.283200 (0.086646) | 0.084904 / 0.141683 (-0.056779) | 1.481739 / 1.452155 (0.029584) | 1.566593 / 1.492716 (0.073877) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.186424 / 0.018006 (0.168418) | 0.400879 / 0.000490 (0.400389) | 0.003520 / 0.000200 (0.003320) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023287 / 0.037411 (-0.014124) | 0.097767 / 0.014526 (0.083241) | 0.103271 / 0.176557 (-0.073286) | 0.165414 / 0.737135 (-0.571722) | 0.106437 / 0.296338 (-0.189901) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422711 / 0.215209 (0.207502) | 4.221382 / 2.077655 (2.143727) | 
1.906807 / 1.504120 (0.402687) | 1.709595 / 1.541195 (0.168400) | 1.720452 / 1.468490 (0.251962) | 0.699477 / 4.584777 (-3.885300) | 3.415840 / 3.745712 (-0.329873) | 2.835669 / 5.269862 (-2.434192) | 1.501775 / 4.565676 (-3.063901) | 0.082896 / 0.424275 (-0.341379) | 0.012855 / 0.007607 (0.005248) | 0.514373 / 0.226044 (0.288329) | 5.190000 / 2.268929 (2.921071) | 2.302539 / 55.444624 (-53.142086) | 1.963410 / 6.876477 (-4.913067) | 2.020944 / 2.142072 (-0.121128) | 0.805919 / 4.805227 (-3.999308) | 0.150604 / 6.500664 (-6.350060) | 0.065977 / 0.075469 (-0.009492) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206487 / 1.841788 (-0.635300) | 13.631513 / 8.074308 (5.557205) | 13.800258 / 10.191392 (3.608866) | 0.146914 / 0.680424 (-0.533509) | 0.016454 / 0.534201 (-0.517747) | 0.377752 / 0.579283 (-0.201532) | 0.384312 / 0.434364 (-0.050052) | 0.434912 / 0.540337 (-0.105425) | 0.522507 / 1.386936 (-0.864429) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006328 / 0.011353 (-0.005025) | 0.004406 / 0.011008 (-0.006602) | 0.077951 / 0.038508 (0.039443) | 0.026716 / 0.023109 (0.003607) | 0.337303 / 0.275898 (0.061405) | 0.372036 / 0.323480 (0.048556) | 0.004800 / 0.007986 (-0.003185) | 0.003153 / 0.004328 (-0.001175) | 0.076823 / 0.004250 (0.072573) | 0.035873 / 0.037052 (-0.001179) | 0.340243 / 0.258489 (0.081754) | 0.380183 / 0.293841 (0.086342) | 0.032185 / 0.128546 (-0.096361) | 0.011545 / 0.075646 (-0.064101) | 0.086887 / 0.419271 (-0.332384) | 0.041560 / 0.043533 (-0.001973) | 0.338716 / 0.255139 (0.083577) | 0.363080 / 0.283200 (0.079881) | 0.088375 / 0.141683 (-0.053308) | 1.499004 / 1.452155 (0.046850) | 1.585904 / 1.492716 (0.093188) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211645 / 0.018006 (0.193639) | 0.403707 / 0.000490 (0.403218) | 0.000415 / 0.000200 (0.000215) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024972 / 0.037411 (-0.012440) | 0.097996 / 0.014526 (0.083470) | 0.105941 / 0.176557 (-0.070616) | 0.155521 / 0.737135 (-0.581615) | 0.108246 / 0.296338 (-0.188092) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442316 / 0.215209 (0.227107) | 4.417977 / 2.077655 (2.340322) | 2.078324 / 1.504120 (0.574205) | 1.863678 / 1.541195 (0.322483) | 1.917149 / 1.468490 (0.448659) | 0.697628 / 4.584777 (-3.887149) | 3.412810 / 3.745712 (-0.332902) | 1.866473 / 5.269862 (-3.403389) | 1.155923 / 4.565676 (-3.409754) | 0.082831 / 0.424275 (-0.341444) | 0.012367 / 0.007607 (0.004760) | 0.540018 / 0.226044 (0.313974) | 5.420472 / 2.268929 (3.151544) | 2.508540 / 55.444624 (-52.936084) | 2.166397 / 6.876477 (-4.710080) | 2.153486 / 2.142072 (0.011414) | 0.804860 / 4.805227 (-4.000367) | 0.151178 / 6.500664 (-6.349486) | 0.067870 / 0.075469 (-0.007599) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.310387 / 1.841788 (-0.531400) | 13.908916 / 8.074308 (5.834608) | 14.136895 / 10.191392 (3.945503) | 0.139389 / 0.680424 (-0.541035) | 0.016687 / 0.534201 (-0.517514) | 0.379624 / 0.579283 (-0.199659) | 0.382634 / 0.434364 (-0.051730) | 0.439632 / 0.540337 (-0.100706) | 0.524913 / 1.386936 (-0.862023) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f8f2143b4ed39b58ed415029e7838d767662da91 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006365 / 0.011353 (-0.004988) | 0.004457 / 0.011008 (-0.006551) | 0.097989 / 0.038508 (0.059481) | 0.027686 / 0.023109 (0.004577) | 0.357412 / 0.275898 (0.081514) | 0.368573 / 0.323480 (0.045093) | 0.004859 / 0.007986 (-0.003127) | 0.003262 / 0.004328 (-0.001066) | 0.076487 / 0.004250 (0.072237) | 0.035526 / 0.037052 (-0.001527) | 0.332862 / 0.258489 (0.074373) | 0.369334 / 0.293841 (0.075493) | 0.030750 / 0.128546 (-0.097796) | 0.011503 / 0.075646 (-0.064143) | 0.323289 / 0.419271 (-0.095982) | 0.042302 / 0.043533 (-0.001231) | 0.334009 / 0.255139 (0.078870) | 0.354150 / 0.283200 (0.070951) | 0.082895 / 0.141683 (-0.058788) | 1.499727 / 1.452155 (0.047572) | 1.574123 / 1.492716 (0.081407) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192583 / 0.018006 (0.174577) | 0.408136 / 0.000490 (0.407646) | 0.001272 / 0.000200 (0.001072) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022883 / 0.037411 (-0.014528) | 0.095710 / 0.014526 (0.081185) | 0.106545 / 0.176557 (-0.070011) | 0.165784 / 0.737135 (-0.571352) | 0.108594 / 0.296338 (-0.187744) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429483 / 0.215209 (0.214274) | 4.292338 / 2.077655 (2.214683) | 1.917759 / 1.504120 (0.413639) | 1.711489 / 1.541195 (0.170294) | 1.735668 / 1.468490 (0.267178) | 0.707602 / 4.584777 (-3.877175) | 3.369643 / 3.745712 (-0.376070) | 1.874517 / 5.269862 (-3.395344) | 1.248560 / 4.565676 (-3.317117) | 0.083247 / 0.424275 (-0.341028) | 0.012606 / 0.007607 (0.004999) | 0.519342 / 0.226044 (0.293297) | 5.225462 / 2.268929 (2.956533) | 2.433230 / 55.444624 (-53.011394) | 2.006005 / 6.876477 (-4.870471) | 2.093156 / 2.142072 (-0.048916) | 0.809372 / 4.805227 (-3.995855) | 0.151691 / 6.500664 (-6.348973) | 0.066680 / 0.075469 (-0.008789) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.226283 / 1.841788 (-0.615505) | 13.604338 / 8.074308 (5.530030) | 13.953245 / 10.191392 (3.761853) | 0.132904 / 0.680424 (-0.547520) | 0.016420 / 0.534201 (-0.517781) | 0.395316 / 0.579283 (-0.183967) | 0.385003 / 0.434364 (-0.049361) | 0.483303 / 0.540337 
(-0.057034) | 0.578459 / 1.386936 (-0.808477) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006218 / 0.011353 (-0.005135) | 0.004451 / 0.011008 (-0.006557) | 0.076892 / 0.038508 (0.038384) | 0.027017 / 0.023109 (0.003908) | 0.356976 / 0.275898 (0.081078) | 0.396083 / 0.323480 (0.072603) | 0.005510 / 0.007986 (-0.002476) | 0.003265 / 0.004328 (-0.001063) | 0.075771 / 0.004250 (0.071521) | 0.037117 / 0.037052 (0.000064) | 0.362181 / 0.258489 (0.103692) | 0.401771 / 0.293841 (0.107931) | 0.032062 / 0.128546 (-0.096484) | 0.011453 / 0.075646 (-0.064194) | 0.085773 / 0.419271 (-0.333498) | 0.041679 / 0.043533 (-0.001854) | 0.355120 / 0.255139 (0.099981) | 0.390170 / 0.283200 (0.106970) | 0.088210 / 0.141683 (-0.053473) | 1.526434 / 1.452155 (0.074279) | 1.586019 / 1.492716 (0.093302) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196836 / 0.018006 (0.178830) | 0.401161 / 0.000490 (0.400671) | 0.002880 / 0.000200 (0.002680) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024445 / 0.037411 (-0.012966) | 0.100187 / 0.014526 (0.085661) | 0.106391 / 0.176557 (-0.070165) | 0.159764 / 0.737135 (-0.577372) | 0.109828 / 0.296338 (-0.186511) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444228 / 0.215209 (0.229018) | 4.420769 / 2.077655 (2.343114) | 2.069437 / 1.504120 (0.565318) | 1.862587 / 1.541195 (0.321392) | 1.934627 
/ 1.468490 (0.466137) | 0.699681 / 4.584777 (-3.885095) | 3.352540 / 3.745712 (-0.393172) | 2.613172 / 5.269862 (-2.656689) | 1.445116 / 4.565676 (-3.120561) | 0.083086 / 0.424275 (-0.341189) | 0.012715 / 0.007607 (0.005108) | 0.537450 / 0.226044 (0.311405) | 5.403052 / 2.268929 (3.134123) | 2.506703 / 55.444624 (-52.937921) | 2.170198 / 6.876477 (-4.706279) | 2.201909 / 2.142072 (0.059837) | 0.799555 / 4.805227 (-4.005672) | 0.150825 / 6.500664 (-6.349839) | 0.067234 / 0.075469 (-0.008235) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293097 / 1.841788 (-0.548691) | 13.817133 / 8.074308 (5.742825) | 14.247231 / 10.191392 (4.055839) | 0.128422 / 0.680424 (-0.552002) | 0.016541 / 0.534201 (-0.517660) | 0.382466 / 0.579283 (-0.196817) | 0.380560 / 0.434364 (-0.053804) | 0.439061 / 0.540337 (-0.101276) | 0.521865 / 1.386936 (-0.865071) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#69e60be438c334919f590512fd664436bd6b3667 \"CML watermark\")\n", "I also took the liberty of removing `_hf_hub_fixes.py` completely :)\r\n\r\n> Do you think this is really necessary and convenient? I would naively say that 5% of the users is not a negligible number...\r\n\r\nI think it's ok. Most of them are using old versions of `datasets` anyway.\r\n\r\n", "merging, but lmk if you have other concerns", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006810 / 0.011353 (-0.004543) | 0.004683 / 0.011008 (-0.006325) | 0.100889 / 0.038508 (0.062381) | 0.030135 / 0.023109 (0.007026) | 0.356407 / 0.275898 (0.080509) | 0.389175 / 0.323480 (0.065695) | 0.005358 / 0.007986 (-0.002627) | 0.004760 / 0.004328 (0.000432) | 0.075904 / 0.004250 (0.071654) | 0.040341 / 0.037052 (0.003288) | 0.357363 / 0.258489 (0.098874) | 0.394185 / 0.293841 (0.100344) | 0.031322 / 0.128546 (-0.097224) | 0.011636 / 0.075646 (-0.064010) | 0.327327 / 0.419271 (-0.091944) | 0.042494 / 0.043533 (-0.001039) | 0.338079 / 0.255139 (0.082940) | 0.363388 / 0.283200 (0.080189) | 0.087102 / 0.141683 (-0.054581) | 1.505686 / 
1.452155 (0.053531) | 1.562112 / 1.492716 (0.069396) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203630 / 0.018006 (0.185624) | 0.425986 / 0.000490 (0.425496) | 0.003786 / 0.000200 (0.003586) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024138 / 0.037411 (-0.013274) | 0.101752 / 0.014526 (0.087226) | 0.105436 / 0.176557 (-0.071121) | 0.165385 / 0.737135 (-0.571750) | 0.114510 / 0.296338 (-0.181828) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.447561 / 0.215209 (0.232352) | 4.449212 / 2.077655 (2.371557) | 2.169472 / 1.504120 (0.665352) | 1.989025 / 1.541195 (0.447831) | 2.036267 / 1.468490 (0.567776) | 0.698647 / 4.584777 (-3.886130) | 3.483281 / 3.745712 (-0.262431) | 1.949306 / 5.269862 (-3.320555) | 1.290313 / 4.565676 (-3.275363) | 0.083079 / 0.424275 (-0.341196) | 0.012759 / 0.007607 (0.005152) | 0.540944 / 0.226044 (0.314899) | 5.473391 / 2.268929 (3.204463) | 2.632037 / 55.444624 (-52.812587) | 2.327396 / 6.876477 (-4.549081) | 2.428880 / 2.142072 (0.286808) | 0.808918 / 4.805227 (-3.996309) | 0.153283 / 6.500664 (-6.347381) | 0.068325 / 0.075469 (-0.007145) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212527 / 1.841788 (-0.629260) | 14.306444 / 8.074308 (6.232136) | 14.904980 / 10.191392 (4.713588) | 0.142796 / 0.680424 (-0.537628) | 0.016829 / 0.534201 (-0.517372) | 0.384806 / 0.579283 (-0.194477) | 0.390505 / 0.434364 (-0.043859) | 0.441734 / 0.540337 (-0.098603) | 0.526159 / 1.386936 (-0.860777) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy 
after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006950 / 0.011353 (-0.004403) | 0.004647 / 0.011008 (-0.006362) | 0.078925 / 0.038508 (0.040417) | 0.028081 / 0.023109 (0.004971) | 0.343420 / 0.275898 (0.067522) | 0.380567 / 0.323480 (0.057087) | 0.005286 / 0.007986 (-0.002700) | 0.004816 / 0.004328 (0.000487) | 0.077332 / 0.004250 (0.073081) | 0.042131 / 0.037052 (0.005078) | 0.345371 / 0.258489 (0.086882) | 0.390232 / 0.293841 (0.096392) | 0.032395 / 0.128546 (-0.096152) | 0.011669 / 0.075646 (-0.063978) | 0.087649 / 0.419271 (-0.331622) | 0.042465 / 0.043533 (-0.001068) | 0.342863 / 0.255139 (0.087724) | 0.368947 / 0.283200 (0.085748) | 0.091725 / 0.141683 (-0.049958) | 1.477435 / 1.452155 (0.025280) | 1.563449 / 1.492716 (0.070733) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208016 / 0.018006 (0.190010) | 0.428387 / 0.000490 (0.427898) | 0.000443 / 0.000200 (0.000243) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026963 / 0.037411 (-0.010449) | 0.103854 / 0.014526 (0.089328) | 0.109068 / 0.176557 (-0.067488) | 0.160107 / 0.737135 (-0.577028) | 0.112843 / 0.296338 (-0.183496) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437161 / 0.215209 (0.221952) | 4.396178 / 2.077655 (2.318523) | 2.067597 / 1.504120 (0.563477) | 1.875247 / 1.541195 (0.334053) | 1.962451 / 1.468490 (0.493961) | 0.701427 / 4.584777 (-3.883350) | 3.459564 / 3.745712 (-0.286148) | 1.959482 / 5.269862 (-3.310380) | 1.191866 / 4.565676 (-3.373810) | 0.083243 / 0.424275 (-0.341032) | 0.012740 / 0.007607 (0.005133) | 0.535236 / 0.226044 (0.309191) | 5.351715 / 2.268929 (3.082786) | 2.490868 / 55.444624 (-52.953756) | 2.195680 / 6.876477 (-4.680797) | 2.233854 / 2.142072 (0.091781) | 0.809041 / 4.805227 (-3.996187) | 0.151498 / 6.500664 (-6.349166) | 0.068297 / 0.075469 (-0.007172) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303596 / 1.841788 
(-0.538192) | 14.712746 / 8.074308 (6.638438) | 14.778412 / 10.191392 (4.587020) | 0.147093 / 0.680424 (-0.533331) | 0.017105 / 0.534201 (-0.517096) | 0.381687 / 0.579283 (-0.197596) | 0.402435 / 0.434364 (-0.031929) | 0.453538 / 0.540337 (-0.086800) | 0.538866 / 1.386936 (-0.848070) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#10f637c3a598c8042865b31f779e315a3da5337e \"CML watermark\")\n" ]
https://api.github.com/repos/huggingface/datasets/issues/1588
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1588/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1588/comments
https://api.github.com/repos/huggingface/datasets/issues/1588/events
https://github.com/huggingface/datasets/pull/1588
769,068,227
MDExOlB1bGxSZXF1ZXN0NTQxMjg3OTcz
1,588
Modified hind encorp
[]
closed
false
null
1
2020-12-16T16:28:14Z
2020-12-16T22:41:53Z
2020-12-16T17:20:28Z
null
Description added, unnecessary comments removed from the .py file, and README.md reformatted. @lhoestq, for #1584.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1588/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1588/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1588.diff", "html_url": "https://github.com/huggingface/datasets/pull/1588", "merged_at": "2020-12-16T17:20:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/1588.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1588" }
true
[ "welcome, awesome " ]
https://api.github.com/repos/huggingface/datasets/issues/2285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2285/comments
https://api.github.com/repos/huggingface/datasets/issues/2285/events
https://github.com/huggingface/datasets/issues/2285
871,005,236
MDU6SXNzdWU4NzEwMDUyMzY=
2,285
Help understanding how to build a dataset for language modeling as with the old TextDataset
[]
closed
false
null
2
2021-04-29T13:16:45Z
2021-05-19T07:22:45Z
2021-05-19T07:22:39Z
null
Hello, I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line exceeds the usual 512-token limit of most tokenizers. I would like to understand the process for building a text dataset that tokenizes each line, after first splitting the documents in the dataset into lines of a "tokenizable" size, as the old TextDataset class did, where you only had to do the following and a tokenized dataset without text loss would be available to pass to a DataCollator: ``` model_checkpoint = 'distilbert-base-uncased' from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) from transformers import TextDataset dataset = TextDataset( tokenizer=tokenizer, file_path="path/to/text_file.txt", block_size=512, ) ``` For now, what I have is the following, which, of course, throws an error because each line is longer than the maximum block size in the tokenizer: ``` import datasets dataset = datasets.load_dataset('path/to/text_file.txt') model_checkpoint = 'distilbert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) def tokenize_function(examples): return tokenizer(examples["text"]) tokenized_datasets = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"]) tokenized_datasets ``` So what would be the "standard" way of creating a dataset the way it was done before? Thank you very much for the help :))
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2285/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2285/timeline
null
completed
null
null
false
[ "\r\nI received an answer for this question on the HuggingFace Datasets forum by @lhoestq\r\n\r\nHi !\r\n\r\nIf you want to tokenize line by line, you can use this:\r\n\r\n```\r\nmax_seq_length = 512\r\nnum_proc = 4\r\n\r\ndef tokenize_function(examples):\r\n    # Remove empty lines\r\n    examples[\"text\"] = [line for line in examples[\"text\"] if len(line) > 0 and not line.isspace()]\r\n    return tokenizer(\r\n        examples[\"text\"],\r\n        truncation=True,\r\n        max_length=max_seq_length,\r\n    )\r\n\r\ntokenized_dataset = dataset.map(\r\n    tokenize_function,\r\n    batched=True,\r\n    num_proc=num_proc,\r\n    remove_columns=[\"text\"],\r\n)\r\n```\r\n\r\nThough the TextDataset was doing a different processing by concatenating all the texts and building blocks of size 512. If you need this behavior, then you must apply an additional map function after the tokenization:\r\n\r\n```\r\n# Main data processing function that will concatenate all texts from\r\n# our dataset and generate chunks of max_seq_length.\r\ndef group_texts(examples):\r\n    # Concatenate all texts.\r\n    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\n    total_length = len(concatenated_examples[list(examples.keys())[0]])\r\n    # We drop the small remainder, we could add padding if the model supported it instead of this drop,\r\n    # you can customize this part to your needs.\r\n    total_length = (total_length // max_seq_length) * max_seq_length\r\n    # Split by chunks of max_len.\r\n    result = {\r\n        k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]\r\n        for k, t in concatenated_examples.items()\r\n    }\r\n    return result\r\n\r\n# Note that with `batched=True`, this map processes 1,000 texts together,\r\n# so group_texts throws away a remainder for each of those groups of 1,000 texts.\r\n# You can adjust that batch_size here but a higher value might be slower to preprocess.\r\n\r\ntokenized_dataset = tokenized_dataset.map(\r\n    group_texts,\r\n    batched=True,\r\n    num_proc=num_proc,\r\n)\r\n```\r\n\r\nThis code comes from the processing of the run_mlm.py example script of transformers\r\n\r\n", "Resolved" ]
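For reference, the recipe from this thread can be assembled into one self-contained script. This is a minimal sketch rather than an official example: the file path is a placeholder, the checkpoint is the `distilbert-base-uncased` one from the issue, and the block size matches the old TextDataset call above.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

max_seq_length = 512

# Load the raw text file; the "text" loader yields one example per line.
dataset = load_dataset("text", data_files="path/to/text_file.txt", split="train")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize_function(examples):
    # Drop empty lines, then tokenize WITHOUT truncation so no text is lost.
    lines = [line for line in examples["text"] if line and not line.isspace()]
    return tokenizer(lines)

def group_texts(examples):
    # Concatenate all token lists and re-split them into fixed-size blocks,
    # mirroring what TextDataset did with block_size=512.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = (len(concatenated["input_ids"]) // max_seq_length) * max_seq_length
    return {
        k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
        for k, t in concatenated.items()
    }

lm_dataset = dataset.map(
    tokenize_function, batched=True, remove_columns=["text"]
).map(group_texts, batched=True)

print(len(lm_dataset), len(lm_dataset[0]["input_ids"]))  # every example is 512 tokens
```

The key design choice, taken from the answer above, is to tokenize without truncation first and only then re-chunk the concatenated token stream, so long documents are split into blocks instead of being cut off.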
https://api.github.com/repos/huggingface/datasets/issues/6025
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6025/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6025/comments
https://api.github.com/repos/huggingface/datasets/issues/6025/events
https://github.com/huggingface/datasets/issues/6025
1,801,852,601
I_kwDODunzps5rZha5
6,025
Using a dataset for a use other than it was intended for.
[]
closed
false
null
1
2023-07-12T22:33:17Z
2023-07-13T13:57:36Z
2023-07-13T13:57:36Z
null
### Describe the bug Hi, I want to use the rotten tomatoes dataset but for a task other than classification, but when I interleave the dataset, it throws ```'ValueError: Column label is not present in features.'```. It seems that the label_col must be there in the dataset for some reason? Here is the full stacktrace ``` File "/home/suryahari/Vornoi/tryage-handoff-other-datasets.py", line 276, in create_dataloaders dataset = interleave_datasets(dsfold, stopping_strategy="all_exhausted") File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py", line 134, in interleave_datasets return _interleave_iterable_datasets( File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1833, in _interleave_iterable_datasets info = DatasetInfo.from_merge([d.info for d in datasets]) File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 275, in from_merge dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None] File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 275, in <listcomp> dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None] File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 378, in copy return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) File "<string>", line 20, in __init__ File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 208, in __post_init__ self.task_templates = [ File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 209, in <listcomp> template.align_with_features(self.features) for template in (self.task_templates) File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/tasks/text_classification.py", line 20, in align_with_features raise ValueError(f"Column {self.label_column} is not present in features.") ValueError: Column label is not present in features. ``` ### Steps to reproduce the bug Delete the column `labels` from the `rotten_tomatoes` dataset. Try to interleave it with other datasets. ### Expected behavior Should let me use the dataset with just the `text` field ### Environment info latest datasets library? I don't think this was an issue in earlier versions.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6025/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6025/timeline
null
completed
null
null
false
[ "I've opened a PR with a fix. In the meantime, you can avoid the error by deleting `task_templates` with `dataset.info.task_templates = None` before the `interleave_datasets` call.\r\n` " ]
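Until the fix is released, the workaround from the comment above looks roughly like this. This is a sketch, not the exact code from the report: `ag_news` is just a stand-in for whichever second text dataset is being mixed in.

```python
from datasets import load_dataset, interleave_datasets

# Reproduce the setup from the report: keep only the "text" column.
rt = load_dataset("rotten_tomatoes", split="train").remove_columns("label")
other = load_dataset("ag_news", split="train").remove_columns("label")

# Workaround: drop the stale text-classification task template so that
# DatasetInfo.from_merge no longer looks for the removed "label" column.
rt.info.task_templates = None
other.info.task_templates = None

mixed = interleave_datasets([rt, other], stopping_strategy="all_exhausted")
print(mixed[0])
```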
https://api.github.com/repos/huggingface/datasets/issues/3763
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3763/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3763/comments
https://api.github.com/repos/huggingface/datasets/issues/3763/events
https://github.com/huggingface/datasets/issues/3763
1,145,099,878
I_kwDODunzps5EQNZm
3,763
It's not possible to download the `20200501.pt` dataset
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
2
2022-02-20T18:34:58Z
2022-02-21T12:06:12Z
2022-02-21T09:25:06Z
null
## Describe the bug The dataset `20200501.pt` is broken. The available datasets: https://dumps.wikimedia.org/ptwiki/ ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner') ``` ## Expected results I expect to download the dataset locally. ## Actual results ``` >>> from datasets import load_dataset >>> dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner') Downloading and preparing dataset wikipedia/20200501.pt to /home/jvanz/.cache/huggingface/datasets/wikipedia/20200501.pt/1.0.0/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475... /home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/apache_beam/__init__.py:79: UserWarning: This version of Apache Beam has not been sufficiently tested on Python 3.9. You may encounter bugs or missing features. warnings.warn( 0%| | 0/1 [00:00<?, ?it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset builder_instance.download_and_prepare( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare self._download_and_prepare( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 1245, in _download_and_prepare super()._download_and_prepare( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 661, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/jvanz/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475/wikipedia.py", line 420, in _split_generators downloaded_files = dl_manager.download_and_extract({"info": info_url}) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 307, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 195, in download downloaded_path_or_paths = map_nested( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 260, in map_nested mapped = [ File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 261, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 196, in _single_map_nested return function(data_struct) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 216, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 298, in cached_path output_path = get_from_cache( File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 612, in get_from_cache raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/ptwiki/20200501/dumpstatus.json ``` ## Environment info ``` - `datasets` version: 1.18.3 - Platform: Linux-5.3.18-150300.59.49-default-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow 
version: 6.0.1 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3763/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3763/timeline
null
completed
null
null
false
[ "Hi @jvanz, thanks for reporting.\r\n\r\nPlease note that Wikimedia website does not longer host Wikipedia dumps for so old dates.\r\n\r\nFor a list of accessible dump dates of `pt` Wikipedia, please see: https://dumps.wikimedia.org/ptwiki/\r\n\r\nYou can load for example `20220220` `pt` Wikipedia:\r\n```python\r\ndataset = load_dataset(\"wikipedia\", language=\"pt\", date=\"20220220\", beam_runner=\"DirectRunner\")\r\n```", "> ```python\r\n> dataset = load_dataset(\"wikipedia\", language=\"pt\", date=\"20220220\", beam_runner=\"DirectRunner\")\r\n> ```\r\n\r\nThank you! I did not know that I can do this. I was following the example in the error message when I do not define which language dataset I'm trying to download.\r\n\r\nI've tried something similar changing the date in the `load_dataset` call that I've shared in the bug description. Obviously, it did not work. I need to read the docs more carefully next time. My bad!\r\n\r\nThanks again and sorry for the noise.\r\n\r\n" ]
https://api.github.com/repos/huggingface/datasets/issues/4683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4683/comments
https://api.github.com/repos/huggingface/datasets/issues/4683/events
https://github.com/huggingface/datasets/pull/4683
1,305,443,253
PR_kwDODunzps47cLkm
4,683
Update create dataset card docs
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
1
2022-07-15T00:41:29Z
2022-07-18T17:26:00Z
2022-07-18T13:24:10Z
null
This PR proposes removing the [online dataset card creator](https://huggingface.co/datasets/card-creator/) in favor of simply copy/pasting a template and using the [Datasets Tagger app](https://huggingface.co/spaces/huggingface/datasets-tagging) to generate the tags. The Tagger app provides more guidance by showing all possible values a user can select in the dropdown menus, whereas the online dataset card creator doesn't, which can make it difficult to know what tag values to input. Let me know what you think! :)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4683/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4683/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4683.diff", "html_url": "https://github.com/huggingface/datasets/pull/4683", "merged_at": "2022-07-18T13:24:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/4683.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4683" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/5919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5919/comments
https://api.github.com/repos/huggingface/datasets/issues/5919/events
https://github.com/huggingface/datasets/pull/5919
1,735,519,227
PR_kwDODunzps5R2_EK
5,919
add support for storage_options for load_dataset API
[]
closed
false
null
12
2023-06-01T05:52:32Z
2023-07-18T06:14:32Z
2023-07-17T17:02:00Z
null
To solve the issue in #5880: 1. add S3 support in the link-check step (previously we only checked `http` and `https`); 2. change the `use_auth_token` parameter to `download_config` to support both the `storage_options` and `use_auth_token` parameters when handling (list, open, read, etc.) the remote files; 3. consolidate the check step's duplicated code to make adding or deleting other sources easier.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5919/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5919/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5919.diff", "html_url": "https://github.com/huggingface/datasets/pull/5919", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5919.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5919" }
true
[ "hi @lhoestq,\r\nI saw some errors in my test and found all the failed reasons are `FileNotFoundError` about `test_load_streaming_private_dataset_with_zipped_data` and `test_load_dataset_private_zipped_images` in `test_load.py `, I run pytest on my own Wins and Ubuntu system all the test in `test_load.py ` are succeed. could you help me to check the test environment of our server?\r\n\r\n`2023-06-08T16:50:48.0828281Z FAILED tests/test_load.py::test_load_streaming_private_dataset_with_zipped_data - FileNotFoundError: Couldn't find a dataset script at D:\\a\\datasets\\datasets\\__DUMMY_TRANSFORMERS_USER__\\repo_zipped_txt_data-16862429577813\\repo_zipped_txt_data-16862429577813.py or any data file in the same directory. Couldn't find '__DUMMY_TRANSFORMERS_USER__/repo_zipped_txt_data-16862429577813' on the Hugging Face Hub either: FileNotFoundError: No (supported) data files or dataset script found in __DUMMY_TRANSFORMERS_USER__/repo_zipped_txt_data-16862429577813`\r\n`2023-06-08T16:50:48.0830602Z FAILED tests/test_load.py::test_load_dataset_private_zipped_images[False-False] - FileNotFoundError: Couldn't find a dataset script at D:\\a\\datasets\\datasets\\__DUMMY_TRANSFORMERS_USER__\\repo_zipped_img_data-16862429594168\\repo_zipped_img_data-16862429594168.py or any data file in the same directory. Couldn't find '__DUMMY_TRANSFORMERS_USER__/repo_zipped_img_data-16862429594168' on the Hugging Face Hub either: FileNotFoundError: No (supported) data files or dataset script found in __DUMMY_TRANSFORMERS_USER__/repo_zipped_img_data-16862429594168`", "I just re-ran the CI, hopefully it's fixed", "_The documentation is not available anymore as the PR was closed or merged._", "> I just re-ran the CI, hopefully it's fixed\r\n\r\nI just checked, still has the same error, maybe need someone to fix it", "I think the issue comes from this PR somehow, since the CI fail is related to loading private repositories and this PR touches authentication related code. Let me check what's the issue, and I'll also review your PR later (sorry I don't have a ton of bandwidth atm)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5919). All of your documentation changes will be reflected on that endpoint.", "@lhoestq Hi sorry to bother you, the CI check_code_quality failed and it said `would reformat /home/runner/work/datasets/datasets/src/datasets/download/streaming_download_manager.py` but I cant see any changes when I run `python3 -m black --check tests src benchmarks metrics` and `python3 -m ruff tests src benchmarks metrics` on my own computer, is there any version requirements on the tools? I didn't specific the version.", "I just ran `make style` and pushed the changes.\r\nYou can install the right versions of black and ruff using `pip install -e .[quality]` ;)", "I am working on this issue right now https://github.com/huggingface/datasets/issues/6017 which is strongly connected to your PR, and I might end up cherry-picking some of your commits (keeping attribution of course !). Would you be ok with that ?", "it's totally ok for me, I just wish the S3 File system could support streaming too.\r\n", "\r\nI already adjust the code and test on my local Mac, you can check it now, and you can make any changes to it.", "Closing this PR in favor of https://github.com/huggingface/datasets/pull/6028 which includes your contribution :)" ]
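Although this PR was closed, the last comment notes that the contribution was carried over into #6028, and recent `datasets` releases do accept `storage_options` in `load_dataset`. A rough usage sketch with placeholder bucket and credentials follows; the dict is forwarded to the fsspec filesystem (here `s3fs`, which must be installed for `s3://` paths).

```python
from datasets import load_dataset

# Placeholder credentials; any keyword accepted by s3fs.S3FileSystem works here.
storage_options = {
    "key": "YOUR_AWS_ACCESS_KEY_ID",
    "secret": "YOUR_AWS_SECRET_ACCESS_KEY",
}

# Read a CSV file straight from a (hypothetical) private S3 bucket.
dataset = load_dataset(
    "csv",
    data_files="s3://my-bucket/path/to/data.csv",
    storage_options=storage_options,
)
print(dataset["train"][0])
```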
https://api.github.com/repos/huggingface/datasets/issues/3470
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3470/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3470/comments
https://api.github.com/repos/huggingface/datasets/issues/3470/events
https://github.com/huggingface/datasets/pull/3470
1,086,049,888
PR_kwDODunzps4wJO8t
3,470
Fix rendering of docs
[]
closed
false
null
0
2021-12-21T17:17:01Z
2021-12-22T09:23:47Z
2021-12-22T09:23:47Z
null
Minor fix in docs. Currently, `ClassLabel` docstring rendering is not right.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3470/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3470/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3470.diff", "html_url": "https://github.com/huggingface/datasets/pull/3470", "merged_at": "2021-12-22T09:23:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/3470.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3470" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/860
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/860/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/860/comments
https://api.github.com/repos/huggingface/datasets/issues/860/events
https://github.com/huggingface/datasets/issues/860
744,750,691
MDU6SXNzdWU3NDQ3NTA2OTE=
860
wmt16 cs-en does not download
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
1
2020-11-17T13:45:35Z
2022-10-05T12:27:00Z
2022-10-05T12:26:59Z
null
Hi, I am trying the wmt16 cs-en pair; thanks for the help. This is perhaps similar to the ro-en issue. Thanks!

```
    split="train", n_obs=data_args.n_train) for task in data_args.task}
  File "finetune_t5_trainer.py", line 109, in <dictcomp>
    split="train", n_obs=data_args.n_train) for task in data_args.task}
  File "/home/rabeeh/internship/seq2seq/tasks/tasks.py", line 82, in get_dataset
    dataset = load_dataset("wmt16", self.pair, split=split)
  File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/home/rabeeh/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators
    downloaded_files = dl_manager.download_and_extract(urls_to_download)
  File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download
    num_proc=download_config.num_proc,
  File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
    _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
  File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp>
    _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
  File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested
    mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
  File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp>
    mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
  File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested
    return function(data_struct)
  File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
    use_etag=download_config.use_etag,
  File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/860/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/860/timeline
null
completed
null
null
false
[ "We know host this file, so downloading should be more robust." ]
https://api.github.com/repos/huggingface/datasets/issues/4771
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4771/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4771/comments
https://api.github.com/repos/huggingface/datasets/issues/4771/events
https://github.com/huggingface/datasets/pull/4771
1,322,600,725
PR_kwDODunzps48VjWx
4,771
Remove dummy data generation docs
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
1
2022-07-29T19:20:46Z
2022-08-03T00:04:01Z
2022-08-02T23:50:29Z
null
This PR removes instructions to generate dummy data since that is no longer necessary for datasets that are uploaded to the Hub instead of our GitHub repo. Close #4744
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4771/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4771/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4771.diff", "html_url": "https://github.com/huggingface/datasets/pull/4771", "merged_at": "2022-08-02T23:50:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/4771.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4771" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/2162
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2162/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2162/comments
https://api.github.com/repos/huggingface/datasets/issues/2162/events
https://github.com/huggingface/datasets/issues/2162
849,129,201
MDU6SXNzdWU4NDkxMjkyMDE=
2,162
visualization for cc100 is broken
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
closed
false
null
3
2021-04-02T10:11:13Z
2022-10-05T13:20:24Z
2022-10-05T13:20:24Z
null
Hi, visualization through the dataset viewer for cc100 is broken: https://huggingface.co/datasets/viewer/. Thanks a lot!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2162/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2162/timeline
null
completed
null
null
false
[ "This looks like an issue with the cc100 dataset itself but not sure\r\nDid you try loading cc100 on your machine ?", "Hi\nloading works fine, but the viewer only is broken\nthanks\n\nOn Wed, Apr 7, 2021 at 12:17 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> This looks like an issue with the cc100 dataset itself but not sure\n> Did you try loading cc100 on your machine ?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2162#issuecomment-814793809>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMRUO33JSOYGT6RETWLTHQWNLANCNFSM42IUOR6Q>\n> .\n>\n", "Hi! This visualization tool is deprecated now. The viewer at https://huggingface.co/datasets/cc100 works fine, so I'm closing this issue." ]
https://api.github.com/repos/huggingface/datasets/issues/517
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/517/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/517/comments
https://api.github.com/repos/huggingface/datasets/issues/517/events
https://github.com/huggingface/datasets/issues/517
681,896,944
MDU6SXNzdWU2ODE4OTY5NDQ=
517
add MLDoc dataset
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
2
2020-08-19T14:41:59Z
2021-08-03T05:59:33Z
null
null
Hi, I am recommending that someone add MLDoc, a multilingual news topic classification dataset.
- Here's a link to the GitHub repo: https://github.com/facebookresearch/MLDoc
- and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf

Looks like the dataset contains news stories in multiple languages that can be classified into four hierarchical groups: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets). There are 13 languages: Dutch, French, German, Chinese, Japanese, Russian, Portuguese, Spanish, Latin American Spanish, Italian, Danish, Norwegian, and Swedish.
{ "+1": 4, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/517/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/517/timeline
null
null
null
null
false
[ "Any updates on this?", "This request is still an open issue waiting to be addressed by any community member, @GuillemGSubies." ]