Casting Issue? #40
by FelixLabelle - opened

While streaming FineWeb (see the example below) I get a "casting" error. Is this an issue with the data format itself, and is it something I can address myself?

import json
import pickle

from datasets import load_dataset
from tqdm import tqdm

# PriorityQueue and do_prediction_here are custom helpers defined elsewhere in my script

if __name__ == "__main__":
    SEED = 0
    SAMPLE_BUFFER_SIZE = 5_000
    RECORDS_TO_KEEP = 100_000
    TAKE_SIZE = 10_000_000  # 23,355,019,906 is the max size

    fw = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)
    fw = fw.shuffle(seed=SEED, buffer_size=SAMPLE_BUFFER_SIZE)
    clf = pickle.load(open('dataset_differentiator.pkl', 'rb'))
    priority_queue = PriorityQueue(RECORDS_TO_KEEP, key=lambda x: x['prob_control'])
    for sample in tqdm(fw.take(TAKE_SIZE)):
        # this is the domain prediction model, I can share more code if it seems relevant
        prediction = do_prediction_here(sample)
        priority_queue.add_record(prediction)

    json.dump(priority_queue.get_records(), open('sampled_features_100k.json', 'w'))

Here is the error trace:

176147160it [69:33:58, 811.53it/s]Failed to read file 'hf://datasets/HuggingFaceFW/fineweb@29be36a2e035737f9b2d7e4f0847413ff7b2994b/data/CC-MAIN-2024-18/002_00009.parquet' with error <class 'datasets.table.CastError'>: Couldn't cast
text: string
id: string
dump: string
url: string
date: string
file_path: string
language: string
language_score: double
filter_reason: string
token_count: int64
to
{'text': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None), 'dump': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None), 'date': Value(dtype='string', id=None), 'file_path': Value(dtype='string', id=None), 'language': Value(dtype='string', id=None), 'language_score': Value(dtype='float64', id=None), 'token_count': Value(dtype='int64', id=None)}
because column names don't match
176147212it [69:33:59, 703.35it/s]
Traceback (most recent call last):                                                                                                                                                              
  File "/home/felix_l_labelle/neoberta/dataset_filtering/fineweb_curation.py", line 88, in <module>                                                                                             
    for sample in tqdm(dl):                                                                                                                                                                     
  File "/home/felix_l_labelle/anaconda3/envs/data_processing/lib/python3.11/site-packages/tqdm/std.py", line 1181, in __iter__                                                                  
    for obj in iterable:                                                                                                                                                                        
  File "/home/felix_l_labelle/anaconda3/envs/data_processing/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 631, in __next__                                                
    data = self._next_data()                                                                                                                                                                    
           ^^^^^^^^^^^^^^^^^                                                                                                                                                                    
  File "/home/felix_l_labelle/anaconda3/envs/data_processing/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1346, in _next_data                                             
    return self._process_data(data)                                                                                                                                                             
           ^^^^^^^^^^^^^^^^^^^^^^^^                                                                                                                                                             
  File "/home/felix_l_labelle/anaconda3/envs/data_processing/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1372, in _process_data                                          
    data.reraise()                                                                                                                                                                              
  File "/home/felix_l_labelle/anaconda3/envs/data_processing/lib/python3.11/site-packages/torch/_utils.py", line 704, in reraise                                                                
    raise RuntimeError(msg) from None                                                                                                                                                           
RuntimeError: Caught CastError in DataLoader worker process 12.                                                                                                                                 
Original Traceback (most recent call last):
    data = fetcher.fetch(index)  # type: ignore[possibly-undefined]
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/felix_l_labelle/anaconda3/envs/data_processing/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 41, in fetch
    data = next(self.dataset_iter)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/felix_l_labelle/anaconda3/envs/data_processing/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1368, in __iter__
    yield from self._iter_pytorch()
  File "/home/felix_l_labelle/anaconda3/envs/data_processing/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1303, in _iter_pytorch
    for key, example in ex_iterable:
  File "/home/felix_l_labelle/anaconda3/envs/data_processing/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1044, in __iter__
    yield from islice(self.ex_iterable, self.n)
  File "/home/felix_l_labelle/anaconda3/envs/data_processing/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 987, in __iter__
    for x in self.ex_iterable:
  File "/home/felix_l_labelle/anaconda3/envs/data_processing/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 282, in __iter__
    for key, pa_table in self.generate_tables_fn(**self.kwargs):
  File "/home/felix_l_labelle/anaconda3/envs/data_processing/lib/python3.11/site-packages/datasets/packaged_modules/parquet/parquet.py", line 97, in _generate_tables
    yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/felix_l_labelle/anaconda3/envs/data_processing/lib/python3.11/site-packages/datasets/packaged_modules/parquet/parquet.py", line 75, in _cast_table
    pa_table = table_cast(pa_table, self.info.features.arrow_schema)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/felix_l_labelle/anaconda3/envs/data_processing/lib/python3.11/site-packages/datasets/table.py", line 2295, in table_cast
    return cast_table_to_schema(table, schema)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/felix_l_labelle/anaconda3/envs/data_processing/lib/python3.11/site-packages/datasets/table.py", line 2249, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
text: string
id: string
dump: string
url: string
date: string
file_path: string
language: string
language_score: double
filter_reason: string
token_count: int64
to
{'text': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None), 'dump': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None), 'date': Value(dtype='string', id=None), 'file_path': Value(dtype='string', id=None), 'language': Value(dtype='string', id=None), 'language_score': Value(dtype='float64', id=None), 'token_count': Value(dtype='int64', id=None)}
because column names don't match
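
For reference, the extra field the cast complains about can be checked against the offending shard itself. Below is a minimal sketch (mine, not from the thread) that reads only that file's footer via huggingface_hub's HfFileSystem and pyarrow; the path and revision hash are copied from the error message above, and the @revision path form is an assumption about HfFileSystem:

# Hypothetical sanity check: inspect the schema of the shard named in the error
# without downloading the whole file. Assumes huggingface_hub and pyarrow are installed.
import pyarrow.parquet as pq
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
path = (
    "datasets/HuggingFaceFW/fineweb@29be36a2e035737f9b2d7e4f0847413ff7b2994b"
    "/data/CC-MAIN-2024-18/002_00009.parquet"
)
with fs.open(path, "rb") as f:
    schema = pq.ParquetFile(f).schema_arrow
print(schema)  # the extra 'filter_reason' column should show up here
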
guipenedo
HuggingFaceFW org

Seems to be an issue with an extra field in the latest dump; maybe try excluding it for now or downloading it separately. We will fix it soon.

Perfect, thanks @guipenedo !

In the meantime, for anyone else with this issue, here is what I used to exclude the latest dump. There might be a better way, but this works:

configs = ['CC-MAIN-2021-43', 'CC-MAIN-2015-14', 'CC-MAIN-2019-30', 'CC-MAIN-2016-26', 'CC-MAIN-2020-24', 'CC-MAIN-2017-39', 'CC-MAIN-2019-22', 'CC-MAIN-2016-07', 'CC-MAIN-2022-33', 'CC-MAIN-2020-29', 'CC-MAIN-2015-32', 'CC-MAIN-2019-18', 'CC-MAIN-2021-10', 'CC-MAIN-2018-51', 'CC-MAIN-2020-05', 'CC-MAIN-2017-51', 'CC-MAIN-2021-25', 'CC-MAIN-2018-13', 'CC-MAIN-2019-26', 'CC-MAIN-2018-39', 'CC-MAIN-2019-43', 'CC-MAIN-2017-04', 'CC-MAIN-2023-23', 'CC-MAIN-2016-30', 'CC-MAIN-2019-47', 'CC-MAIN-2022-05', 'CC-MAIN-2017-26', 'CC-MAIN-2014-49', 'CC-MAIN-2023-50', 'CC-MAIN-2021-49', 'CC-MAIN-2019-09', 'CC-MAIN-2015-18', 'CC-MAIN-2018-43', 'CC-MAIN-2014-35', 'CC-MAIN-2023-40', 'CC-MAIN-2019-51', 'CC-MAIN-2018-09', 'CC-MAIN-2017-43', 'CC-MAIN-2019-13', 'CC-MAIN-2014-15', 'CC-MAIN-2018-34', 'CC-MAIN-2020-34', 'CC-MAIN-2020-40', 'CC-MAIN-2015-06', 'CC-MAIN-2018-47', 'CC-MAIN-2014-10', 'CC-MAIN-2014-41', 'CC-MAIN-2014-23', 'CC-MAIN-2020-50', 'CC-MAIN-2022-49', 'CC-MAIN-2018-17', 'CC-MAIN-2013-20', 'CC-MAIN-2018-05', 'CC-MAIN-2017-17', 'CC-MAIN-2016-18', 'CC-MAIN-2019-39', 'CC-MAIN-2017-34', 'CC-MAIN-2017-09', 'CC-MAIN-2023-14', 'CC-MAIN-2020-10', 'CC-MAIN-2016-50', 'CC-MAIN-2022-40', 'CC-MAIN-2015-35', 'CC-MAIN-2021-21', 'CC-MAIN-2015-22', 'CC-MAIN-2018-30', 'CC-MAIN-2015-48', 'CC-MAIN-2017-22', 'CC-MAIN-2017-30', 'CC-MAIN-2018-26', 'CC-MAIN-2020-16', 'CC-MAIN-2016-40', 'CC-MAIN-2022-21', 'CC-MAIN-2015-11', 'CC-MAIN-2018-22', 'CC-MAIN-2019-04', 'CC-MAIN-2016-36', 'CC-MAIN-2021-39', 'CC-MAIN-2014-52', 'CC-MAIN-2017-13', 'CC-MAIN-2017-47', 'CC-MAIN-2023-06', 'CC-MAIN-2022-27', 'CC-MAIN-2015-27', 'CC-MAIN-2014-42', 'CC-MAIN-2020-45', 'CC-MAIN-2016-44', 'CC-MAIN-2016-22', 'CC-MAIN-2021-17', 'CC-MAIN-2019-35', 'CC-MAIN-2021-31', 'CC-MAIN-2015-40', 'CC-MAIN-2013-48', 'CC-MAIN-2021-04']
from datasets import concatenate_datasets, load_dataset

fw = concatenate_datasets([load_dataset("HuggingFaceFW/fineweb", name=config, split="train", streaming=True) for config in configs])
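
If you'd rather keep the latest dump than skip it, another (untested) option along the lines of guipenedo's "download it separately" suggestion is to read its parquet files with the generic parquet loader, which infers the schema from the files themselves, then drop the extra filter_reason column and mix it back in. A rough sketch, assuming hf:// paths are accepted in data_files:

# Hypothetical sketch: load the problematic dump with the generic "parquet" builder,
# drop the extra column, and interleave it with the streams built above.
from datasets import interleave_datasets, load_dataset

latest = load_dataset(
    "parquet",
    data_files="hf://datasets/HuggingFaceFW/fineweb/data/CC-MAIN-2024-18/*.parquet",
    split="train",
    streaming=True,
)
latest = latest.remove_columns("filter_reason")
fw = interleave_datasets([fw, latest])
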

It's not just streaming. If one creates a derivative dataset using datatrove over FineWeb's recent shards, it saves fine but then fails to load back:

python -c "import sys; from datasets import load_dataset; ds=load_dataset('json', data_files='1.jsonl'); ds.save_to_disk("/data/stas/classify/in")
Loading dataset shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [00:00<00:00, 262.80it/s]
Saving the dataset (20/20 shards): 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2247974/2247974 [00:12<00:00, 183273.18 examples/s]

python -c 'import sys; from datasets import load_dataset; ds=load_dataset("/data/stas/classify/in", split="train")'

Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 41/41 [00:00<00:00, 233333.06it/s]
Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 40/40 [00:00<00:00, 15250.63files/s]
Generating train split: 491162 examples [00:44, 9742.48 examples/s]Failed to read file '/data/stas/classify/in/train/cache-281a60427f972477_00000_of_00010.arrow' with error <class 'datasets.table.CastError'>: Couldn't cast
input_ids: list<item: int32>
  child 0, item: int32
attention_mask: list<item: int8>
  child 0, item: int8
-- schema metadata --
huggingface: '{"info": {"features": {"input_ids": {"feature": {"dtype": "' + 189
to
{'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'labels': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)}
because column names don't match
Generating train split: 492235 examples [00:44, 11135.09 examples/s]
Traceback (most recent call last):
  File "/env/lib/conda/ctx-shared-vllm/lib/python3.10/site-packages/datasets/builder.py", line 1997, in _prepare_split_single
    for _, table in generator:
  File "/env/lib/conda/ctx-shared-vllm/lib/python3.10/site-packages/datasets/packaged_modules/arrow/arrow.py", line 67, in _generate_tables
    yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
  File "/env/lib/conda/ctx-shared-vllm/lib/python3.10/site-packages/datasets/packaged_modules/arrow/arrow.py", line 55, in _cast_table
    pa_table = table_cast(pa_table, self.info.features.arrow_schema)
  File "/env/lib/conda/ctx-shared-vllm/lib/python3.10/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/env/lib/conda/ctx-shared-vllm/lib/python3.10/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
input_ids: list<item: int32>
  child 0, item: int32
attention_mask: list<item: int8>
  child 0, item: int8
-- schema metadata --
huggingface: '{"info": {"features": {"input_ids": {"feature": {"dtype": "' + 189
to
{'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'labels': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)}
because column names don't match

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/env/lib/conda/ctx-shared-vllm/lib/python3.10/site-packages/datasets/load.py", line 2616, in load_dataset
    builder_instance.download_and_prepare(
  File "/env/lib/conda/ctx-shared-vllm/lib/python3.10/site-packages/datasets/builder.py", line 1029, in download_and_prepare
    self._download_and_prepare(
  File "/env/lib/conda/ctx-shared-vllm/lib/python3.10/site-packages/datasets/builder.py", line 1124, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/env/lib/conda/ctx-shared-vllm/lib/python3.10/site-packages/datasets/builder.py", line 1884, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/env/lib/conda/ctx-shared-vllm/lib/python3.10/site-packages/datasets/builder.py", line 2040, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
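
Not sure whether it applies to the datatrove pipeline above, but a directory written with save_to_disk is normally reloaded with load_from_disk rather than load_dataset; pointing load_dataset at the save directory makes it re-infer the schema from every .arrow file it finds there (including cache-*.arrow files), which can produce exactly this kind of CastError. A minimal sketch using the same path as above:

# Sketch: reload the save_to_disk output with load_from_disk, which reads the
# saved schema instead of re-scanning the individual .arrow files.
from datasets import load_from_disk

ds = load_from_disk("/data/stas/classify/in")
print(ds)
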
HuggingFaceFW org • edited Jul 16

Removed that column from 2024-18 and reuploaded, should hopefully be fixed now. Let me know if there are still issues.
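
For anyone who wants to verify the fix, a small (hypothetical) check is to stream a couple of samples from the previously failing dump and look at their columns:

# Quick check that CC-MAIN-2024-18 now streams without the CastError.
from datasets import load_dataset

fw = load_dataset("HuggingFaceFW/fineweb", name="CC-MAIN-2024-18", split="train", streaming=True)
for i, sample in enumerate(fw):
    print(sorted(sample.keys()))
    if i >= 1:
        break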
