Individual files are too large to be used in some Parquet readers

#48 · opened by Vectorrent

I wanted to use this dataset in JavaScript, but found that individual FineWeb files are too large to be loaded into memory. This appears to be a JavaScript/WASM limitation, and sadly one I won't be able to easily fix. However, a solution was offered by the maintainers of parquet-wasm (the library I'm using):

Yes, the hard cap on wasm32 memory is 4 GiB, and it looks like that file uses a single gigantic 980k-row row group (decompressed, the first column of that row group alone is 3.7 GB).
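For anyone who wants to verify this on their own copy, here is a minimal pyarrow sketch that prints the row-group layout of a shard; the file name is a placeholder for whichever shard you are testing:

```python
# Sketch: inspect the row-group layout of one downloaded shard with pyarrow.
# "shard.parquet" is a placeholder path, not a real file name in the dataset.
import pyarrow.parquet as pq

pf = pq.ParquetFile("shard.parquet")
meta = pf.metadata
print(f"row groups: {meta.num_row_groups}, total rows: {meta.num_rows}")

for i in range(meta.num_row_groups):
    rg = meta.row_group(i)
    print(f"row group {i}: {rg.num_rows} rows, "
          f"{rg.total_byte_size / 1e9:.2f} GB uncompressed")
```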

The general solution for something like this is to rework the original file by cutting down the size of the row groups (e.g. a row group size of 100k would yield ~10 row groups, each ~370 MB decompressed), since individual columns within row groups are the smallest unit parquet readers can fetch unless the file has a PageIndex (which this one doesn't) AND you're doing offset/limit reads.
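A rough sketch of that rework, assuming a local copy of one shard (both file names below are placeholders, not the dataset's real shard names):

```python
# Sketch: rewrite a shard with 100k-row row groups so each group stays
# well under the 4 GiB wasm32 ceiling.
import pyarrow.parquet as pq

table = pq.read_table("shard.parquet")        # original shard (placeholder name)
pq.write_table(
    table,
    "shard.rechunked.parquet",                # output (placeholder name)
    row_group_size=100_000,                   # ~10 smaller row groups instead of one
)
```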

Interestingly, when I pulled down that file via pyarrow and wrote it back to disk with the write_page_index flag turned on in pyarrow.parquet.write_table, reading the file in offset chunks (one stream per 100k rows) matched the performance I got from resized row groups. This would be a much less obtrusive change for the FineWeb maintainers, since the PageIndex lives in the file's footer (or an entirely separate sidecar file).
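And a sketch of that lighter-weight option, again with placeholder file names; write_page_index is the pyarrow.parquet.write_table argument that emits the column/offset index into the footer, which readers that support offset/limit reads can then use to fetch slices without decompressing a whole row group:

```python
# Sketch: keep the existing row groups but add a PageIndex on rewrite.
import pyarrow.parquet as pq

table = pq.read_table("shard.parquet")        # original shard (placeholder name)
pq.write_table(
    table,
    "shard.page_index.parquet",               # output (placeholder name)
    write_page_index=True,                    # write column index + offset index
)
```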

If anyone is so inclined to make such a change, I would certainly appreciate it! Until then, I just wanted to open this issue for posterity.
