Reproducibility of the work for other languages

#38
by camillop - opened

Hey,

I would like to do a similar job and create a FineWeb dataset for my native language (Italian). Do you think this is feasible with a Jupyter notebook and a very, very powerful machine behind it (like the biggest AWS or GCP have to offer), or is this necessarily big-data stuff where you need to set up an entire infrastructure to handle it? I would be fine with having a 500B/1T-token dataset as a result.

HuggingFaceFW org

It's hard to say. As a starting point you would likely have to go through the entirety of Common Crawl to find the Italian samples, so if you want to process all the dumps, I would say yes, you need proper infrastructure. If you only process 1 or 2 dumps it could be feasible; see the sketch below.
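
For reference, here is a minimal sketch of that single-dump approach using datatrove, the library used to build FineWeb. The dump id, output path, and task count are placeholder assumptions, and the language filter is left at its default threshold; treat this as a starting point rather than the FineWeb recipe, which adds URL, quality, repetition, and deduplication stages on top of this.

```python
# Rough sketch (not the official FineWeb pipeline): stream one Common Crawl
# dump, extract text, and keep only documents detected as Italian.
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.extractors import Trafilatura
from datatrove.pipeline.filters import LanguageFilter
from datatrove.pipeline.readers import WarcReader
from datatrove.pipeline.writers.jsonl import JsonlWriter

DUMP = "CC-MAIN-2024-10"  # placeholder: any Common Crawl dump id

executor = LocalPipelineExecutor(
    pipeline=[
        # read WARC files for this dump directly from Common Crawl's S3 bucket
        WarcReader(f"s3://commoncrawl/crawl-data/{DUMP}/segments/", glob_pattern="*/warc/*"),
        # extract the main text from the raw HTML
        Trafilatura(favour_precision=True),
        # keep only documents whose detected language is Italian
        LanguageFilter(languages=["it"]),
        # write surviving documents as JSONL
        JsonlWriter(f"output/{DUMP}/italian/"),  # placeholder output path
    ],
    tasks=8,  # placeholder: scale with available cores
)

if __name__ == "__main__":
    executor.run()
```

Keep in mind that even on a single big machine, one full dump is on the order of tens of terabytes of compressed WARC data, so the bottleneck is bandwidth and wall-clock time rather than whether you run it from a notebook.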

Thanks, I will try as soon as I have some spare time and update here once done!
