---
annotations_creators:
  - no-annotation
language_creators:
  - crowdsourced
language:
  - ace
  - ban
  - bjn
  - bug
  - gor
  - id
  - jv
  - mis
  - min
  - ms
  - nia
  - su
  - tet
license:
  - cc-by-sa-3.0
  - gfdl
multilinguality:
  - multilingual
source_datasets:
  - Wikipedia-HF
task_categories:
  - text-generation
  - fill-mask
task_ids:
  - language-modeling
  - masked-language-modeling
pretty_name: Wikipedia Archive for Indonesian Languages & Local Languages
tags:
  - Wikipedia
  - Untagged Languages ISO-639 (Banyumasan/Ngapak)
  - Indonesian Language
  - Malaysian Language
  - Indonesia-related Languages
  - Indonesian Local Languages
dataset_info:
  - config_name: indowiki_all
    features:
      - name: url
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: ace
        num_bytes: 4875688
        num_examples: 12932
      - name: ban
        num_bytes: 17561379
        num_examples: 20243
      - name: bjn
        num_bytes: 6669628
        num_examples: 10460
      - name: bug
        num_bytes: 3297641
        num_examples: 15877
      - name: gor
        num_bytes: 6007726
        num_examples: 14572
      - name: id
        num_bytes: 1103106307
        num_examples: 657990
      - name: jv
        num_bytes: 70335030
        num_examples: 73150
      - name: map_bms
        num_bytes: 5215803
        num_examples: 13574
      - name: min
        num_bytes: 116481049
        num_examples: 227024
      - name: ms
        num_bytes: 416001194
        num_examples: 367463
      - name: nia
        num_bytes: 1938378
        num_examples: 1651
      - name: su
        num_bytes: 47489084
        num_examples: 61557
      - name: tet
        num_bytes: 1452716
        num_examples: 1465
    download_size: 1803193334
    dataset_size: 1800431623
  - config_name: indowiki_dedup_all
    features:
      - name: url
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: ace
        num_bytes: 4867838
        num_examples: 12904
      - name: ban
        num_bytes: 17366080
        num_examples: 19837
      - name: bjn
        num_bytes: 6655378
        num_examples: 10437
      - name: bug
        num_bytes: 2072609
        num_examples: 9793
      - name: gor
        num_bytes: 5989252
        num_examples: 14514
      - name: id
        num_bytes: 1100932403
        num_examples: 654287
      - name: jv
        num_bytes: 69774853
        num_examples: 72667
      - name: map_bms
        num_bytes: 5060989
        num_examples: 11832
      - name: min
        num_bytes: 116376870
        num_examples: 225858
      - name: ms
        num_bytes: 410443550
        num_examples: 346186
      - name: nia
        num_bytes: 1938121
        num_examples: 1650
      - name: su
        num_bytes: 47410439
        num_examples: 61494
      - name: tet
        num_bytes: 1447926
        num_examples: 1460
    download_size: 1793103024
    dataset_size: 1790336308
  - config_name: indowiki_dedup_id_only
    features:
      - name: url
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 1100932403
        num_examples: 654287
    download_size: 1103131493
    dataset_size: 1100932403
---

# Indonesian Wikipedia Data Repository

License: CC BY-SA 3.0

Welcome to the Indonesian Wikipedia Data Repository. The datasets are extracted from [Wikipedia HF](https://huggingface.co/datasets/wikipedia) and processed using the scripts available in this repository for reproducibility purposes.

## FAQs

### How do I extract a new Wikipedia dataset of Indonesian languages?

You may check the script `extract_raw_wiki_data.py` to understand its implementation, or adjust the bash script provided in `extract_raw_wiki_data_indo.sh` to run the extraction on your own.
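For orientation, here is a minimal, hypothetical sketch of what such an extraction can look like using the HuggingFace `wikipedia` loader; the language code, dump date, and output filename below are illustrative and not taken from `extract_raw_wiki_data.py` itself:

```python
# Hypothetical extraction sketch; the actual logic lives in extract_raw_wiki_data.py.
from datasets import load_dataset

lang = "ace"            # any Wikipedia language code covered by this repo (illustrative)
dump_date = "20230901"  # Wikipedia dump date stamp, YYYYMMDD (illustrative)

raw_wiki = load_dataset(
    "wikipedia",
    language=lang,
    date=dump_date,
    beam_runner="DirectRunner",  # raw dumps are parsed with Apache Beam + mwparserfromhell
    split="train",
)

# Persist the raw articles (url, title, text) to CSV for later cleansing.
raw_wiki.to_csv(f"wiki_{lang}_{dump_date}_raw_dataset.csv", index=False)
```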

### How do I check which Wikipedia dumps and languages are available for extraction?

You may visit the Wikipedia Dump Index to check the latest available data, and the Wikipedia Language Coverage page to map the languages you want to extract.

### How is the data preprocessed? What makes it different from loading it directly from Wikipedia HF?

The data available here is processed with the following flow:

  1. The raw data is deduplicated on title and text (the text content of a given article) to remove articles containing boilerplate text (template text typically used when no information is available or when asking for content contributions), which is usually considered noise in NLP data.
  2. The title and text are then checked for string-matching duplication after light preprocessing (symbols removed, HTML tags stripped, ASCII characters validated). You may check the cleanse_wiki_data.py script to understand its implementation; a simplified sketch follows this list.
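A simplified sketch of these two passes (not the exact code in `cleanse_wiki_data.py`), assuming the raw split has been exported to a CSV with `url`, `title`, and `text` columns; the file names and normalization rules below are illustrative:

```python
# Simplified sketch of the deduplication flow; see cleanse_wiki_data.py for the real rules.
import re
import pandas as pd

def normalize(text: str) -> str:
    """Illustrative normalization: strip HTML tags, drop symbols/non-ASCII, collapse spaces."""
    text = re.sub(r"<[^>]+>", " ", text)         # strip HTML tags
    text = re.sub(r"[^0-9A-Za-z\s]", " ", text)  # keep only ASCII alphanumerics and whitespace
    return re.sub(r"\s+", " ", text).strip().lower()

df = pd.read_csv("wiki_id_raw_dataset.csv")      # columns: url, title, text (illustrative path)

# Pass 1: exact deduplication on the raw title and text columns.
df = df.drop_duplicates(subset=["title", "text"])

# Pass 2: string-matching deduplication on the normalized title and text.
df["title_norm"] = df["title"].astype(str).map(normalize)
df["text_norm"] = df["text"].astype(str).map(normalize)
df = df.drop_duplicates(subset=["title_norm", "text_norm"])
df = df.drop(columns=["title_norm", "text_norm"])

df.to_csv("wiki_id_dedup_dataset.csv", index=False)
```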

## Getting Started

### To read the datasets directly

Use one of the following code chunks to load the dataset from the HuggingFace Hub. The config name is passed as the second argument of `load_dataset`:

```python
from datasets import load_dataset

dataset = load_dataset(
    "sabilmakbar/indonesian_wiki",
    # config name, one of: "indowiki_all", "indowiki_dedup_all" (default), "indowiki_dedup_id_only"
    "indowiki_dedup_all",
)
```

Or you can provide both `lang` and `date_stamp` (providing only one will throw an error):

```python
from datasets import load_dataset

dataset = load_dataset(
    "sabilmakbar/indonesian_wiki",
    lang="id",  # see the splits listed above for the complete lang choices
    date_stamp="20230901",
)
```
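When loaded with a config name (the first form above), the result is a `DatasetDict` keyed by language split; a quick, illustrative way to inspect it (the split key `"id"` matches the splits listed in the metadata):

```python
# Quick inspection, assuming `dataset` was loaded with a config name (first form above).
print(dataset)                         # lists the available language splits, e.g. "ace", "id", "jv"
print(dataset["id"][0]["title"])       # title of the first Indonesian article
print(dataset["id"][0]["text"][:300])  # first 300 characters of its text
```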

### To replicate the whole dataset generation process

  1. Set up a new Python/Conda environment (recommended Python version: 3.9.6 to 3.9.18 or 3.10.0 to 3.10.13) and install the requirements of this codebase via `pip install -r requirements.txt`.
  2. Activate the chosen Python/Conda environment in which the requirements were installed.
  3. Run this sh script to extract data from the Wikimedia Dump: `sh extract_raw_wiki_data_indo.sh`.
  4. Run this sh script for deduplication: `sh dedup_raw_wiki_data_indo.sh`.

## Citation Info

@ONLINE{wikidump,
    author = "Wikimedia Foundation",
    title  = "Wikimedia Downloads",
    url    = "https://dumps.wikimedia.org"}
@ONLINE{wikipedia-hf,
    title  = "Huggingface Wikipedia Dataset",
    url    = "https://huggingface.co/datasets/wikipedia"}