---
license: mit
language_bcp47:
  - ru-RU
tags:
  - spellchecking
language:
  - ru
size_categories:
  - 100K<n<1M
task_categories:
  - text2text-generation
---

Dataset Summary

This dataset is a collection of samples for evaluating spell checking, grammatical error correction, and ungrammatical text detection models.

The dataset contains two splits:

test.json contains samples hand-selected to evaluate the quality of models.

train.json contains synthetic samples generated in various ways.

The dataset was originally created to test an internal spellchecker for a generative poetry project, but it can also be useful in other projects, since it is not explicitly specialized for poetry. You can treat this dataset as an extension of RuCOLA. In addition, some samples include a corrected version of the text (the "fixed_sentence" field), so the dataset can also serve as an extension of the datasets in ai-forever/spellcheck_benchmark.
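Used as an extension of RuCOLA, the dataset supports binary acceptability evaluation. A minimal sketch of such an evaluation follows; the `classify` function is a hypothetical placeholder for a real model, and the records are a tiny in-memory stand-in for test.json:

```python
import json

# Two records in the format used by test.json (texts abbreviated).
records = json.loads("""[
  {"id": 1, "sentence": "первый пример", "label": 0},
  {"id": 2, "sentence": "второй пример", "label": 1}
]""")

def classify(sentence):
    # Hypothetical stand-in for a real acceptability model:
    # returns 1 for "acceptable", 0 for "unacceptable".
    return 1

# Accuracy of the (trivial) classifier against the gold labels.
correct = sum(1 for r in records if classify(r["sentence"]) == r["label"])
accuracy = correct / len(records)
print(f"accuracy: {accuracy:.2f}")
```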

Example

{
        "id": 1483,
        "sentence": "Разучи стихов по больше",
        "fixed_sentence": "Разучи стихов побольше",
        "label": 0,
        "error_type": "Tokenization",
        "domain": "prose"
}
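Records in this format can be read with the standard json module. The snippet below embeds the example above as a string for illustration; in practice you would read test.json or train.json directly:

```python
import json

record = json.loads("""{
        "id": 1483,
        "sentence": "Разучи стихов по больше",
        "fixed_sentence": "Разучи стихов побольше",
        "label": 0,
        "error_type": "Tokenization",
        "domain": "prose"
}""")

# label 0 marks the sentence as unacceptable; the corrected form,
# when available, is in fixed_sentence.
if record["label"] == 0 and record["fixed_sentence"] is not None:
    print(record["sentence"], "->", record["fixed_sentence"])
```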

Notes

The test split contains only mistakes made by real people; there are no synthetic errors among them.

The errors in the test split were made by people of different genders, ages, and educational and social backgrounds, and in different contexts.

The input and output text is not necessarily a single sentence; it can also be 1) part of a sentence, 2) several sentences forming a paragraph, or 3) a fragment of a poem, usually one or two quatrains.

The texts may include offensive content, material that offends religious or political feelings, content that contradicts moral standards, etc. Such samples are included only to make the corpus as representative as possible for processing messages in media such as blogs and comments.

One sample may contain several errors of different types.

Uncensoring samples

A number of samples contain text with explicit obscenities:

{
        "id": 1,
        "sentence": "Но не простого - с лёгкой еб@нцой.",
        "fixed_sentence": "Но не простого - с лёгкой ебанцой.",
        "label": 0,
        "error_type": "Misspelling",
        "domain": "prose"
}

Poetry samples

A few poetry samples are included in this version:

{
        "id": 24,
        "sentence": "Чему научит забытьё?\nСмерть формы д'арует литьё.\nРезец мгновенье любит стружка...\nСмерть безобидная подружка!",
        "fixed_sentence": null,
        "label": 0,
        "error_type": "Grammar",
        "domain": "poetry"
}

Dataset fields

id (int64): the sentence's id, starting from 1.
sentence (str): the original sentence.
fixed_sentence (str): the corrected version of the original sentence, or null if no correction is provided.
label (int): the target class: 1 for "acceptable", 0 for "unacceptable".
error_type (str): the violation category: Spelling, Grammar, Tokenization, Punctuation, Mixture, or Unknown.
domain (str): the text domain: "prose" or "poetry".
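A minimal validation sketch based on the field descriptions above. Note that released records may deviate from these sets (for example, the "Misspelling" label in the uncensoring example), so treat the allowed values as approximate rather than exhaustive:

```python
ALLOWED_ERROR_TYPES = {"Spelling", "Grammar", "Tokenization",
                       "Punctuation", "Mixture", "Unknown"}
ALLOWED_DOMAINS = {"prose", "poetry"}

def validate(record):
    """Check a record against the field descriptions in this card."""
    assert isinstance(record["id"], int) and record["id"] >= 1
    assert isinstance(record["sentence"], str)
    assert record["fixed_sentence"] is None or isinstance(record["fixed_sentence"], str)
    assert record["label"] in (0, 1)
    assert record["error_type"] in ALLOWED_ERROR_TYPES
    assert record["domain"] in ALLOWED_DOMAINS
    return True

ok = validate({"id": 6, "sentence": "Я подбираю по проще слова",
               "fixed_sentence": "Я подбираю попроще слова",
               "label": 0, "error_type": "Tokenization", "domain": "prose"})
```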

Error types

Tokenization: a word is split into two tokens, or two words are merged into one.

{
        "id": 6,
        "sentence": "Я подбираю по проще слова",
        "fixed_sentence": "Я подбираю попроще слова",
        "label": 0,
        "error_type": "Tokenization",
        "domain": "prose"
}

Punctuation: a missing or extra comma, hyphen, or other punctuation mark.

{
        "id": 5,
        "sentence": "И швырнуть по-дальше",
        "fixed_sentence": "И швырнуть подальше",
        "label": 0,
        "error_type": "Punctuation",
        "domain": "prose"
}

Spelling: a word is spelled incorrectly.

{
        "id": 38,
        "sentence": "И ведь что интересно, русские официально ни в одном крестовом позоде не участвовали.",
        "fixed_sentence": "И ведь что интересно, русские официально ни в одном крестовом походе не участвовали.",
        "label": 0,
        "error_type": "Spelling",
        "domain": "prose"
}

Grammar: one of the words is in the wrong grammatical form, for example a verb in the infinitive instead of a personal form.

{
        "id": 61,
        "sentence": "на него никто не польститься",
        "fixed_sentence": "на него никто не польстится",
        "label": 0,
        "error_type": "Grammar",
        "domain": "prose"
}

Please note that error categories are not always assigned accurately, so the "error_type" field should not be used to train classifiers.

Statistics

Statistics for the test split:

+--------+---------+---------+-------------+--------------+----------+---------+-------+
| Domain | Grammar | Unknown | Punctuation | Tokenization | Spelling | Mixture | TOTAL |
+--------+---------+---------+-------------+--------------+----------+---------+-------+
| prose  | 185     | 636     | 1407        | 1999         | 1802     | 150     | 6179  |
| poetry | 1       | 614     | 222         | 172          | 27       | 30      | 1066  |
+--------+---------+---------+-------------+--------------+----------+---------+-------+
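Counts like those in the table can be recomputed from the records themselves with a plain Counter. A sketch over a tiny in-memory sample:

```python
from collections import Counter

# A tiny in-memory stand-in for the test split records.
records = [
    {"domain": "prose",  "error_type": "Spelling"},
    {"domain": "prose",  "error_type": "Tokenization"},
    {"domain": "poetry", "error_type": "Unknown"},
]

# Per-(domain, error_type) cells and per-domain totals of the table.
counts = Counter((r["domain"], r["error_type"]) for r in records)
totals = Counter(r["domain"] for r in records)
print(counts[("prose", "Spelling")], totals["prose"])
```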