moaminsharifi committed
Commit e7902d6
1 Parent(s): c843cc0

Update README.md

Files changed (1): README.md +20 -1
README.md CHANGED
@@ -1,9 +1,28 @@
- # Wikipedia Corpus for Spell Checking Tasks
+ # Persian / Farsi Wikipedia Corpus for Spell Checking Tasks
+
 
  ## Overview
 
  The Wikipedia Corpus is an open source dataset specifically designed for use in spell checking tasks. It is available on Hugging Face and can be accessed and utilized by anyone interested in improving spell checking algorithms.
 
+ ### Formula
+
+ | chance of being  | %     |
+ |------------------|-------|
+ | normal sentences | >=98% |
+ | manipulation     | <=2%  |
+
+ *Note that 2/100 × 1/100 = 0.0002, i.e. a 0.02% overall chance per manipulation function.*
+
+ | chance of manipulation for each function | %                  |
+ |------------------------------------------|--------------------|
+ | `delete_word` function                   | 1% (0.02% overall) |
+ | `delete_characters` function             | 1% (0.02% overall) |
+ | `insert_characters` function             | 1% (0.02% overall) |
+ | `replace_characters` function            | 1% (0.02% overall) |
+ | `swap_characters_case` function          | 1% (0.02% overall) |
+
+
  ## Purpose
 
  The primary objective of the Wikipedia Corpus is to serve as a comprehensive and reliable resource for training and evaluating spell checking models. By leveraging the vast amount of text data from Wikipedia, this dataset offers a diverse range of language patterns and real-world spelling errors. This allows researchers and developers to create more effective spell checking algorithms that can handle a wide variety of texts.
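The Formula section above lists only the manipulation probabilities, not the code that applies them. The following is a minimal Python sketch of how such a corruption pass could look, assuming each listed function fires independently with a 1% chance once a sentence falls into the <=2% manipulation branch (which yields the 0.02% overall figure). The five function names come from the table; their bodies are illustrative guesses, not the dataset's actual generator.

```python
import random

# Hypothetical sketch of the corruption scheme described in the "Formula"
# section. Function names mirror the README table; the specific edits each
# one makes are assumptions for illustration only.

def delete_word(s: str) -> str:
    """Drop one randomly chosen word."""
    words = s.split()
    if len(words) > 1:
        words.pop(random.randrange(len(words)))
    return " ".join(words)

def delete_characters(s: str) -> str:
    """Drop one randomly chosen character."""
    if not s:
        return s
    i = random.randrange(len(s))
    return s[:i] + s[i + 1:]

def insert_characters(s: str) -> str:
    """Duplicate one randomly chosen character."""
    if not s:
        return s
    i = random.randrange(len(s))
    return s[:i] + s[i] + s[i:]

def replace_characters(s: str) -> str:
    """Replace one character with another drawn from the same sentence."""
    if len(s) < 2:
        return s
    i = random.randrange(len(s))
    return s[:i] + random.choice(s) + s[i + 1:]

def swap_characters_case(s: str) -> str:
    """Flip the case of one character (a no-op for caseless scripts such as Persian)."""
    if not s:
        return s
    i = random.randrange(len(s))
    return s[:i] + s[i].swapcase() + s[i + 1:]

MANIPULATORS = [delete_word, delete_characters, insert_characters,
                replace_characters, swap_characters_case]

def corrupt(sentence: str) -> str:
    """Keep the sentence unchanged >=98% of the time; otherwise give each
    manipulation a 1% chance to fire, i.e. 2% * 1% = 0.02% per function."""
    if random.random() >= 0.02:        # >=98%: normal sentence, untouched
        return sentence
    for manipulate in MANIPULATORS:    # <=2%: candidate for manipulation
        if random.random() < 0.01:     # each function fires with 1% chance
            sentence = manipulate(sentence)
    return sentence
```

Running such a pass over each sentence of the Wikipedia dump, while keeping the original text alongside the output, would produce the mix of clean and lightly corrupted sentences described in the tables above, which is the kind of pairing a spell-checking model can be trained and evaluated on.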