Alienmaster committed on
Commit 8fd1cea
1 Parent(s): d5cbe9d

First commit

Files changed (2)
  1. README.md +51 -0
  2. full.parquet +3 -0
README.md ADDED
@@ -0,0 +1,51 @@
+ ---
+ language:
+ - de
+ multilinguality:
+ - monolingual
+ license: cc-by-sa-4.0
+ size_categories:
+ - 100K<n<1M
+ task_categories:
+ - text-classification
+ pretty_name: Leipzig Corpora Wikipedia 2016 1 Million Sentences German
+ configs:
+ - config_name: default
+   data_files:
+   - split: full
+     path: "full.parquet"
+ ---
+ ## Leipzig Corpora Wikipedia 2016 1 Million Sentences German
+
+ This dataset contains one million sentences from the German Wikipedia. The data were collected in 2016.
+ Every element in the dataset is labeled as "neutral".
+
+ The source can be found [here](https://wortschatz.uni-leipzig.de/de/download/German).
+
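+ A minimal loading sketch for the `configs` fragment in the YAML header above, assuming the Hugging Face `datasets` library. The repository id below is a placeholder, not taken from this page; only the split name `full` and the "neutral" label come from the card itself.
+
+ ```python
+ from datasets import load_dataset
+
+ # Placeholder repo id; substitute the actual dataset repository.
+ # The "full" split maps to full.parquet per the YAML `configs` block.
+ ds = load_dataset("<user>/<dataset-name>", split="full")
+
+ # Every element carries the constant "neutral" label described above.
+ print(ds[0])
+ ```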
+ ## Citation
+
+ ```
+ @inproceedings{goldhahn-etal-2012-building,
+     title = "Building Large Monolingual Dictionaries at the {L}eipzig Corpora Collection: From 100 to 200 Languages",
+     author = "Goldhahn, Dirk and
+       Eckart, Thomas and
+       Quasthoff, Uwe",
+     editor = "Calzolari, Nicoletta and
+       Choukri, Khalid and
+       Declerck, Thierry and
+       Do{\u{g}}an, Mehmet U{\u{g}}ur and
+       Maegaard, Bente and
+       Mariani, Joseph and
+       Moreno, Asuncion and
+       Odijk, Jan and
+       Piperidis, Stelios",
+     booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)",
+     month = may,
+     year = "2012",
+     address = "Istanbul, Turkey",
+     publisher = "European Language Resources Association (ELRA)",
+     url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/327_Paper.pdf",
+     pages = "759--765",
+     abstract = "The Leipzig Corpora Collection offers free online access to 136 monolingual dictionaries enriched with statistical information. In this paper we describe current advances of the project in collecting and processing text data automatically for a large number of languages. Our main interest lies in languages of “low density”, where only few text data exists online. The aim of this approach is to create monolingual dictionaries and statistical information for a high number of new languages and to expand the existing dictionaries, opening up new possibilities for linguistic typology and other research. Focus of this paper will be set on the infrastructure for the automatic acquisition of large amounts of monolingual text in many languages from various sources. Preliminary results of the collection of text data will be presented. The mainly language-independent framework for preprocessing, cleaning and creating the corpora and computing the necessary statistics will also be depicted.",
+ }
+ ```
full.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:31f513036270acda482f744d09ecc236e396c86f82eaf537d39899ad6d21adfd
+ size 83692241