---
dataset_info:
  features:
  - name: Claim
    dtype: string
  - name: Context
    dtype: string
  - name: Source
    dtype: string
  - name: Source Indices
    dtype: string
  - name: Relation
    dtype: string
  - name: Relation Indices
    dtype: string
  - name: Target
    dtype: string
  - name: Target Indices
    dtype: string
  - name: Inconsistent Claim Component
    dtype: string
  - name: Inconsistent Context-Span
    dtype: string
  - name: Inconsistent Context-Span Indices
    dtype: string
  - name: Inconsistency Type
    dtype: string
  - name: Fine-grained Inconsistent Entity-Type
    dtype: string
  - name: Coarse Inconsistent Entity-Type
    dtype: string
  splits:
  - name: train
    num_bytes: 2657091
    num_examples: 6443
  - name: validation
    num_bytes: 333142
    num_examples: 806
  - name: test
    num_bytes: 332484
    num_examples: 806
  download_size: 1784422
  dataset_size: 3322717
task_categories:
- token-classification
language:
- en
pretty_name: FICLE Dataset
size_categories:
- 1K<n<10K
---
# FICLE Dataset

The dataset can be loaded with the `datasets` library as follows:

```python
from datasets import load_dataset
ficle_data = load_dataset("tathagataraha/ficle")
```
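
A minimal sketch for inspecting the loaded splits; the expected counts in the comment are taken from the metadata in this card:

```python
from datasets import load_dataset

ficle_data = load_dataset("tathagataraha/ficle")

# List the available splits and their sizes.
for split_name, split in ficle_data.items():
    print(split_name, len(split))
# Per the metadata above: train 6443, validation 806, test 806
```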

# Dataset Card for FICLE

## Dataset Description

* **GitHub Repo:** 
* **Paper:** 
* **Point of Contact:** 

### Dataset Summary

Each example in FICLE pairs a claim with a context and annotates the factual inconsistency between them: the source, relation, and target components of the claim (with token indices), the inconsistent claim component, the inconsistent context span (with token indices), the inconsistency type, and fine- and coarse-grained entity types for the inconsistency. The annotations are framed as a token-classification task.

### Languages

The FICLE Dataset contains only English.

## Dataset Structure

### Data Instances

### Data Fields

All fields are stored as strings; the names and types follow the `dataset_info` metadata above.

* `Claim`: the claim sentence.
* `Context`: the context the claim is evaluated against.
* `Source` / `Source Indices`: the source component of the claim and its token indices.
* `Relation` / `Relation Indices`: the relation component of the claim and its token indices.
* `Target` / `Target Indices`: the target component of the claim and its token indices.
* `Inconsistent Claim Component`: the claim component involved in the inconsistency.
* `Inconsistent Context-Span` / `Inconsistent Context-Span Indices`: the context span involved in the inconsistency and its token indices.
* `Inconsistency Type`: the type of factual inconsistency.
* `Fine-grained Inconsistent Entity-Type`: the fine-grained entity type of the inconsistency.
* `Coarse Inconsistent Entity-Type`: the coarse entity type of the inconsistency.
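
A minimal sketch for inspecting these fields on a single example (the printed values depend on the data itself):

```python
from datasets import load_dataset

ficle_data = load_dataset("tathagataraha/ficle")
example = ficle_data["train"][0]

# Every field is stored as a string; print a few of the annotation fields.
for field in ["Claim", "Inconsistent Claim Component",
              "Inconsistent Context-Span", "Inconsistency Type"]:
    print(f"{field}: {example[field]}")
```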


### Data Splits

The dataset is split into `train`, `validation`, and `test`:
* `train`: 6,443 examples
* `validation`: 806 examples
* `test`: 806 examples

## Dataset Creation

### Curation Rationale

### Source Data

### Data Collection and Preprocessing

### Annotations

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset
 
### Discussion of Biases

### Other Known Limitations

## Additional Information

### Citation Information

### Contact