---
annotations_creators: []
language: en
size_categories:
- 10K<n<100K
task_categories:
- image-classification
task_ids: []
pretty_name: EMNIST-Letters-10k
tags:
- fiftyone
- image
- image-classification
dataset_summary: '



  ![image/png](dataset_preview.png)



  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 10000 samples.


  ## Installation


  If you haven''t already, install FiftyOne:


  ```bash

  pip install -U fiftyone

  ```


  ## Usage


  ```python

  import fiftyone as fo

  from fiftyone.utils.huggingface import load_from_hub


  # Load the dataset

  # Note: other available arguments include ''max_samples'', etc.

  dataset = load_from_hub("Voxel51/emnist-letters-tiny")


  # Launch the App

  session = fo.launch_app(dataset)

  ```

  '
---

# Dataset Card for EMNIST-Letters-10k

<!-- Provide a quick summary of the dataset. -->


A random 10,000-sample subset of the train and test splits from the letters portion of [EMNIST](https://pytorch.org/vision/0.18/generated/torchvision.datasets.EMNIST.html).
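The exact sampling procedure used to build this subset is not documented. As a rough illustration only, here is a hedged sketch of drawing 10,000 indices proportionally from the official EMNIST Letters splits (124,800 train / 20,800 test, per the EMNIST paper); the seed and the proportional strategy are assumptions, not the actual curation recipe:

```python
import random

# Official EMNIST Letters split sizes (Cohen et al., 2017)
TRAIN_SIZE, TEST_SIZE = 124_800, 20_800
N_SUBSET = 10_000

random.seed(51)  # arbitrary seed; the seed actually used for this dataset is unknown

# Draw indices proportionally from the train and test splits
n_train = round(N_SUBSET * TRAIN_SIZE / (TRAIN_SIZE + TEST_SIZE))
n_test = N_SUBSET - n_train

train_indices = random.sample(range(TRAIN_SIZE), n_train)
test_indices = random.sample(range(TEST_SIZE), n_test)
```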



![image/png](dataset_preview.png)


This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 10000 samples.

## Installation

If you haven't already, install FiftyOne:

```bash
pip install -U fiftyone
```

## Usage

```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load the dataset
# Note: other available arguments include 'max_samples', etc.
dataset = load_from_hub("Voxel51/emnist-letters-tiny")

# Launch the App
session = fo.launch_app(dataset)
```


## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->



- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Homepage:** https://www.nist.gov/itl/products-and-services/emnist-dataset
- **Paper:** https://arxiv.org/abs/1702.05373v1



## Citation
**BibTeX:**

```bibtex
@misc{cohen2017emnistextensionmnisthandwritten,
      title={EMNIST: an extension of MNIST to handwritten letters}, 
      author={Gregory Cohen and Saeed Afshar and Jonathan Tapson and André van Schaik},
      year={2017},
      eprint={1702.05373},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/1702.05373}, 
}
```


## Dataset Card Author

[Jacob Marks](https://huggingface.co/jamarks)