---
license: apache-2.0
dataset_info:
- config_name: full
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 392478490.025
    num_examples: 76319
  - name: validation
    num_bytes: 43364061.55
    num_examples: 8475
  - name: test
    num_bytes: 47643036.303
    num_examples: 9443
  download_size: 473618552
  dataset_size: 483485587.878
- config_name: human_handwrite
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 16181778
    num_examples: 1200
  - name: validation
    num_bytes: 962283
    num_examples: 68
  - name: test
    num_bytes: 906906
    num_examples: 70
  download_size: 18056029
  dataset_size: 18050967
- config_name: human_handwrite_print
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 3152122.8
    num_examples: 1200
  - name: validation
    num_bytes: 182615
    num_examples: 68
  - name: test
    num_bytes: 181698
    num_examples: 70
  download_size: 1336052
  dataset_size: 3516435.8
- config_name: small
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 261296
    num_examples: 50
  - name: validation
    num_bytes: 156489
    num_examples: 30
  - name: test
    num_bytes: 156489
    num_examples: 30
  download_size: 588907
  dataset_size: 574274
- config_name: synthetic_handwrite
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 496610333.066
    num_examples: 76266
  - name: validation
    num_bytes: 63147351.515
    num_examples: 9565
  - name: test
    num_bytes: 62893132.805
    num_examples: 9593
  download_size: 616418996
  dataset_size: 622650817.3859999
configs:
- config_name: full
  data_files:
  - split: train
    path: full/train-*
  - split: validation
    path: full/validation-*
  - split: test
    path: full/test-*
- config_name: human_handwrite
  data_files:
  - split: train
    path: human_handwrite/train-*
  - split: validation
    path: human_handwrite/validation-*
  - split: test
    path: human_handwrite/test-*
- config_name: human_handwrite_print
  data_files:
  - split: train
    path: human_handwrite_print/train-*
  - split: validation
    path: human_handwrite_print/validation-*
  - split: test
    path: human_handwrite_print/test-*
- config_name: small
  data_files:
  - split: train
    path: small/train-*
  - split: validation
    path: small/validation-*
  - split: test
    path: small/test-*
- config_name: synthetic_handwrite
  data_files:
  - split: train
    path: synthetic_handwrite/train-*
  - split: validation
    path: synthetic_handwrite/validation-*
  - split: test
    path: synthetic_handwrite/test-*
task_categories:
- image-to-text
tags:
- code
size_categories:
- 100K<n<1M
---

The original data repository is on GitHub: [LinXueyuanStdio/Data-for-LaTeX_OCR](https://github.com/LinXueyuanStdio/Data-for-LaTeX_OCR).

## Datasets

This repository contains 5 datasets:

1. `small` is a tiny dataset of 110 samples, intended for testing.
2. `full` is the complete printed-formula dataset of about 100k samples. The actual count is slightly below 100k, because formulas that could not be rendered were filtered out using a LaTeX abstract syntax tree.
3. `synthetic_handwrite` is the complete handwritten dataset of about 100k samples. It is synthesized from the formulas in `full` using handwriting fonts, and can be viewed as human handwriting on paper. The actual count is slightly below 100k, for the same reason as above.
4. `human_handwrite` is a smaller handwritten dataset that more closely matches human handwriting on an electronic screen. It mainly comes from `CROHME`, and has been validated with a LaTeX abstract syntax tree.
5. `human_handwrite_print` is the printed counterpart of `human_handwrite`: the formulas are identical to those in `human_handwrite`, and the images are rendered from the formulas with LaTeX.

## Usage

Load the training set:

- `name` can be one of `small`, `full`, `synthetic_handwrite`, `human_handwrite`, `human_handwrite_print`
- `split` can be one of `train`, `validation`, `test`

```python
>>> from datasets import load_dataset
>>> train_dataset = load_dataset("linxy/LaTeX_OCR", name="small", split="train")
>>> train_dataset[2]["text"]
\rho _ { L } ( q ) = \sum _ { m = 1 } ^ { L } \ P _ { L } ( m ) \ { \frac { 1 } { q ^ { m - 1 } } } .
>>> train_dataset[2]
{'image': <PIL.PngImagePlugin.PngImageFile>, 'text': '\\rho _ { L } ( q ) = \\sum _ { m = 1 } ^ { L } \\ P _ { L } ( m ) \\ { \\frac { 1 } { q ^ { m - 1 } } } .'}
>>> len(train_dataset)
50
```

Load all splits:

```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("linxy/LaTeX_OCR", name="small")
>>> dataset
DatasetDict({
    train: Dataset({
        features: ['image', 'text'],
        num_rows: 50
    })
    validation: Dataset({
        features: ['image', 'text'],
        num_rows: 30
    })
    test: Dataset({
        features: ['image', 'text'],
        num_rows: 30
    })
})
```
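Since each `text` value is already a space-delimited sequence of LaTeX tokens, a vocabulary for a sequence-prediction OCR model can be built with plain string splitting. Below is a minimal sketch using the sample formula shown above; the special tokens and id layout are illustrative assumptions, not part of the dataset:

```python
from collections import Counter

# Example formula from the `small` split (see the usage snippet above).
sample = r"\rho _ { L } ( q ) = \sum _ { m = 1 } ^ { L } \ P _ { L } ( m ) \ { \frac { 1 } { q ^ { m - 1 } } } ."

# The text field is pre-tokenized with spaces, so tokenization is a split.
tokens = sample.split()

# Build a token -> id vocabulary; ids 0 and 1 are reserved for special
# tokens (an assumption for this sketch, not a dataset convention).
counter = Counter(tokens)
vocab = {"<pad>": 0, "<unk>": 1}
for tok in sorted(counter):
    vocab.setdefault(tok, len(vocab))

# Encode the formula as an id sequence, falling back to <unk>.
ids = [vocab.get(t, vocab["<unk>"]) for t in tokens]
print(len(tokens), len(vocab))
```

In practice you would run the same split-and-count pass over the whole training split of your chosen config before freezing the vocabulary.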