Dataset schema (from the dataset viewer):

modelId: string (length 4–112)
lastModified: string (length 24)
tags: sequence
pipeline_tag: string (21 classes)
files: sequence
publishedBy: string (length 2–37)
downloads_last_month: int32 (range 0–9.44M)
library: string (15 classes)
modelCard: large string (length 0–100k)
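The `downloads_last_month` figures in this dump are human-formatted strings such as `1,597` or `9.44M`. A minimal sketch for turning them back into integers; the helper name `parse_count` is our own, not part of any library:

```python
def parse_count(s: str) -> int:
    """Parse human-formatted counts as they appear in the dump,
    e.g. '1,597' -> 1597, '9.44M' -> 9_440_000."""
    s = s.replace(",", "").strip()
    if s and s[-1] in "Mm":
        # round() before int() avoids float truncation (9.44 * 1e6 is
        # stored as 9439999.999..., which int() alone would floor)
        return int(round(float(s[:-1]) * 1_000_000))
    if s and s[-1] in "Kk":
        return int(round(float(s[:-1]) * 1_000))
    return int(s)
```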
Helsinki-NLP/opus-mt-en-afa
2021-01-18T08:04:43.000Z
[ "pytorch", "marian", "seq2seq", "en", "so", "ti", "am", "he", "mt", "ar", "afa", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
25
transformers
--- language: - en - so - ti - am - he - mt - ar - afa tags: - translation license: apache-2.0 --- ### eng-afa * source group: English * target group: Afro-Asiatic languages * OPUS readme: [eng-afa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-afa/README.md) * model: transformer * source language(s): eng * target language(s): acm afb amh apc ara arq ary arz hau_Latn heb kab mlt rif_Latn shy_Latn som tir * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-amh.eng.amh | 11.6 | 0.504 | | Tatoeba-test.eng-ara.eng.ara | 12.0 | 0.404 | | Tatoeba-test.eng-hau.eng.hau | 10.2 | 0.429 | | Tatoeba-test.eng-heb.eng.heb | 32.3 | 0.551 | | Tatoeba-test.eng-kab.eng.kab | 1.6 | 0.191 | | Tatoeba-test.eng-mlt.eng.mlt | 17.7 | 0.551 | | Tatoeba-test.eng.multi | 14.4 | 0.375 | | Tatoeba-test.eng-rif.eng.rif | 1.7 | 0.103 | | Tatoeba-test.eng-shy.eng.shy | 0.8 | 0.090 | | Tatoeba-test.eng-som.eng.som | 16.0 | 0.429 | | Tatoeba-test.eng-tir.eng.tir | 2.7 | 0.238 | ### System Info: - hf_name: eng-afa - source_languages: eng - target_languages: afa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-afa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'so', 'ti', 'am', 'he', 'mt', 'ar', 'afa'] - src_constituents: {'eng'} - tgt_constituents: {'som', 
'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: afa - short_pair: en-afa - chrF2_score: 0.375 - bleu: 14.4 - brevity_penalty: 1.0 - ref_len: 58110.0 - src_name: English - tgt_name: Afro-Asiatic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: afa - prefer_old: False - long_pair: eng-afa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
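The multilingual cards in this dump repeatedly note that "a sentence initial language token is required in the form of `>>id<<`". The sketch below only prepares the input strings in that format; the helper name is ours, and no model is loaded:

```python
def add_target_token(sentences, lang_id):
    """Prepend the >>id<< target-language token that the multilingual
    opus-mt models expect at the start of every source sentence."""
    return [f">>{lang_id}<< {s}" for s in sentences]

# Requesting Hebrew output from the en-afa model, for example:
batch = add_target_token(["How are you?"], "heb")
```

Feeding such a batch to the tokenizer of a multi-target checkpoint like `Helsinki-NLP/opus-mt-en-afa` is what steers decoding toward the chosen target language.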
Helsinki-NLP/opus-mt-en-alv
2021-01-18T08:04:49.000Z
[ "pytorch", "marian", "seq2seq", "en", "sn", "rw", "wo", "ig", "sg", "ee", "zu", "lg", "ts", "ln", "ny", "yo", "rn", "xh", "alv", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
58
transformers
--- language: - en - sn - rw - wo - ig - sg - ee - zu - lg - ts - ln - ny - yo - rn - xh - alv tags: - translation license: apache-2.0 --- ### eng-alv * source group: English * target group: Atlantic-Congo languages * OPUS readme: [eng-alv](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-alv/README.md) * model: transformer * source language(s): eng * target language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-ewe.eng.ewe | 4.9 | 0.212 | | Tatoeba-test.eng-ful.eng.ful | 0.6 | 0.079 | | Tatoeba-test.eng-ibo.eng.ibo | 3.5 | 0.255 | | Tatoeba-test.eng-kin.eng.kin | 10.5 | 0.510 | | Tatoeba-test.eng-lin.eng.lin | 1.1 | 0.273 | | Tatoeba-test.eng-lug.eng.lug | 5.3 | 0.340 | | Tatoeba-test.eng.multi | 11.4 | 0.429 | | Tatoeba-test.eng-nya.eng.nya | 18.1 | 0.595 | | Tatoeba-test.eng-run.eng.run | 13.9 | 0.484 | | Tatoeba-test.eng-sag.eng.sag | 5.3 | 0.194 | | Tatoeba-test.eng-sna.eng.sna | 26.2 | 0.623 | | Tatoeba-test.eng-swa.eng.swa | 1.0 | 0.141 | | Tatoeba-test.eng-toi.eng.toi | 7.0 | 0.224 | | Tatoeba-test.eng-tso.eng.tso | 46.7 | 0.643 | | Tatoeba-test.eng-umb.eng.umb | 7.8 | 0.359 | | Tatoeba-test.eng-wol.eng.wol | 6.8 | 0.191 | | Tatoeba-test.eng-xho.eng.xho | 27.1 | 0.629 | | Tatoeba-test.eng-yor.eng.yor | 17.4 | 0.356 | | 
Tatoeba-test.eng-zul.eng.zul | 34.1 | 0.729 | ### System Info: - hf_name: eng-alv - source_languages: eng - target_languages: alv - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-alv/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv'] - src_constituents: {'eng'} - tgt_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: alv - short_pair: en-alv - chrF2_score: 0.429 - bleu: 11.4 - brevity_penalty: 1.0 - ref_len: 10603.0 - src_name: English - tgt_name: Atlantic-Congo languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: alv - prefer_old: False - long_pair: eng-alv - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-ar
2021-02-28T14:15:11.000Z
[ "pytorch", "rust", "marian", "seq2seq", "en", "ar", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "rust_model.ot", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
1,597
transformers
--- language: - en - ar tags: - translation license: apache-2.0 --- ### eng-ara * source group: English * target group: Arabic * OPUS readme: [eng-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ara/README.md) * model: transformer * source language(s): eng * target language(s): acm afb apc apc_Latn ara ara_Latn arq arq_Latn ary arz * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.ara | 14.0 | 0.437 | ### System Info: - hf_name: eng-ara - source_languages: eng - target_languages: ara - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ara/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'ar'] - src_constituents: {'eng'} - tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.test.txt - src_alpha3: eng - tgt_alpha3: ara - short_pair: en-ar - chrF2_score: 0.43700000000000006 - bleu: 14.0 - brevity_penalty: 1.0 - ref_len: 58935.0 - src_name: English - tgt_name: Arabic - train_date: 2020-07-03 - src_alpha2: en - tgt_alpha2: ar - 
prefer_old: False - long_pair: eng-ara - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
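Each Tatoeba-Challenge card above ends with a `### System Info:` block of `- key: value` pairs, flattened onto one line in this dump. A best-effort sketch for recovering those pairs as a dict; the function name and regex are ours, not an official parser:

```python
import re

def parse_system_info(card_text: str) -> dict:
    """Extract '- key: value' pairs from the flattened System Info
    tail of an opus-mt model card. A value runs until the next
    ' - key:' marker (or end of string), so hyphenated values
    and URLs survive intact."""
    tail = card_text.split("### System Info:", 1)[-1]
    pairs = re.findall(r"- (\w+): (.+?)(?= - \w+:|$)", tail)
    return {key: value.strip() for key, value in pairs}
```

On the eng-ara card this yields entries such as `hf_name`, `bleu`, and `url_model` keyed exactly as they appear in the text.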
Helsinki-NLP/opus-mt-en-az
2021-01-18T08:05:00.000Z
[ "pytorch", "marian", "seq2seq", "en", "az", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
328
transformers
--- language: - en - az tags: - translation license: apache-2.0 --- ### eng-aze * source group: English * target group: Azerbaijani * OPUS readme: [eng-aze](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-aze/README.md) * model: transformer-align * source language(s): eng * target language(s): aze_Latn * model: transformer-align * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.aze | 18.6 | 0.477 | ### System Info: - hf_name: eng-aze - source_languages: eng - target_languages: aze - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-aze/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'az'] - src_constituents: {'eng'} - tgt_constituents: {'aze_Latn'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.test.txt - src_alpha3: eng - tgt_alpha3: aze - short_pair: en-az - chrF2_score: 0.47700000000000004 - bleu: 18.6 - brevity_penalty: 1.0 - ref_len: 13012.0 - src_name: English - tgt_name: Azerbaijani - train_date: 2020-06-16 - src_alpha2: en - tgt_alpha2: az - prefer_old: False - long_pair: eng-aze - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - 
port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-bat
2021-01-18T08:05:05.000Z
[ "pytorch", "marian", "seq2seq", "en", "lt", "lv", "bat", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
51
transformers
--- language: - en - lt - lv - bat tags: - translation license: apache-2.0 --- ### eng-bat * source group: English * target group: Baltic languages * OPUS readme: [eng-bat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bat/README.md) * model: transformer * source language(s): eng * target language(s): lav lit ltg prg_Latn sgs * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2017-enlv-englav.eng.lav | 24.0 | 0.546 | | newsdev2019-enlt-englit.eng.lit | 20.9 | 0.533 | | newstest2017-enlv-englav.eng.lav | 18.3 | 0.506 | | newstest2019-enlt-englit.eng.lit | 13.6 | 0.466 | | Tatoeba-test.eng-lav.eng.lav | 42.8 | 0.652 | | Tatoeba-test.eng-lit.eng.lit | 37.1 | 0.650 | | Tatoeba-test.eng.multi | 37.0 | 0.616 | | Tatoeba-test.eng-prg.eng.prg | 0.5 | 0.130 | | Tatoeba-test.eng-sgs.eng.sgs | 4.1 | 0.178 | ### System Info: - hf_name: eng-bat - source_languages: eng - target_languages: bat - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bat/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'lt', 'lv', 'bat'] - src_constituents: {'eng'} - tgt_constituents: {'lit', 'lav', 'prg_Latn', 'ltg', 'sgs'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: 
https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: bat - short_pair: en-bat - chrF2_score: 0.616 - bleu: 37.0 - brevity_penalty: 0.956 - ref_len: 26417.0 - src_name: English - tgt_name: Baltic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: bat - prefer_old: False - long_pair: eng-bat - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
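The eng-bat record is the first one here reporting a brevity_penalty below 1.0 (0.956, against ref_len 26417). BLEU's brevity penalty follows the standard formula sketched below; working backwards from those two numbers, the system output was roughly 25.3k tokens, which is our estimate rather than a figure from the card:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """Standard BLEU brevity penalty: 1.0 when the hypothesis is at
    least as long as the reference, exp(1 - ref_len/hyp_len) when
    the hypothesis is shorter."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)
```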
Helsinki-NLP/opus-mt-en-bcl
2021-01-18T08:05:10.000Z
[ "pytorch", "marian", "seq2seq", "en", "bcl", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
82
transformers
--- tags: - translation --- ### opus-mt-en-bcl * source languages: en * target languages: bcl * OPUS readme: [en-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bcl/README.md) * dataset: opus+bt * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus+bt-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.zip) * test set translations: [opus+bt-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.test.txt) * test set scores: [opus+bt-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.bcl | 54.3 | 0.722 |
Helsinki-NLP/opus-mt-en-bem
2021-01-18T08:05:16.000Z
[ "pytorch", "marian", "seq2seq", "en", "bem", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
63
transformers
--- tags: - translation --- ### opus-mt-en-bem * source languages: en * target languages: bem * OPUS readme: [en-bem](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bem/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bem/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bem/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bem/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.bem | 29.7 | 0.532 |
Helsinki-NLP/opus-mt-en-ber
2021-01-18T08:05:20.000Z
[ "pytorch", "marian", "seq2seq", "en", "ber", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
110
transformers
--- tags: - translation --- ### opus-mt-en-ber * source languages: en * target languages: ber * OPUS readme: [en-ber](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ber/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ber/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ber/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ber/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.ber | 29.7 | 0.544 |
Helsinki-NLP/opus-mt-en-bg
2021-01-18T08:05:27.000Z
[ "pytorch", "marian", "seq2seq", "en", "bg", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
627
transformers
--- language: - en - bg tags: - translation license: apache-2.0 --- ### eng-bul * source group: English * target group: Bulgarian * OPUS readme: [eng-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bul/README.md) * model: transformer * source language(s): eng * target language(s): bul bul_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.bul | 50.6 | 0.680 | ### System Info: - hf_name: eng-bul - source_languages: eng - target_languages: bul - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bul/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'bg'] - src_constituents: {'eng'} - tgt_constituents: {'bul', 'bul_Latn'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.test.txt - src_alpha3: eng - tgt_alpha3: bul - short_pair: en-bg - chrF2_score: 0.68 - bleu: 50.6 - brevity_penalty: 0.96 - ref_len: 69504.0 - src_name: English - tgt_name: Bulgarian - train_date: 2020-07-03 - src_alpha2: en - tgt_alpha2: bg - prefer_old: False - long_pair: eng-bul - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - 
transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-bi
2021-01-18T08:05:32.000Z
[ "pytorch", "marian", "seq2seq", "en", "bi", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
108
transformers
--- tags: - translation --- ### opus-mt-en-bi * source languages: en * target languages: bi * OPUS readme: [en-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bi/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bi/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bi/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.bi | 36.4 | 0.543 |
Helsinki-NLP/opus-mt-en-bnt
2021-01-18T08:05:36.000Z
[ "pytorch", "marian", "seq2seq", "en", "sn", "zu", "rw", "lg", "ts", "ln", "ny", "xh", "rn", "bnt", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
50
transformers
--- language: - en - sn - zu - rw - lg - ts - ln - ny - xh - rn - bnt tags: - translation license: apache-2.0 --- ### eng-bnt * source group: English * target group: Bantu languages * OPUS readme: [eng-bnt](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bnt/README.md) * model: transformer * source language(s): eng * target language(s): kin lin lug nya run sna swh toi_Latn tso umb xho zul * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.zip) * test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.test.txt) * test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-kin.eng.kin | 12.5 | 0.519 | | Tatoeba-test.eng-lin.eng.lin | 1.1 | 0.277 | | Tatoeba-test.eng-lug.eng.lug | 4.8 | 0.415 | | Tatoeba-test.eng.multi | 12.1 | 0.449 | | Tatoeba-test.eng-nya.eng.nya | 22.1 | 0.616 | | Tatoeba-test.eng-run.eng.run | 13.2 | 0.492 | | Tatoeba-test.eng-sna.eng.sna | 32.1 | 0.669 | | Tatoeba-test.eng-swa.eng.swa | 1.7 | 0.180 | | Tatoeba-test.eng-toi.eng.toi | 10.7 | 0.266 | | Tatoeba-test.eng-tso.eng.tso | 26.9 | 0.631 | | Tatoeba-test.eng-umb.eng.umb | 5.2 | 0.295 | | Tatoeba-test.eng-xho.eng.xho | 22.6 | 0.615 | | Tatoeba-test.eng-zul.eng.zul | 41.1 | 0.769 | ### System Info: - hf_name: eng-bnt - source_languages: eng - target_languages: bnt - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bnt/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 
'xh', 'rn', 'bnt'] - src_constituents: {'eng'} - tgt_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi_Latn', 'umb'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.test.txt - src_alpha3: eng - tgt_alpha3: bnt - short_pair: en-bnt - chrF2_score: 0.449 - bleu: 12.1 - brevity_penalty: 1.0 - ref_len: 9989.0 - src_name: English - tgt_name: Bantu languages - train_date: 2020-07-26 - src_alpha2: en - tgt_alpha2: bnt - prefer_old: False - long_pair: eng-bnt - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
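The benchmark tables are flattened onto one line in this dump, but their cell structure is still intact. A small sketch (function name ours) that pulls the `(testset, BLEU, chr-F)` rows back out, for instance to rank the target pairs of the eng-bnt card:

```python
import re

def parse_benchmarks(card_text: str):
    """Recover (testset, BLEU, chr-F) rows from the flattened markdown
    benchmark table of an opus-mt model card. The header and separator
    rows fail the numeric groups and are skipped automatically."""
    return [
        (m.group(1), float(m.group(2)), float(m.group(3)))
        for m in re.finditer(r"\| (\S+) \| ([\d.]+) \| ([\d.]+) \|", card_text)
    ]
```

`max(parse_benchmarks(card), key=lambda r: r[2])` would then single out Tatoeba-test.eng-zul.eng.zul (chr-F 0.769) as the strongest pair for eng-bnt.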
Helsinki-NLP/opus-mt-en-bzs
2021-01-18T08:05:42.000Z
[ "pytorch", "marian", "seq2seq", "en", "bzs", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
49
transformers
--- tags: - translation --- ### opus-mt-en-bzs * source languages: en * target languages: bzs * OPUS readme: [en-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bzs/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bzs/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bzs/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bzs/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.bzs | 43.4 | 0.612 |
Helsinki-NLP/opus-mt-en-ca
2021-01-18T08:05:47.000Z
[ "pytorch", "marian", "seq2seq", "en", "ca", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
401
transformers
--- tags: - translation --- ### opus-mt-en-ca * source languages: en * target languages: ca * OPUS readme: [en-ca](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ca/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.ca | 47.2 | 0.665 |
Helsinki-NLP/opus-mt-en-ceb
2021-01-18T08:05:53.000Z
[ "pytorch", "marian", "seq2seq", "en", "ceb", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
76
transformers
--- tags: - translation --- ### opus-mt-en-ceb * source languages: en * target languages: ceb * OPUS readme: [en-ceb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ceb/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ceb | 51.3 | 0.704 | | Tatoeba.en.ceb | 31.3 | 0.600 |
Helsinki-NLP/opus-mt-en-cel
2021-01-18T08:05:58.000Z
[ "pytorch", "marian", "seq2seq", "en", "gd", "ga", "br", "kw", "gv", "cy", "cel", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
60
transformers
--- language: - en - gd - ga - br - kw - gv - cy - cel tags: - translation license: apache-2.0 --- ### eng-cel * source group: English * target group: Celtic languages * OPUS readme: [eng-cel](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cel/README.md) * model: transformer * source language(s): eng * target language(s): bre cor cym gla gle glv * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-bre.eng.bre | 11.5 | 0.338 | | Tatoeba-test.eng-cor.eng.cor | 0.3 | 0.095 | | Tatoeba-test.eng-cym.eng.cym | 31.0 | 0.549 | | Tatoeba-test.eng-gla.eng.gla | 7.6 | 0.317 | | Tatoeba-test.eng-gle.eng.gle | 35.9 | 0.582 | | Tatoeba-test.eng-glv.eng.glv | 9.9 | 0.454 | | Tatoeba-test.eng.multi | 18.0 | 0.342 | ### System Info: - hf_name: eng-cel - source_languages: eng - target_languages: cel - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cel/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'gd', 'ga', 'br', 'kw', 'gv', 'cy', 'cel'] - src_constituents: {'eng'} - tgt_constituents: {'gla', 'gle', 'bre', 'cor', 'glv', 'cym'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.zip - url_test_set: 
https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: cel - short_pair: en-cel - chrF2_score: 0.342 - bleu: 18.0 - brevity_penalty: 0.9590000000000001 - ref_len: 45370.0 - src_name: English - tgt_name: Celtic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: cel - prefer_old: False - long_pair: eng-cel - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-chk
2021-01-18T08:06:04.000Z
[ "pytorch", "marian", "seq2seq", "en", "chk", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
70
transformers
--- tags: - translation --- ### opus-mt-en-chk * source languages: en * target languages: chk * OPUS readme: [en-chk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-chk/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.chk | 26.1 | 0.468 |
Helsinki-NLP/opus-mt-en-cpf
2021-01-18T08:06:10.000Z
[ "pytorch", "marian", "seq2seq", "en", "ht", "cpf", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
57
transformers
--- language: - en - ht - cpf tags: - translation license: apache-2.0 --- ### eng-cpf * source group: English * target group: Creoles and pidgins, French-based * OPUS readme: [eng-cpf](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpf/README.md) * model: transformer * source language(s): eng * target language(s): gcf_Latn hat mfe * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.zip) * test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.test.txt) * test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-gcf.eng.gcf | 6.2 | 0.262 | | Tatoeba-test.eng-hat.eng.hat | 25.7 | 0.451 | | Tatoeba-test.eng-mfe.eng.mfe | 80.1 | 0.900 | | Tatoeba-test.eng.multi | 15.9 | 0.354 | ### System Info: - hf_name: eng-cpf - source_languages: eng - target_languages: cpf - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpf/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'ht', 'cpf'] - src_constituents: {'eng'} - tgt_constituents: {'gcf_Latn', 'hat', 'mfe'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.test.txt - src_alpha3: eng - tgt_alpha3: cpf - short_pair: en-cpf - chrF2_score: 0.354 - bleu: 15.9 - brevity_penalty: 1.0 - ref_len: 1012.0 - src_name: English - 
tgt_name: Creoles and pidgins, French-based - train_date: 2020-07-26 - src_alpha2: en - tgt_alpha2: cpf - prefer_old: False - long_pair: eng-cpf - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
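The eng-cpf card above (like the other multilingual-target cards in this dump) notes that "a sentence initial language token is required in the form of `>>id<<`". A minimal sketch of that usage with the `transformers` MarianMT classes follows; the helper names `with_target_token` and `translate_to` are our own illustration, not part of the model card, and the example assumes `transformers` and `sentencepiece` are installed.

```python
def with_target_token(text: str, lang_id: str) -> str:
    """Prefix a source sentence with the >>id<< target-language token
    that multilingual-target OPUS-MT models expect."""
    return f">>{lang_id}<< {text}"


def translate_to(texts, lang_id, model_name="Helsinki-NLP/opus-mt-en-cpf"):
    """Translate English sentences into the given target language id
    (e.g. 'hat' for Haitian Creole) using a multilingual OPUS-MT model."""
    # Imported lazily so defining these helpers does not require the
    # (large) model weights to be downloaded.
    from transformers import MarianMTModel, MarianTokenizer

    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(
        [with_target_token(t, lang_id) for t in texts],
        return_tensors="pt",
        padding=True,
    )
    generated = model.generate(**batch)
    return [tokenizer.decode(ids, skip_special_tokens=True) for ids in generated]
```

For example, `translate_to(["How are you?"], "hat")` would request Haitian Creole output; any valid target-language ID listed in the card's `target language(s)` field can be substituted.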
Helsinki-NLP/opus-mt-en-cpp
2021-01-18T08:06:15.000Z
[ "pytorch", "marian", "seq2seq", "en", "id", "cpp", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
41
transformers
--- language: - en - id - cpp tags: - translation license: apache-2.0 --- ### eng-cpp * source group: English * target group: Creoles and pidgins, Portuguese-based * OPUS readme: [eng-cpp](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpp/README.md) * model: transformer * source language(s): eng * target language(s): ind max_Latn min pap tmw_Latn zlm_Latn zsm_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-msa.eng.msa | 32.6 | 0.573 | | Tatoeba-test.eng.multi | 32.7 | 0.574 | | Tatoeba-test.eng-pap.eng.pap | 42.5 | 0.633 | ### System Info: - hf_name: eng-cpp - source_languages: eng - target_languages: cpp - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpp/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'id', 'cpp'] - src_constituents: {'eng'} - tgt_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: cpp - short_pair: en-cpp - chrF2_score: 0.574 - bleu: 32.7 - 
brevity_penalty: 0.996 - ref_len: 34010.0 - src_name: English - tgt_name: Creoles and pidgins, Portuguese-based - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: cpp - prefer_old: False - long_pair: eng-cpp - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-crs
2021-01-18T08:06:19.000Z
[ "pytorch", "marian", "seq2seq", "en", "crs", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
69
transformers
--- tags: - translation --- ### opus-mt-en-crs * source languages: en * target languages: crs * OPUS readme: [en-crs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-crs/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-crs/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-crs/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-crs/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.crs | 45.2 | 0.617 |
Helsinki-NLP/opus-mt-en-cs
2021-01-18T08:06:24.000Z
[ "pytorch", "marian", "seq2seq", "en", "cs", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
494
transformers
--- tags: - translation --- ### opus-mt-en-cs * source languages: en * target languages: cs * OPUS readme: [en-cs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-cs/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009.en.cs | 22.8 | 0.507 | | news-test2008.en.cs | 20.7 | 0.485 | | newstest2009.en.cs | 21.8 | 0.500 | | newstest2010.en.cs | 22.1 | 0.505 | | newstest2011.en.cs | 23.2 | 0.507 | | newstest2012.en.cs | 20.8 | 0.482 | | newstest2013.en.cs | 24.7 | 0.514 | | newstest2015-encs.en.cs | 24.9 | 0.527 | | newstest2016-encs.en.cs | 26.7 | 0.540 | | newstest2017-encs.en.cs | 22.7 | 0.503 | | newstest2018-encs.en.cs | 22.9 | 0.504 | | newstest2019-encs.en.cs | 24.9 | 0.518 | | Tatoeba.en.cs | 46.1 | 0.647 |
Helsinki-NLP/opus-mt-en-cus
2021-01-18T08:06:29.000Z
[ "pytorch", "marian", "seq2seq", "en", "so", "cus", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
52
transformers
--- language: - en - so - cus tags: - translation license: apache-2.0 --- ### eng-cus * source group: English * target group: Cushitic languages * OPUS readme: [eng-cus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cus/README.md) * model: transformer * source language(s): eng * target language(s): som * model: transformer * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.multi | 16.0 | 0.173 | | Tatoeba-test.eng-som.eng.som | 16.0 | 0.173 | ### System Info: - hf_name: eng-cus - source_languages: eng - target_languages: cus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'so', 'cus'] - src_constituents: {'eng'} - tgt_constituents: {'som'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: cus - short_pair: en-cus - chrF2_score: 0.17300000000000001 - bleu: 16.0 - brevity_penalty: 1.0 - ref_len: 3.0 - src_name: English - tgt_name: Cushitic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: cus - prefer_old: False - long_pair: eng-cus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 
2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-cy
2021-01-18T08:06:33.000Z
[ "pytorch", "marian", "seq2seq", "en", "cy", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
103
transformers
--- tags: - translation --- ### opus-mt-en-cy * source languages: en * target languages: cy * OPUS readme: [en-cy](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-cy/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-cy/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cy/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cy/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.cy | 25.3 | 0.487 |
Helsinki-NLP/opus-mt-en-da
2021-01-18T08:06:38.000Z
[ "pytorch", "marian", "seq2seq", "en", "da", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
411
transformers
--- tags: - translation --- ### opus-mt-en-da * source languages: en * target languages: da * OPUS readme: [en-da](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-da/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-da/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-da/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-da/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.da | 60.4 | 0.745 |
Helsinki-NLP/opus-mt-en-de
2021-02-24T08:30:29.000Z
[ "pytorch", "tf", "rust", "marian", "seq2seq", "en", "de", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "rust_model.ot", "source.spm", "target.spm", "tf_model.h5", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
365,247
transformers
--- tags: - translation --- ### opus-mt-en-de * source languages: en * target languages: de * OPUS readme: [en-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.zip) * test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.test.txt) * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009.en.de | 23.5 | 0.540 | | news-test2008.en.de | 23.5 | 0.529 | | newstest2009.en.de | 22.3 | 0.530 | | newstest2010.en.de | 24.9 | 0.544 | | newstest2011.en.de | 22.5 | 0.524 | | newstest2012.en.de | 23.0 | 0.525 | | newstest2013.en.de | 26.9 | 0.553 | | newstest2015-ende.en.de | 31.1 | 0.594 | | newstest2016-ende.en.de | 37.0 | 0.636 | | newstest2017-ende.en.de | 29.9 | 0.586 | | newstest2018-ende.en.de | 45.2 | 0.690 | | newstest2019-ende.en.de | 40.9 | 0.654 | | Tatoeba.en.de | 47.3 | 0.664 |
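For single-target pairs such as opus-mt-en-de above, no `>>id<<` token is needed; the model translates directly from English to German. A minimal sketch follows, assuming `transformers` and `sentencepiece` are installed; the `batches` and `translate_en_de` helpers and the batch size are our own illustration, not part of the model card.

```python
def batches(items, size):
    """Yield successive fixed-size batches from a list, so long inputs
    are translated in chunks rather than all at once."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def translate_en_de(sentences, batch_size=8):
    """Translate a list of English sentences to German with opus-mt-en-de."""
    # Imported lazily so defining the helper does not require the
    # (large) model weights to be downloaded.
    from transformers import MarianMTModel, MarianTokenizer

    name = "Helsinki-NLP/opus-mt-en-de"
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)
    results = []
    for chunk in batches(sentences, batch_size):
        encoded = tokenizer(chunk, return_tensors="pt", padding=True)
        generated = model.generate(**encoded)
        results.extend(
            tokenizer.decode(ids, skip_special_tokens=True) for ids in generated
        )
    return results
```

For example, `translate_en_de(["The weather is nice today."])` returns a one-element list with the German translation.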
Helsinki-NLP/opus-mt-en-dra
2021-01-18T08:06:48.000Z
[ "pytorch", "marian", "seq2seq", "en", "ta", "kn", "ml", "te", "dra", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
163
transformers
--- language: - en - ta - kn - ml - te - dra tags: - translation license: apache-2.0 --- ### eng-dra * source group: English * target group: Dravidian languages * OPUS readme: [eng-dra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-dra/README.md) * model: transformer * source language(s): eng * target language(s): kan mal tam tel * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.zip) * test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.test.txt) * test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-kan.eng.kan | 4.7 | 0.348 | | Tatoeba-test.eng-mal.eng.mal | 13.1 | 0.515 | | Tatoeba-test.eng.multi | 10.7 | 0.463 | | Tatoeba-test.eng-tam.eng.tam | 9.0 | 0.444 | | Tatoeba-test.eng-tel.eng.tel | 7.1 | 0.363 | ### System Info: - hf_name: eng-dra - source_languages: eng - target_languages: dra - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-dra/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'ta', 'kn', 'ml', 'te', 'dra'] - src_constituents: {'eng'} - tgt_constituents: {'tam', 'kan', 'mal', 'tel'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.test.txt - src_alpha3: eng - tgt_alpha3: dra - short_pair: en-dra - chrF2_score: 
0.46299999999999997 - bleu: 10.7 - brevity_penalty: 1.0 - ref_len: 7928.0 - src_name: English - tgt_name: Dravidian languages - train_date: 2020-07-26 - src_alpha2: en - tgt_alpha2: dra - prefer_old: False - long_pair: eng-dra - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-ee
2021-01-18T08:06:53.000Z
[ "pytorch", "marian", "seq2seq", "en", "ee", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
98
transformers
--- tags: - translation --- ### opus-mt-en-ee * source languages: en * target languages: ee * OPUS readme: [en-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ee/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ee/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ee/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ee/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ee | 38.2 | 0.591 | | Tatoeba.en.ee | 6.0 | 0.347 |
Helsinki-NLP/opus-mt-en-efi
2021-01-18T08:06:59.000Z
[ "pytorch", "marian", "seq2seq", "en", "efi", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
72
transformers
--- tags: - translation --- ### opus-mt-en-efi * source languages: en * target languages: efi * OPUS readme: [en-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-efi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-efi/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-efi/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-efi/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.efi | 38.0 | 0.568 |
Helsinki-NLP/opus-mt-en-el
2021-01-18T08:07:04.000Z
[ "pytorch", "marian", "seq2seq", "en", "el", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
317
transformers
--- tags: - translation --- ### opus-mt-en-el * source languages: en * target languages: el * OPUS readme: [en-el](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-el/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-el/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-el/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-el/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.el | 56.4 | 0.745 |
Helsinki-NLP/opus-mt-en-eo
2021-01-18T08:07:08.000Z
[ "pytorch", "marian", "seq2seq", "en", "eo", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
97
transformers
--- tags: - translation --- ### opus-mt-en-eo * source languages: en * target languages: eo * OPUS readme: [en-eo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-eo/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-eo/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-eo/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-eo/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.eo | 49.5 | 0.682 |
Helsinki-NLP/opus-mt-en-es
2021-01-18T08:07:13.000Z
[ "pytorch", "marian", "seq2seq", "en", "es", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
10,844
transformers
--- language: - en - es tags: - translation license: apache-2.0 --- ### eng-spa * source group: English * target group: Spanish * OPUS readme: [eng-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md) * model: transformer * source language(s): eng * target language(s): spa * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip) * test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt) * test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-engspa.eng.spa | 31.0 | 0.583 | | news-test2008-engspa.eng.spa | 29.7 | 0.564 | | newstest2009-engspa.eng.spa | 30.2 | 0.578 | | newstest2010-engspa.eng.spa | 36.9 | 0.620 | | newstest2011-engspa.eng.spa | 38.2 | 0.619 | | newstest2012-engspa.eng.spa | 39.0 | 0.625 | | newstest2013-engspa.eng.spa | 35.0 | 0.598 | | Tatoeba-test.eng.spa | 54.9 | 0.721 | ### System Info: - hf_name: eng-spa - source_languages: eng - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'es'] - src_constituents: {'eng'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt - src_alpha3: eng - tgt_alpha3: spa - short_pair: en-es - chrF2_score: 0.721 - bleu: 54.9 - brevity_penalty: 0.978 - ref_len: 77311.0 - 
src_name: English - tgt_name: Spanish - train_date: 2020-08-18 00:00:00 - src_alpha2: en - tgt_alpha2: es - prefer_old: False - long_pair: eng-spa - helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82 - transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9 - port_machine: brutasse - port_time: 2020-08-24-18:20
Helsinki-NLP/opus-mt-en-et
2021-01-18T08:07:19.000Z
[ "pytorch", "marian", "seq2seq", "en", "et", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
362
transformers
--- tags: - translation --- ### opus-mt-en-et * source languages: en * target languages: et * OPUS readme: [en-et](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-et/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2018-enet.en.et | 21.8 | 0.540 | | newstest2018-enet.en.et | 23.3 | 0.556 | | Tatoeba.en.et | 54.0 | 0.717 |
Helsinki-NLP/opus-mt-en-eu
2021-01-18T08:07:25.000Z
[ "pytorch", "marian", "seq2seq", "en", "eu", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
76
transformers
--- language: - en - eu tags: - translation license: apache-2.0 --- ### eng-eus * source group: English * target group: Basque * OPUS readme: [eng-eus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-eus/README.md) * model: transformer-align * source language(s): eng * target language(s): eus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.eus | 31.8 | 0.590 | ### System Info: - hf_name: eng-eus - source_languages: eng - target_languages: eus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-eus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'eu'] - src_constituents: {'eng'} - tgt_constituents: {'eus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.test.txt - src_alpha3: eng - tgt_alpha3: eus - short_pair: en-eu - chrF2_score: 0.59 - bleu: 31.8 - brevity_penalty: 0.9440000000000001 - ref_len: 7080.0 - src_name: English - tgt_name: Basque - train_date: 2020-06-17 - src_alpha2: en - tgt_alpha2: eu - prefer_old: False - long_pair: eng-eus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 
2020-08-21-14:41
Helsinki-NLP/opus-mt-en-euq
2021-01-18T08:07:28.000Z
[ "pytorch", "marian", "seq2seq", "en", "euq", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
43
transformers
--- language: - en - euq tags: - translation license: apache-2.0 --- ### eng-euq * source group: English * target group: Basque (family) * OPUS readme: [eng-euq](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-euq/README.md) * model: transformer * source language(s): eng * target language(s): eus * model: transformer * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.zip) * test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.test.txt) * test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.eus | 27.9 | 0.555 | | Tatoeba-test.eng-eus.eng.eus | 27.9 | 0.555 | ### System Info: - hf_name: eng-euq - source_languages: eng - target_languages: euq - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-euq/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'euq'] - src_constituents: {'eng'} - tgt_constituents: {'eus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.test.txt - src_alpha3: eng - tgt_alpha3: euq - short_pair: en-euq - chrF2_score: 0.555 - bleu: 27.9 - brevity_penalty: 0.917 - ref_len: 7080.0 - src_name: English - tgt_name: Basque (family) - train_date: 2020-07-26 - src_alpha2: en - tgt_alpha2: euq - prefer_old: False - long_pair: eng-euq - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - 
port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-fi
2021-01-18T08:07:34.000Z
[ "pytorch", "marian", "seq2seq", "en", "fi", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
746
transformers
--- tags: - translation --- ### opus-mt-en-fi * source languages: en * target languages: fi * OPUS readme: [en-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fi/README.md) * dataset: opus+bt-news * model: transformer * pre-processing: normalization + SentencePiece * download original weights: [opus+bt-news-2020-03-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.zip) * test set translations: [opus+bt-news-2020-03-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.test.txt) * test set scores: [opus+bt-news-2020-03-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newstest2019-enfi.en.fi | 25.7 | 0.578 |
Helsinki-NLP/opus-mt-en-fiu
2021-01-18T08:07:39.000Z
[ "pytorch", "marian", "seq2seq", "en", "se", "fi", "hu", "et", "fiu", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
34
transformers
--- language: - en - se - fi - hu - et - fiu tags: - translation license: apache-2.0 --- ### eng-fiu * source group: English * target group: Finno-Ugrian languages * OPUS readme: [eng-fiu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-fiu/README.md) * model: transformer * source language(s): eng * target language(s): est fin fkv_Latn hun izh kpv krl liv_Latn mdf mhr myv sma sme udm vro * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2015-enfi-engfin.eng.fin | 18.7 | 0.522 | | newsdev2018-enet-engest.eng.est | 19.4 | 0.521 | | newssyscomb2009-enghun.eng.hun | 15.5 | 0.472 | | newstest2009-enghun.eng.hun | 15.4 | 0.468 | | newstest2015-enfi-engfin.eng.fin | 19.9 | 0.532 | | newstest2016-enfi-engfin.eng.fin | 21.1 | 0.544 | | newstest2017-enfi-engfin.eng.fin | 23.8 | 0.567 | | newstest2018-enet-engest.eng.est | 20.4 | 0.532 | | newstest2018-enfi-engfin.eng.fin | 15.6 | 0.498 | | newstest2019-enfi-engfin.eng.fin | 20.0 | 0.520 | | newstestB2016-enfi-engfin.eng.fin | 17.0 | 0.512 | | newstestB2017-enfi-engfin.eng.fin | 19.7 | 0.531 | | Tatoeba-test.eng-chm.eng.chm | 0.9 | 0.115 | | Tatoeba-test.eng-est.eng.est | 49.8 | 0.689 | | Tatoeba-test.eng-fin.eng.fin | 34.7 | 0.597 | | Tatoeba-test.eng-fkv.eng.fkv | 1.3 | 0.187 | | Tatoeba-test.eng-hun.eng.hun | 35.2 | 0.589 | | Tatoeba-test.eng-izh.eng.izh | 6.0 | 0.163 | | 
Tatoeba-test.eng-kom.eng.kom | 3.4 | 0.012 | | Tatoeba-test.eng-krl.eng.krl | 6.4 | 0.202 | | Tatoeba-test.eng-liv.eng.liv | 1.6 | 0.102 | | Tatoeba-test.eng-mdf.eng.mdf | 3.7 | 0.008 | | Tatoeba-test.eng.multi | 35.4 | 0.590 | | Tatoeba-test.eng-myv.eng.myv | 1.4 | 0.014 | | Tatoeba-test.eng-sma.eng.sma | 2.6 | 0.097 | | Tatoeba-test.eng-sme.eng.sme | 7.3 | 0.221 | | Tatoeba-test.eng-udm.eng.udm | 1.4 | 0.079 | ### System Info: - hf_name: eng-fiu - source_languages: eng - target_languages: fiu - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-fiu/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'se', 'fi', 'hu', 'et', 'fiu'] - src_constituents: {'eng'} - tgt_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: fiu - short_pair: en-fiu - chrF2_score: 0.59 - bleu: 35.4 - brevity_penalty: 0.9440000000000001 - ref_len: 59311.0 - src_name: English - tgt_name: Finno-Ugrian languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: fiu - prefer_old: False - long_pair: eng-fiu - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
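The multilingual cards above (eng-fiu, and likewise eng-gem, eng-gmq, eng-gmw, eng-grk below) note that "a sentence initial language token is required in the form of `>>id<<`". A minimal sketch of applying that prefix before translating, assuming the `transformers` library; the `with_target_token` helper is an illustrative wrapper of mine, not part of any official API, and the model download itself is only sketched in comments:

```python
# Multilingual OPUS-MT targets (e.g. Helsinki-NLP/opus-mt-en-fiu) expect each
# source sentence to begin with a target-language token of the form >>id<<,
# where id is one of the target language IDs listed on the card
# (fin, est, hun, ...). Hypothetical convenience helper:

def with_target_token(text: str, lang_id: str) -> str:
    """Prepend the sentence-initial >>id<< token the multilingual models require."""
    return f">>{lang_id}<< {text}"

src = with_target_token("How are you?", "fin")
print(src)  # >>fin<< How are you?

# The translation step needs a model download, so it is only sketched here:
# from transformers import MarianMTModel, MarianTokenizer
# tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fiu")
# model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-fiu")
# batch = tok([src], return_tensors="pt", padding=True)
# print(tok.decode(model.generate(**batch)[0], skip_special_tokens=True))
```

Omitting the token (or using an ID outside the card's target list) leaves the model free to emit any of its target languages.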
Helsinki-NLP/opus-mt-en-fj
2021-01-18T08:07:44.000Z
[ "pytorch", "marian", "seq2seq", "en", "fj", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
47
transformers
--- tags: - translation --- ### opus-mt-en-fj * source languages: en * target languages: fj * OPUS readme: [en-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fj/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fj/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fj/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fj/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.fj | 34.0 | 0.561 | | Tatoeba.en.fj | 62.5 | 0.781 |
Helsinki-NLP/opus-mt-en-fr
2021-01-18T08:07:49.000Z
[ "pytorch", "marian", "seq2seq", "en", "fr", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
99,239
transformers
--- tags: - translation --- ### opus-mt-en-fr * source languages: en * target languages: fr * OPUS readme: [en-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fr/opus-2020-02-26.zip) * test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr/opus-2020-02-26.test.txt) * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr/opus-2020-02-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdiscussdev2015-enfr.en.fr | 33.8 | 0.602 | | newsdiscusstest2015-enfr.en.fr | 40.0 | 0.643 | | newssyscomb2009.en.fr | 29.8 | 0.584 | | news-test2008.en.fr | 27.5 | 0.554 | | newstest2009.en.fr | 29.4 | 0.577 | | newstest2010.en.fr | 32.7 | 0.596 | | newstest2011.en.fr | 34.3 | 0.611 | | newstest2012.en.fr | 31.8 | 0.592 | | newstest2013.en.fr | 33.2 | 0.589 | | Tatoeba.en.fr | 50.5 | 0.672 |
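The weight, test-set, and score links in the older OPUS-MT cards (like the en-fr card above) all follow one naming scheme: `https://object.pouta.csc.fi/OPUS-MT-models/<pair>/<release>.<ext>`, while the Tatoeba-Challenge cards use `Tatoeba-MT-models` as the bucket instead. A small sketch of that scheme; the helper name is illustrative, not an official API:

```python
# Artifact URLs in the classic OPUS-MT cards are fully determined by the
# language pair and the dated release name.
BASE = "https://object.pouta.csc.fi/OPUS-MT-models"

def opus_mt_url(pair: str, release: str, ext: str = "zip") -> str:
    """Build the artifact URL for a language pair and dated release."""
    return f"{BASE}/{pair}/{release}.{ext}"

# Reproduces the links from the en-fr card above:
weights = opus_mt_url("en-fr", "opus-2020-02-26")            # original weights
scores = opus_mt_url("en-fr", "opus-2020-02-26", "eval.txt")  # test set scores
print(weights)
print(scores)
```

The same pattern recovers the `.test.txt` translation files, which is convenient when scripting downloads across many pairs.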
Helsinki-NLP/opus-mt-en-ga
2021-01-18T08:07:56.000Z
[ "pytorch", "marian", "seq2seq", "en", "ga", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
408
transformers
--- language: - en - ga tags: - translation license: apache-2.0 --- ### eng-gle * source group: English * target group: Irish * OPUS readme: [eng-gle](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gle/README.md) * model: transformer-align * source language(s): eng * target language(s): gle * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.gle | 37.5 | 0.593 | ### System Info: - hf_name: eng-gle - source_languages: eng - target_languages: gle - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gle/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'ga'] - src_constituents: {'eng'} - tgt_constituents: {'gle'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.test.txt - src_alpha3: eng - tgt_alpha3: gle - short_pair: en-ga - chrF2_score: 0.593 - bleu: 37.5 - brevity_penalty: 1.0 - ref_len: 12200.0 - src_name: English - tgt_name: Irish - train_date: 2020-06-17 - src_alpha2: en - tgt_alpha2: ga - prefer_old: False - long_pair: eng-gle - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-gaa
2021-01-18T08:08:01.000Z
[ "pytorch", "marian", "seq2seq", "en", "gaa", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
90
transformers
--- tags: - translation --- ### opus-mt-en-gaa * source languages: en * target languages: gaa * OPUS readme: [en-gaa](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gaa/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.gaa | 39.9 | 0.593 |
Helsinki-NLP/opus-mt-en-gem
2021-01-18T08:08:05.000Z
[ "pytorch", "marian", "seq2seq", "en", "da", "sv", "af", "nn", "fy", "fo", "de", "nb", "nl", "is", "lb", "yi", "gem", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
39
transformers
--- language: - en - da - sv - af - nn - fy - fo - de - nb - nl - is - lb - yi - gem tags: - translation license: apache-2.0 --- ### eng-gem * source group: English * target group: Germanic languages * OPUS readme: [eng-gem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gem/README.md) * model: transformer * source language(s): eng * target language(s): afr ang_Latn dan deu enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-engdeu.eng.deu | 20.9 | 0.521 | | news-test2008-engdeu.eng.deu | 21.1 | 0.511 | | newstest2009-engdeu.eng.deu | 20.5 | 0.516 | | newstest2010-engdeu.eng.deu | 22.5 | 0.526 | | newstest2011-engdeu.eng.deu | 20.5 | 0.508 | | newstest2012-engdeu.eng.deu | 20.8 | 0.507 | | newstest2013-engdeu.eng.deu | 24.6 | 0.534 | | newstest2015-ende-engdeu.eng.deu | 27.9 | 0.569 | | newstest2016-ende-engdeu.eng.deu | 33.2 | 0.607 | | newstest2017-ende-engdeu.eng.deu | 26.5 | 0.560 | | newstest2018-ende-engdeu.eng.deu | 39.4 | 0.648 | | newstest2019-ende-engdeu.eng.deu | 35.0 | 0.613 | | Tatoeba-test.eng-afr.eng.afr | 56.5 | 0.745 | | Tatoeba-test.eng-ang.eng.ang | 6.7 | 0.154 | | Tatoeba-test.eng-dan.eng.dan | 58.0 | 0.726 | | Tatoeba-test.eng-deu.eng.deu | 40.3 | 0.615 | | Tatoeba-test.eng-enm.eng.enm | 1.4 
| 0.215 | | Tatoeba-test.eng-fao.eng.fao | 7.2 | 0.304 | | Tatoeba-test.eng-frr.eng.frr | 5.5 | 0.159 | | Tatoeba-test.eng-fry.eng.fry | 19.4 | 0.433 | | Tatoeba-test.eng-gos.eng.gos | 1.0 | 0.182 | | Tatoeba-test.eng-got.eng.got | 0.3 | 0.012 | | Tatoeba-test.eng-gsw.eng.gsw | 0.9 | 0.130 | | Tatoeba-test.eng-isl.eng.isl | 23.4 | 0.505 | | Tatoeba-test.eng-ksh.eng.ksh | 1.1 | 0.141 | | Tatoeba-test.eng-ltz.eng.ltz | 20.3 | 0.379 | | Tatoeba-test.eng.multi | 46.5 | 0.641 | | Tatoeba-test.eng-nds.eng.nds | 20.6 | 0.458 | | Tatoeba-test.eng-nld.eng.nld | 53.4 | 0.702 | | Tatoeba-test.eng-non.eng.non | 0.6 | 0.166 | | Tatoeba-test.eng-nor.eng.nor | 50.3 | 0.679 | | Tatoeba-test.eng-pdc.eng.pdc | 3.9 | 0.189 | | Tatoeba-test.eng-sco.eng.sco | 33.0 | 0.542 | | Tatoeba-test.eng-stq.eng.stq | 2.3 | 0.274 | | Tatoeba-test.eng-swe.eng.swe | 57.9 | 0.719 | | Tatoeba-test.eng-swg.eng.swg | 1.2 | 0.171 | | Tatoeba-test.eng-yid.eng.yid | 7.2 | 0.304 | ### System Info: - hf_name: eng-gem - source_languages: eng - target_languages: gem - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gem/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'lb', 'yi', 'gem'] - src_constituents: {'eng'} - tgt_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: gem - short_pair: en-gem - chrF2_score: 0.6409999999999999 - bleu: 46.5 - brevity_penalty: 0.9790000000000001 
- ref_len: 73328.0 - src_name: English - tgt_name: Germanic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: gem - prefer_old: False - long_pair: eng-gem - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-gil
2021-01-18T08:08:11.000Z
[ "pytorch", "marian", "seq2seq", "en", "gil", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
62
transformers
--- tags: - translation --- ### opus-mt-en-gil * source languages: en * target languages: gil * OPUS readme: [en-gil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gil/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gil/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gil/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gil/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.gil | 38.8 | 0.604 |
Helsinki-NLP/opus-mt-en-gl
2021-01-18T08:08:16.000Z
[ "pytorch", "marian", "seq2seq", "en", "gl", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
107
transformers
--- tags: - translation --- ### opus-mt-en-gl * source languages: en * target languages: gl * OPUS readme: [en-gl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gl/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gl/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gl/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gl/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.gl | 36.4 | 0.572 |
Helsinki-NLP/opus-mt-en-gmq
2021-01-18T08:08:21.000Z
[ "pytorch", "marian", "seq2seq", "en", "da", "nb", "sv", "is", "nn", "fo", "gmq", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
27
transformers
--- language: - en - da - nb - sv - is - nn - fo - gmq tags: - translation license: apache-2.0 --- ### eng-gmq * source group: English * target group: North Germanic languages * OPUS readme: [eng-gmq](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmq/README.md) * model: transformer * source language(s): eng * target language(s): dan fao isl nno nob nob_Hebr non_Latn swe * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-dan.eng.dan | 57.7 | 0.724 | | Tatoeba-test.eng-fao.eng.fao | 9.2 | 0.322 | | Tatoeba-test.eng-isl.eng.isl | 23.8 | 0.506 | | Tatoeba-test.eng.multi | 52.8 | 0.688 | | Tatoeba-test.eng-non.eng.non | 0.7 | 0.196 | | Tatoeba-test.eng-nor.eng.nor | 50.3 | 0.678 | | Tatoeba-test.eng-swe.eng.swe | 57.8 | 0.717 | ### System Info: - hf_name: eng-gmq - source_languages: eng - target_languages: gmq - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmq/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq'] - src_constituents: {'eng'} - tgt_constituents: {'dan', 'nob', 'nob_Hebr', 'swe', 'isl', 'nno', 'non_Latn', 'fao'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: 
https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: gmq - short_pair: en-gmq - chrF2_score: 0.688 - bleu: 52.8 - brevity_penalty: 0.973 - ref_len: 71881.0 - src_name: English - tgt_name: North Germanic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: gmq - prefer_old: False - long_pair: eng-gmq - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-gmw
2021-01-18T08:08:26.000Z
[ "pytorch", "marian", "seq2seq", "en", "nl", "lb", "af", "de", "fy", "yi", "gmw", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
36
transformers
--- language: - en - nl - lb - af - de - fy - yi - gmw tags: - translation license: apache-2.0 --- ### eng-gmw * source group: English * target group: West Germanic languages * OPUS readme: [eng-gmw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmw/README.md) * model: transformer * source language(s): eng * target language(s): afr ang_Latn deu enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-engdeu.eng.deu | 21.4 | 0.518 | | news-test2008-engdeu.eng.deu | 21.0 | 0.510 | | newstest2009-engdeu.eng.deu | 20.4 | 0.513 | | newstest2010-engdeu.eng.deu | 22.9 | 0.528 | | newstest2011-engdeu.eng.deu | 20.5 | 0.508 | | newstest2012-engdeu.eng.deu | 21.0 | 0.507 | | newstest2013-engdeu.eng.deu | 24.7 | 0.533 | | newstest2015-ende-engdeu.eng.deu | 28.2 | 0.568 | | newstest2016-ende-engdeu.eng.deu | 33.3 | 0.605 | | newstest2017-ende-engdeu.eng.deu | 26.5 | 0.559 | | newstest2018-ende-engdeu.eng.deu | 39.9 | 0.649 | | newstest2019-ende-engdeu.eng.deu | 35.9 | 0.616 | | Tatoeba-test.eng-afr.eng.afr | 55.7 | 0.740 | | Tatoeba-test.eng-ang.eng.ang | 6.5 | 0.164 | | Tatoeba-test.eng-deu.eng.deu | 40.4 | 0.614 | | Tatoeba-test.eng-enm.eng.enm | 2.3 | 0.254 | | Tatoeba-test.eng-frr.eng.frr | 8.4 | 0.248 | | Tatoeba-test.eng-fry.eng.fry | 17.9 | 0.424 | | 
Tatoeba-test.eng-gos.eng.gos | 2.2 | 0.309 | | Tatoeba-test.eng-gsw.eng.gsw | 1.6 | 0.186 | | Tatoeba-test.eng-ksh.eng.ksh | 1.5 | 0.189 | | Tatoeba-test.eng-ltz.eng.ltz | 20.2 | 0.383 | | Tatoeba-test.eng.multi | 41.6 | 0.609 | | Tatoeba-test.eng-nds.eng.nds | 18.9 | 0.437 | | Tatoeba-test.eng-nld.eng.nld | 53.1 | 0.699 | | Tatoeba-test.eng-pdc.eng.pdc | 7.7 | 0.262 | | Tatoeba-test.eng-sco.eng.sco | 37.7 | 0.557 | | Tatoeba-test.eng-stq.eng.stq | 5.9 | 0.380 | | Tatoeba-test.eng-swg.eng.swg | 6.2 | 0.236 | | Tatoeba-test.eng-yid.eng.yid | 6.8 | 0.296 | ### System Info: - hf_name: eng-gmw - source_languages: eng - target_languages: gmw - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmw/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'nl', 'lb', 'af', 'de', 'fy', 'yi', 'gmw'] - src_constituents: {'eng'} - tgt_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: gmw - short_pair: en-gmw - chrF2_score: 0.609 - bleu: 41.6 - brevity_penalty: 0.9890000000000001 - ref_len: 74922.0 - src_name: English - tgt_name: West Germanic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: gmw - prefer_old: False - long_pair: eng-gmw - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-grk
2021-01-18T08:08:31.000Z
[ "pytorch", "marian", "seq2seq", "en", "el", "grk", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
64
transformers
--- language: - en - el - grk tags: - translation license: apache-2.0 --- ### eng-grk * source group: English * target group: Greek languages * OPUS readme: [eng-grk](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-grk/README.md) * model: transformer * source language(s): eng * target language(s): ell grc_Grek * model: transformer * pre-processing: normalization + SentencePiece (spm12k,spm12k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-ell.eng.ell | 53.8 | 0.723 | | Tatoeba-test.eng-grc.eng.grc | 0.1 | 0.102 | | Tatoeba-test.eng.multi | 45.6 | 0.677 | ### System Info: - hf_name: eng-grk - source_languages: eng - target_languages: grk - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-grk/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'el', 'grk'] - src_constituents: {'eng'} - tgt_constituents: {'grc_Grek', 'ell'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: grk - short_pair: en-grk - chrF2_score: 0.677 - bleu: 45.6 - brevity_penalty: 1.0 - ref_len: 59951.0 - src_name: English - tgt_name: Greek languages - train_date: 2020-08-01 - 
src_alpha2: en - tgt_alpha2: grk - prefer_old: False - long_pair: eng-grk - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-guw
2021-01-18T08:08:36.000Z
[ "pytorch", "marian", "seq2seq", "en", "guw", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
77
transformers
--- tags: - translation --- ### opus-mt-en-guw * source languages: en * target languages: guw * OPUS readme: [en-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-guw/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.guw | 45.7 | 0.634 |
Helsinki-NLP/opus-mt-en-gv
2021-01-18T08:08:40.000Z
[ "pytorch", "marian", "seq2seq", "en", "gv", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
74
transformers
--- tags: - translation --- ### opus-mt-en-gv * source languages: en * target languages: gv * OPUS readme: [en-gv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | bible-uedin.en.gv | 70.1 | 0.885 |
Helsinki-NLP/opus-mt-en-ha
2021-01-18T08:08:46.000Z
[ "pytorch", "marian", "seq2seq", "en", "ha", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
76
transformers
--- tags: - translation --- ### opus-mt-en-ha * source languages: en * target languages: ha * OPUS readme: [en-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ha/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ha | 34.1 | 0.544 | | Tatoeba.en.ha | 17.6 | 0.498 |
Helsinki-NLP/opus-mt-en-he
2021-01-18T08:08:52.000Z
[ "pytorch", "marian", "seq2seq", "en", "he", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
345
transformers
--- tags: - translation --- ### opus-mt-en-he * source languages: en * target languages: he * OPUS readme: [en-he](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-he/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.he | 40.1 | 0.609 |
Helsinki-NLP/opus-mt-en-hi
2021-03-02T16:17:47.000Z
[ "pytorch", "rust", "marian", "seq2seq", "en", "hi", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "rust_model.ot", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
1,009
transformers
--- language: - en - hi tags: - translation license: apache-2.0 --- ### eng-hin * source group: English * target group: Hindi * OPUS readme: [eng-hin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hin/README.md) * model: transformer-align * source language(s): eng * target language(s): hin * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014.eng.hin | 6.9 | 0.296 | | newstest2014-hien.eng.hin | 9.9 | 0.323 | | Tatoeba-test.eng.hin | 16.1 | 0.447 | ### System Info: - hf_name: eng-hin - source_languages: eng - target_languages: hin - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hin/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'hi'] - src_constituents: {'eng'} - tgt_constituents: {'hin'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.test.txt - src_alpha3: eng - tgt_alpha3: hin - short_pair: en-hi - chrF2_score: 0.447 - bleu: 16.1 - brevity_penalty: 1.0 - ref_len: 32904.0 - src_name: English - tgt_name: Hindi - train_date: 2020-06-17 - src_alpha2: en - tgt_alpha2: hi - prefer_old: False - long_pair: eng-hin - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 
2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
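The checkpoint ids listed in these records follow a uniform naming scheme: `Helsinki-NLP/opus-mt-{src}-{tgt}`, where the codes are the short language (or language-group) tags from each card. A minimal sketch of constructing such an id; the helper name is illustrative, not part of any library:

```python
def opus_mt_repo_id(src: str, tgt: str) -> str:
    """Build the Hugging Face Hub id of an OPUS-MT checkpoint.

    `src` and `tgt` are the short codes used in the model ids above,
    e.g. "en" and "hi", or a group code such as "inc" or "ine".
    """
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

print(opus_mt_repo_id("en", "hi"))  # -> Helsinki-NLP/opus-mt-en-hi
```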
Helsinki-NLP/opus-mt-en-hil
2021-01-18T08:09:02.000Z
[ "pytorch", "marian", "seq2seq", "en", "hil", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
60
transformers
--- tags: - translation --- ### opus-mt-en-hil * source languages: en * target languages: hil * OPUS readme: [en-hil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-hil/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-hil/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hil/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hil/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.hil | 49.4 | 0.696 |
Helsinki-NLP/opus-mt-en-ho
2021-01-18T08:09:06.000Z
[ "pytorch", "marian", "seq2seq", "en", "ho", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
55
transformers
--- tags: - translation --- ### opus-mt-en-ho * source languages: en * target languages: ho * OPUS readme: [en-ho](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ho/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ho/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ho/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ho/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ho | 33.9 | 0.563 |
Helsinki-NLP/opus-mt-en-ht
2021-01-18T08:09:12.000Z
[ "pytorch", "marian", "seq2seq", "en", "ht", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
77
transformers
--- tags: - translation --- ### opus-mt-en-ht * source languages: en * target languages: ht * OPUS readme: [en-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ht/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ht | 38.3 | 0.545 | | Tatoeba.en.ht | 45.2 | 0.592 |
Helsinki-NLP/opus-mt-en-hu
2021-01-18T08:09:17.000Z
[ "pytorch", "marian", "seq2seq", "en", "hu", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
623
transformers
--- tags: - translation --- ### opus-mt-en-hu * source languages: en * target languages: hu * OPUS readme: [en-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-hu/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.hu | 40.1 | 0.628 |
Helsinki-NLP/opus-mt-en-hy
2021-01-18T08:09:21.000Z
[ "pytorch", "marian", "seq2seq", "en", "hy", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
73
transformers
--- language: - en - hy tags: - translation license: apache-2.0 --- ### eng-hye * source group: English * target group: Armenian * OPUS readme: [eng-hye](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hye/README.md) * model: transformer-align * source language(s): eng * target language(s): hye * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.hye | 16.6 | 0.404 | ### System Info: - hf_name: eng-hye - source_languages: eng - target_languages: hye - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hye/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'hy'] - src_constituents: {'eng'} - tgt_constituents: {'hye', 'hye_Latn'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.test.txt - src_alpha3: eng - tgt_alpha3: hye - short_pair: en-hy - chrF2_score: 0.40399999999999997 - bleu: 16.6 - brevity_penalty: 1.0 - ref_len: 5115.0 - src_name: English - tgt_name: Armenian - train_date: 2020-06-16 - src_alpha2: en - tgt_alpha2: hy - prefer_old: False - long_pair: eng-hye - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 
2020-08-21-14:41
Helsinki-NLP/opus-mt-en-id
2021-01-18T08:09:27.000Z
[ "pytorch", "marian", "seq2seq", "en", "id", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
279
transformers
--- tags: - translation --- ### opus-mt-en-id * source languages: en * target languages: id * OPUS readme: [en-id](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-id/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-id/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-id/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-id/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.id | 38.3 | 0.636 |
Helsinki-NLP/opus-mt-en-ig
2021-01-18T08:09:32.000Z
[ "pytorch", "marian", "seq2seq", "en", "ig", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
70
transformers
--- tags: - translation --- ### opus-mt-en-ig * source languages: en * target languages: ig * OPUS readme: [en-ig](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ig/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ig | 39.5 | 0.546 | | Tatoeba.en.ig | 3.8 | 0.297 |
Helsinki-NLP/opus-mt-en-iir
2021-01-18T08:09:38.000Z
[ "pytorch", "marian", "seq2seq", "en", "bn", "or", "gu", "mr", "ur", "hi", "ps", "os", "as", "si", "iir", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
24
transformers
--- language: - en - bn - or - gu - mr - ur - hi - ps - os - as - si - iir tags: - translation license: apache-2.0 --- ### eng-iir * source group: English * target group: Indo-Iranian languages * OPUS readme: [eng-iir](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-iir/README.md) * model: transformer * source language(s): eng * target language(s): asm awa ben bho gom guj hif_Latn hin jdt_Cyrl kur_Arab kur_Latn mai mar npi ori oss pan_Guru pes pes_Latn pes_Thaa pnb pus rom san_Deva sin snd_Arab tgk_Cyrl tly_Latn urd zza * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014-enghin.eng.hin | 6.7 | 0.326 | | newsdev2019-engu-engguj.eng.guj | 6.0 | 0.283 | | newstest2014-hien-enghin.eng.hin | 10.4 | 0.353 | | newstest2019-engu-engguj.eng.guj | 6.6 | 0.282 | | Tatoeba-test.eng-asm.eng.asm | 2.7 | 0.249 | | Tatoeba-test.eng-awa.eng.awa | 0.4 | 0.122 | | Tatoeba-test.eng-ben.eng.ben | 15.3 | 0.459 | | Tatoeba-test.eng-bho.eng.bho | 3.7 | 0.161 | | Tatoeba-test.eng-fas.eng.fas | 3.4 | 0.227 | | Tatoeba-test.eng-guj.eng.guj | 18.5 | 0.365 | | Tatoeba-test.eng-hif.eng.hif | 1.0 | 0.064 | | Tatoeba-test.eng-hin.eng.hin | 17.0 | 0.461 | | Tatoeba-test.eng-jdt.eng.jdt | 3.9 | 0.122 | | Tatoeba-test.eng-kok.eng.kok | 5.5 | 0.059 | | Tatoeba-test.eng-kur.eng.kur | 4.0 | 0.125 | | Tatoeba-test.eng-lah.eng.lah | 0.3 | 0.008 | | 
Tatoeba-test.eng-mai.eng.mai | 9.3 | 0.445 | | Tatoeba-test.eng-mar.eng.mar | 20.7 | 0.473 | | Tatoeba-test.eng.multi | 13.7 | 0.392 | | Tatoeba-test.eng-nep.eng.nep | 0.6 | 0.060 | | Tatoeba-test.eng-ori.eng.ori | 2.4 | 0.193 | | Tatoeba-test.eng-oss.eng.oss | 2.1 | 0.174 | | Tatoeba-test.eng-pan.eng.pan | 9.7 | 0.355 | | Tatoeba-test.eng-pus.eng.pus | 1.0 | 0.126 | | Tatoeba-test.eng-rom.eng.rom | 1.3 | 0.230 | | Tatoeba-test.eng-san.eng.san | 1.3 | 0.101 | | Tatoeba-test.eng-sin.eng.sin | 11.7 | 0.384 | | Tatoeba-test.eng-snd.eng.snd | 2.8 | 0.180 | | Tatoeba-test.eng-tgk.eng.tgk | 8.1 | 0.353 | | Tatoeba-test.eng-tly.eng.tly | 0.5 | 0.015 | | Tatoeba-test.eng-urd.eng.urd | 12.3 | 0.409 | | Tatoeba-test.eng-zza.eng.zza | 0.5 | 0.025 | ### System Info: - hf_name: eng-iir - source_languages: eng - target_languages: iir - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-iir/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir'] - src_constituents: {'eng'} - tgt_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur_Arab', 'tgk_Cyrl', 'hin', 'kur_Latn', 'pes_Thaa', 'pus', 'san_Deva', 'oss', 'tly_Latn', 'jdt_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes_Latn', 'awa', 'sin'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: iir - short_pair: en-iir - chrF2_score: 0.392 - bleu: 13.7 - brevity_penalty: 1.0 - ref_len: 63351.0 - src_name: English - tgt_name: Indo-Iranian languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: iir - prefer_old: False - long_pair: eng-iir - helsinki_git_sha: 
480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
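The multilingual cards above note that "a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)". A minimal sketch of prefixing source text accordingly before tokenization; `add_target_token` is a hypothetical helper, not a transformers API:

```python
def add_target_token(text: str, target_id: str) -> str:
    """Prefix a source sentence with the >>id<< token that multilingual
    OPUS-MT models (e.g. eng-iir, eng-inc) use to select the target language."""
    return f">>{target_id}<< {text}"

# e.g. request Hindi output from a multilingual English->Indo-Iranian model:
print(add_target_token("How are you?", "hin"))  # -> >>hin<< How are you?
```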
Helsinki-NLP/opus-mt-en-ilo
2021-01-18T08:09:43.000Z
[ "pytorch", "marian", "seq2seq", "en", "ilo", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
68
transformers
--- tags: - translation --- ### opus-mt-en-ilo * source languages: en * target languages: ilo * OPUS readme: [en-ilo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ilo/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ilo/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ilo/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ilo/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.ilo | 33.2 | 0.584 |
Helsinki-NLP/opus-mt-en-inc
2021-01-18T08:09:48.000Z
[ "pytorch", "marian", "seq2seq", "en", "bn", "or", "gu", "mr", "ur", "hi", "as", "si", "inc", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
27
transformers
--- language: - en - bn - or - gu - mr - ur - hi - as - si - inc tags: - translation license: apache-2.0 --- ### eng-inc * source group: English * target group: Indic languages * OPUS readme: [eng-inc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-inc/README.md) * model: transformer * source language(s): eng * target language(s): asm awa ben bho gom guj hif_Latn hin mai mar npi ori pan_Guru pnb rom san_Deva sin snd_Arab urd * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014-enghin.eng.hin | 8.2 | 0.342 | | newsdev2019-engu-engguj.eng.guj | 6.5 | 0.293 | | newstest2014-hien-enghin.eng.hin | 11.4 | 0.364 | | newstest2019-engu-engguj.eng.guj | 7.2 | 0.296 | | Tatoeba-test.eng-asm.eng.asm | 2.7 | 0.277 | | Tatoeba-test.eng-awa.eng.awa | 0.5 | 0.132 | | Tatoeba-test.eng-ben.eng.ben | 16.7 | 0.470 | | Tatoeba-test.eng-bho.eng.bho | 4.3 | 0.227 | | Tatoeba-test.eng-guj.eng.guj | 17.5 | 0.373 | | Tatoeba-test.eng-hif.eng.hif | 0.6 | 0.028 | | Tatoeba-test.eng-hin.eng.hin | 17.7 | 0.469 | | Tatoeba-test.eng-kok.eng.kok | 1.7 | 0.000 | | Tatoeba-test.eng-lah.eng.lah | 0.3 | 0.028 | | Tatoeba-test.eng-mai.eng.mai | 15.6 | 0.429 | | Tatoeba-test.eng-mar.eng.mar | 21.3 | 0.477 | | Tatoeba-test.eng.multi | 17.3 | 0.448 | | Tatoeba-test.eng-nep.eng.nep | 0.8 | 0.081 | | Tatoeba-test.eng-ori.eng.ori | 2.2 | 0.208 | | 
Tatoeba-test.eng-pan.eng.pan | 8.0 | 0.347 | | Tatoeba-test.eng-rom.eng.rom | 0.4 | 0.197 | | Tatoeba-test.eng-san.eng.san | 0.5 | 0.108 | | Tatoeba-test.eng-sin.eng.sin | 9.1 | 0.364 | | Tatoeba-test.eng-snd.eng.snd | 4.4 | 0.284 | | Tatoeba-test.eng-urd.eng.urd | 13.3 | 0.423 | ### System Info: - hf_name: eng-inc - source_languages: eng - target_languages: inc - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-inc/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc'] - src_constituents: {'eng'} - tgt_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: inc - short_pair: en-inc - chrF2_score: 0.44799999999999995 - bleu: 17.3 - brevity_penalty: 1.0 - ref_len: 59917.0 - src_name: English - tgt_name: Indic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: inc - prefer_old: False - long_pair: eng-inc - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-ine
2021-01-18T08:09:54.000Z
[ "pytorch", "marian", "seq2seq", "en", "ca", "es", "os", "ro", "fy", "cy", "sc", "is", "yi", "lb", "an", "sq", "fr", "ht", "rm", "ps", "af", "uk", "sl", "lt", "bg", "be", "gd", "si", "br", "mk", "or", "mr", "ru", "fo", "co", "oc", "pl", "gl", "nb", "bn", "id", "hy", "da", "gv", "nl", "pt", "hi", "as", "kw", "ga", "sv", "gu", "wa", "lv", "el", "it", "hr", "ur", "nn", "de", "cs", "ine", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
30
transformers
--- language: - en - ca - es - os - ro - fy - cy - sc - is - yi - lb - an - sq - fr - ht - rm - ps - af - uk - sl - lt - bg - be - gd - si - br - mk - or - mr - ru - fo - co - oc - pl - gl - nb - bn - id - hy - da - gv - nl - pt - hi - as - kw - ga - sv - gu - wa - lv - el - it - hr - ur - nn - de - cs - ine tags: - translation license: apache-2.0 --- ### eng-ine * source group: English * target group: Indo-European languages * OPUS readme: [eng-ine](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ine/README.md) * model: transformer * source language(s): eng * target language(s): afr aln ang_Latn arg asm ast awa bel bel_Latn ben bho bos_Latn bre bul bul_Latn cat ces cor cos csb_Latn cym dan deu dsb egl ell enm_Latn ext fao fra frm_Latn frr fry gcf_Latn gla gle glg glv gom gos got_Goth grc_Grek gsw guj hat hif_Latn hin hrv hsb hye ind isl ita jdt_Cyrl ksh kur_Arab kur_Latn lad lad_Latn lat_Latn lav lij lit lld_Latn lmo ltg ltz mai mar max_Latn mfe min mkd mwl nds nld nno nob nob_Hebr non_Latn npi oci ori orv_Cyrl oss pan_Guru pap pdc pes pes_Latn pes_Thaa pms pnb pol por prg_Latn pus roh rom ron rue rus san_Deva scn sco sgs sin slv snd_Arab spa sqi srp_Cyrl srp_Latn stq swe swg tgk_Cyrl tly_Latn tmw_Latn ukr urd vec wln yid zlm_Latn zsm_Latn zza * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014-enghin.eng.hin | 6.2 | 
0.317 | | newsdev2016-enro-engron.eng.ron | 22.1 | 0.525 | | newsdev2017-enlv-englav.eng.lav | 17.4 | 0.486 | | newsdev2019-engu-engguj.eng.guj | 6.5 | 0.303 | | newsdev2019-enlt-englit.eng.lit | 14.9 | 0.476 | | newsdiscussdev2015-enfr-engfra.eng.fra | 26.4 | 0.547 | | newsdiscusstest2015-enfr-engfra.eng.fra | 30.0 | 0.575 | | newssyscomb2009-engces.eng.ces | 14.7 | 0.442 | | newssyscomb2009-engdeu.eng.deu | 16.7 | 0.487 | | newssyscomb2009-engfra.eng.fra | 24.8 | 0.547 | | newssyscomb2009-engita.eng.ita | 25.2 | 0.562 | | newssyscomb2009-engspa.eng.spa | 27.0 | 0.554 | | news-test2008-engces.eng.ces | 13.0 | 0.417 | | news-test2008-engdeu.eng.deu | 17.4 | 0.480 | | news-test2008-engfra.eng.fra | 22.3 | 0.519 | | news-test2008-engspa.eng.spa | 24.9 | 0.532 | | newstest2009-engces.eng.ces | 13.6 | 0.432 | | newstest2009-engdeu.eng.deu | 16.6 | 0.482 | | newstest2009-engfra.eng.fra | 23.5 | 0.535 | | newstest2009-engita.eng.ita | 25.5 | 0.561 | | newstest2009-engspa.eng.spa | 26.3 | 0.551 | | newstest2010-engces.eng.ces | 14.2 | 0.436 | | newstest2010-engdeu.eng.deu | 18.3 | 0.492 | | newstest2010-engfra.eng.fra | 25.7 | 0.550 | | newstest2010-engspa.eng.spa | 30.5 | 0.578 | | newstest2011-engces.eng.ces | 15.1 | 0.439 | | newstest2011-engdeu.eng.deu | 17.1 | 0.478 | | newstest2011-engfra.eng.fra | 28.0 | 0.569 | | newstest2011-engspa.eng.spa | 31.9 | 0.580 | | newstest2012-engces.eng.ces | 13.6 | 0.418 | | newstest2012-engdeu.eng.deu | 17.0 | 0.475 | | newstest2012-engfra.eng.fra | 26.1 | 0.553 | | newstest2012-engrus.eng.rus | 21.4 | 0.506 | | newstest2012-engspa.eng.spa | 31.4 | 0.577 | | newstest2013-engces.eng.ces | 15.3 | 0.438 | | newstest2013-engdeu.eng.deu | 20.3 | 0.501 | | newstest2013-engfra.eng.fra | 26.0 | 0.540 | | newstest2013-engrus.eng.rus | 16.1 | 0.449 | | newstest2013-engspa.eng.spa | 28.6 | 0.555 | | newstest2014-hien-enghin.eng.hin | 9.5 | 0.344 | | newstest2015-encs-engces.eng.ces | 14.8 | 0.440 | | newstest2015-ende-engdeu.eng.deu | 22.6 | 
0.523 | | newstest2015-enru-engrus.eng.rus | 18.8 | 0.483 | | newstest2016-encs-engces.eng.ces | 16.8 | 0.457 | | newstest2016-ende-engdeu.eng.deu | 26.2 | 0.555 | | newstest2016-enro-engron.eng.ron | 21.2 | 0.510 | | newstest2016-enru-engrus.eng.rus | 17.6 | 0.471 | | newstest2017-encs-engces.eng.ces | 13.6 | 0.421 | | newstest2017-ende-engdeu.eng.deu | 21.5 | 0.516 | | newstest2017-enlv-englav.eng.lav | 13.0 | 0.452 | | newstest2017-enru-engrus.eng.rus | 18.7 | 0.486 | | newstest2018-encs-engces.eng.ces | 13.5 | 0.425 | | newstest2018-ende-engdeu.eng.deu | 29.8 | 0.581 | | newstest2018-enru-engrus.eng.rus | 16.1 | 0.472 | | newstest2019-encs-engces.eng.ces | 14.8 | 0.435 | | newstest2019-ende-engdeu.eng.deu | 26.6 | 0.554 | | newstest2019-engu-engguj.eng.guj | 6.9 | 0.313 | | newstest2019-enlt-englit.eng.lit | 10.6 | 0.429 | | newstest2019-enru-engrus.eng.rus | 17.5 | 0.452 | | Tatoeba-test.eng-afr.eng.afr | 52.1 | 0.708 | | Tatoeba-test.eng-ang.eng.ang | 5.1 | 0.131 | | Tatoeba-test.eng-arg.eng.arg | 1.2 | 0.099 | | Tatoeba-test.eng-asm.eng.asm | 2.9 | 0.259 | | Tatoeba-test.eng-ast.eng.ast | 14.1 | 0.408 | | Tatoeba-test.eng-awa.eng.awa | 0.3 | 0.002 | | Tatoeba-test.eng-bel.eng.bel | 18.1 | 0.450 | | Tatoeba-test.eng-ben.eng.ben | 13.5 | 0.432 | | Tatoeba-test.eng-bho.eng.bho | 0.3 | 0.003 | | Tatoeba-test.eng-bre.eng.bre | 10.4 | 0.318 | | Tatoeba-test.eng-bul.eng.bul | 38.7 | 0.592 | | Tatoeba-test.eng-cat.eng.cat | 42.0 | 0.633 | | Tatoeba-test.eng-ces.eng.ces | 32.3 | 0.546 | | Tatoeba-test.eng-cor.eng.cor | 0.5 | 0.079 | | Tatoeba-test.eng-cos.eng.cos | 3.1 | 0.148 | | Tatoeba-test.eng-csb.eng.csb | 1.4 | 0.216 | | Tatoeba-test.eng-cym.eng.cym | 22.4 | 0.470 | | Tatoeba-test.eng-dan.eng.dan | 49.7 | 0.671 | | Tatoeba-test.eng-deu.eng.deu | 31.7 | 0.554 | | Tatoeba-test.eng-dsb.eng.dsb | 1.1 | 0.139 | | Tatoeba-test.eng-egl.eng.egl | 0.9 | 0.089 | | Tatoeba-test.eng-ell.eng.ell | 42.7 | 0.640 | | Tatoeba-test.eng-enm.eng.enm | 3.5 | 0.259 | | 
Tatoeba-test.eng-ext.eng.ext | 6.4 | 0.235 | | Tatoeba-test.eng-fao.eng.fao | 6.6 | 0.285 | | Tatoeba-test.eng-fas.eng.fas | 5.7 | 0.257 | | Tatoeba-test.eng-fra.eng.fra | 38.4 | 0.595 | | Tatoeba-test.eng-frm.eng.frm | 0.9 | 0.149 | | Tatoeba-test.eng-frr.eng.frr | 8.4 | 0.145 | | Tatoeba-test.eng-fry.eng.fry | 16.5 | 0.411 | | Tatoeba-test.eng-gcf.eng.gcf | 0.6 | 0.098 | | Tatoeba-test.eng-gla.eng.gla | 11.6 | 0.361 | | Tatoeba-test.eng-gle.eng.gle | 32.5 | 0.546 | | Tatoeba-test.eng-glg.eng.glg | 38.4 | 0.602 | | Tatoeba-test.eng-glv.eng.glv | 23.1 | 0.418 | | Tatoeba-test.eng-gos.eng.gos | 0.7 | 0.137 | | Tatoeba-test.eng-got.eng.got | 0.2 | 0.010 | | Tatoeba-test.eng-grc.eng.grc | 0.0 | 0.005 | | Tatoeba-test.eng-gsw.eng.gsw | 0.9 | 0.108 | | Tatoeba-test.eng-guj.eng.guj | 20.8 | 0.391 | | Tatoeba-test.eng-hat.eng.hat | 34.0 | 0.537 | | Tatoeba-test.eng-hbs.eng.hbs | 33.7 | 0.567 | | Tatoeba-test.eng-hif.eng.hif | 2.8 | 0.269 | | Tatoeba-test.eng-hin.eng.hin | 15.6 | 0.437 | | Tatoeba-test.eng-hsb.eng.hsb | 5.4 | 0.320 | | Tatoeba-test.eng-hye.eng.hye | 17.4 | 0.426 | | Tatoeba-test.eng-isl.eng.isl | 17.4 | 0.436 | | Tatoeba-test.eng-ita.eng.ita | 40.4 | 0.636 | | Tatoeba-test.eng-jdt.eng.jdt | 6.4 | 0.008 | | Tatoeba-test.eng-kok.eng.kok | 6.6 | 0.005 | | Tatoeba-test.eng-ksh.eng.ksh | 0.8 | 0.123 | | Tatoeba-test.eng-kur.eng.kur | 10.2 | 0.209 | | Tatoeba-test.eng-lad.eng.lad | 0.8 | 0.163 | | Tatoeba-test.eng-lah.eng.lah | 0.2 | 0.001 | | Tatoeba-test.eng-lat.eng.lat | 9.4 | 0.372 | | Tatoeba-test.eng-lav.eng.lav | 30.3 | 0.559 | | Tatoeba-test.eng-lij.eng.lij | 1.0 | 0.130 | | Tatoeba-test.eng-lit.eng.lit | 25.3 | 0.560 | | Tatoeba-test.eng-lld.eng.lld | 0.4 | 0.139 | | Tatoeba-test.eng-lmo.eng.lmo | 0.6 | 0.108 | | Tatoeba-test.eng-ltz.eng.ltz | 18.1 | 0.388 | | Tatoeba-test.eng-mai.eng.mai | 17.2 | 0.464 | | Tatoeba-test.eng-mar.eng.mar | 18.0 | 0.451 | | Tatoeba-test.eng-mfe.eng.mfe | 81.0 | 0.899 | | Tatoeba-test.eng-mkd.eng.mkd | 37.6 | 0.587 | | 
Tatoeba-test.eng-msa.eng.msa | 27.7 | 0.519 | | Tatoeba-test.eng.multi | 32.6 | 0.539 | | Tatoeba-test.eng-mwl.eng.mwl | 3.8 | 0.134 | | Tatoeba-test.eng-nds.eng.nds | 14.3 | 0.401 | | Tatoeba-test.eng-nep.eng.nep | 0.5 | 0.002 | | Tatoeba-test.eng-nld.eng.nld | 44.0 | 0.642 | | Tatoeba-test.eng-non.eng.non | 0.7 | 0.118 | | Tatoeba-test.eng-nor.eng.nor | 42.7 | 0.623 | | Tatoeba-test.eng-oci.eng.oci | 7.2 | 0.295 | | Tatoeba-test.eng-ori.eng.ori | 2.7 | 0.257 | | Tatoeba-test.eng-orv.eng.orv | 0.2 | 0.008 | | Tatoeba-test.eng-oss.eng.oss | 2.9 | 0.264 | | Tatoeba-test.eng-pan.eng.pan | 7.4 | 0.337 | | Tatoeba-test.eng-pap.eng.pap | 48.5 | 0.656 | | Tatoeba-test.eng-pdc.eng.pdc | 1.8 | 0.145 | | Tatoeba-test.eng-pms.eng.pms | 0.7 | 0.136 | | Tatoeba-test.eng-pol.eng.pol | 31.1 | 0.563 | | Tatoeba-test.eng-por.eng.por | 37.0 | 0.605 | | Tatoeba-test.eng-prg.eng.prg | 0.2 | 0.100 | | Tatoeba-test.eng-pus.eng.pus | 1.0 | 0.134 | | Tatoeba-test.eng-roh.eng.roh | 2.3 | 0.236 | | Tatoeba-test.eng-rom.eng.rom | 7.8 | 0.340 | | Tatoeba-test.eng-ron.eng.ron | 34.3 | 0.585 | | Tatoeba-test.eng-rue.eng.rue | 0.2 | 0.010 | | Tatoeba-test.eng-rus.eng.rus | 29.6 | 0.526 | | Tatoeba-test.eng-san.eng.san | 2.4 | 0.125 | | Tatoeba-test.eng-scn.eng.scn | 1.6 | 0.079 | | Tatoeba-test.eng-sco.eng.sco | 33.6 | 0.562 | | Tatoeba-test.eng-sgs.eng.sgs | 3.4 | 0.114 | | Tatoeba-test.eng-sin.eng.sin | 9.2 | 0.349 | | Tatoeba-test.eng-slv.eng.slv | 15.6 | 0.334 | | Tatoeba-test.eng-snd.eng.snd | 9.1 | 0.324 | | Tatoeba-test.eng-spa.eng.spa | 43.4 | 0.645 | | Tatoeba-test.eng-sqi.eng.sqi | 39.0 | 0.621 | | Tatoeba-test.eng-stq.eng.stq | 10.8 | 0.373 | | Tatoeba-test.eng-swe.eng.swe | 49.9 | 0.663 | | Tatoeba-test.eng-swg.eng.swg | 0.7 | 0.137 | | Tatoeba-test.eng-tgk.eng.tgk | 6.4 | 0.346 | | Tatoeba-test.eng-tly.eng.tly | 0.5 | 0.055 | | Tatoeba-test.eng-ukr.eng.ukr | 31.4 | 0.536 | | Tatoeba-test.eng-urd.eng.urd | 11.1 | 0.389 | | Tatoeba-test.eng-vec.eng.vec | 1.3 | 0.110 | | 
Tatoeba-test.eng-wln.eng.wln | 6.8 | 0.233 | | Tatoeba-test.eng-yid.eng.yid | 5.8 | 0.295 | | Tatoeba-test.eng-zza.eng.zza | 0.8 | 0.086 | ### System Info: - hf_name: eng-ine - source_languages: eng - target_languages: ine - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ine/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine'] - src_constituents: {'eng'} - tgt_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos_Latn', 'lad_Latn', 'lat_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm_Latn', 'srd', 'gcf_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur_Latn', 'arg', 'pes_Thaa', 'sqi', 'csb_Latn', 'fra', 'hat', 'non_Latn', 'sco', 'pnb', 'roh', 'bul_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw_Latn', 'hsb', 'tly_Latn', 'bul', 'bel', 'got_Goth', 'lat_Grek', 'ext', 'gla', 'mai', 'sin', 'hif_Latn', 'eng', 'bre', 'nob_Hebr', 'prg_Latn', 'ang_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr_Arab', 'san_Deva', 'gos', 'rus', 'fao', 'orv_Cyrl', 'bel_Latn', 'cos', 'zza', 'grc_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk_Cyrl', 'hye_Latn', 'pdc', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp_Latn', 'zlm_Latn', 'ind', 'rom', 'hye', 'scn', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus_Latn', 'jdt_Cyrl', 'gsw', 'glv', 'nld', 'snd_Arab', 'kur_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm_Latn', 'ksh', 'pan_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld_Latn', 'ces', 'egl', 'vec', 'max_Latn', 'pes_Latn', 'ltg', 'nds'} - src_multilingual: False - tgt_multilingual: True - 
prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: ine - short_pair: en-ine - chrF2_score: 0.539 - bleu: 32.6 - brevity_penalty: 0.973 - ref_len: 68664.0 - src_name: English - tgt_name: Indo-European languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: ine - prefer_old: False - long_pair: eng-ine - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-en-is
2021-01-18T08:09:59.000Z
[ "pytorch", "marian", "seq2seq", "en", "is", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
167
transformers
--- tags: - translation --- ### opus-mt-en-is * source languages: en * target languages: is * OPUS readme: [en-is](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-is/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-is/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-is/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-is/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.is | 25.3 | 0.518 |
Helsinki-NLP/opus-mt-en-iso
2021-01-18T08:10:05.000Z
[ "pytorch", "marian", "seq2seq", "en", "iso", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
58
transformers
--- tags: - translation --- ### opus-mt-en-iso * source languages: en * target languages: iso * OPUS readme: [en-iso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-iso/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-iso/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-iso/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-iso/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.iso | 35.7 | 0.523 |
Helsinki-NLP/opus-mt-en-it
2021-01-18T08:10:10.000Z
[ "pytorch", "marian", "seq2seq", "en", "it", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
3996
transformers
--- tags: - translation --- ### opus-mt-en-it * source languages: en * target languages: it * OPUS readme: [en-it](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-it/README.md) * dataset: opus * model: transformer * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-04.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.zip) * test set translations: [opus-2019-12-04.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.test.txt) * test set scores: [opus-2019-12-04.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009.en.it | 30.9 | 0.606 | | newstest2009.en.it | 31.9 | 0.604 | | Tatoeba.en.it | 48.2 | 0.695 |
Helsinki-NLP/opus-mt-en-itc
2021-01-18T08:10:20.000Z
[ "pytorch", "marian", "seq2seq", "en", "it", "ca", "rm", "es", "ro", "gl", "sc", "co", "wa", "pt", "oc", "an", "id", "fr", "ht", "itc", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
22
transformers
--- language: - en - it - ca - rm - es - ro - gl - sc - co - wa - pt - oc - an - id - fr - ht - itc tags: - translation license: apache-2.0 --- ### eng-itc * source group: English * target group: Italic languages * OPUS readme: [eng-itc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-itc/README.md) * model: transformer * source language(s): eng * target language(s): arg ast cat cos egl ext fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Latn lij lld_Latn lmo max_Latn mfe min mwl oci pap pms por roh ron scn spa tmw_Latn vec wln zlm_Latn zsm_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2016-enro-engron.eng.ron | 27.1 | 0.565 | | newsdiscussdev2015-enfr-engfra.eng.fra | 29.9 | 0.574 | | newsdiscusstest2015-enfr-engfra.eng.fra | 35.3 | 0.609 | | newssyscomb2009-engfra.eng.fra | 27.7 | 0.567 | | newssyscomb2009-engita.eng.ita | 28.6 | 0.586 | | newssyscomb2009-engspa.eng.spa | 29.8 | 0.569 | | news-test2008-engfra.eng.fra | 25.0 | 0.536 | | news-test2008-engspa.eng.spa | 27.1 | 0.548 | | newstest2009-engfra.eng.fra | 26.7 | 0.557 | | newstest2009-engita.eng.ita | 28.9 | 0.583 | | newstest2009-engspa.eng.spa | 28.9 | 0.567 | | newstest2010-engfra.eng.fra | 29.6 | 0.574 | | newstest2010-engspa.eng.spa | 33.8 | 0.598 | | newstest2011-engfra.eng.fra | 30.9 | 0.590 | | newstest2011-engspa.eng.spa | 34.8 | 0.598 | | 
newstest2012-engfra.eng.fra | 29.1 | 0.574 | | newstest2012-engspa.eng.spa | 34.9 | 0.600 | | newstest2013-engfra.eng.fra | 30.1 | 0.567 | | newstest2013-engspa.eng.spa | 31.8 | 0.576 | | newstest2016-enro-engron.eng.ron | 25.9 | 0.548 | | Tatoeba-test.eng-arg.eng.arg | 1.6 | 0.120 | | Tatoeba-test.eng-ast.eng.ast | 17.2 | 0.389 | | Tatoeba-test.eng-cat.eng.cat | 47.6 | 0.668 | | Tatoeba-test.eng-cos.eng.cos | 4.3 | 0.287 | | Tatoeba-test.eng-egl.eng.egl | 0.9 | 0.101 | | Tatoeba-test.eng-ext.eng.ext | 8.7 | 0.287 | | Tatoeba-test.eng-fra.eng.fra | 44.9 | 0.635 | | Tatoeba-test.eng-frm.eng.frm | 1.0 | 0.225 | | Tatoeba-test.eng-gcf.eng.gcf | 0.7 | 0.115 | | Tatoeba-test.eng-glg.eng.glg | 44.9 | 0.648 | | Tatoeba-test.eng-hat.eng.hat | 30.9 | 0.533 | | Tatoeba-test.eng-ita.eng.ita | 45.4 | 0.673 | | Tatoeba-test.eng-lad.eng.lad | 5.6 | 0.279 | | Tatoeba-test.eng-lat.eng.lat | 12.1 | 0.380 | | Tatoeba-test.eng-lij.eng.lij | 1.4 | 0.183 | | Tatoeba-test.eng-lld.eng.lld | 0.5 | 0.199 | | Tatoeba-test.eng-lmo.eng.lmo | 0.7 | 0.187 | | Tatoeba-test.eng-mfe.eng.mfe | 83.6 | 0.909 | | Tatoeba-test.eng-msa.eng.msa | 31.3 | 0.549 | | Tatoeba-test.eng.multi | 38.0 | 0.588 | | Tatoeba-test.eng-mwl.eng.mwl | 2.7 | 0.322 | | Tatoeba-test.eng-oci.eng.oci | 8.2 | 0.293 | | Tatoeba-test.eng-pap.eng.pap | 46.7 | 0.663 | | Tatoeba-test.eng-pms.eng.pms | 2.1 | 0.194 | | Tatoeba-test.eng-por.eng.por | 41.2 | 0.635 | | Tatoeba-test.eng-roh.eng.roh | 2.6 | 0.237 | | Tatoeba-test.eng-ron.eng.ron | 40.6 | 0.632 | | Tatoeba-test.eng-scn.eng.scn | 1.6 | 0.181 | | Tatoeba-test.eng-spa.eng.spa | 49.5 | 0.685 | | Tatoeba-test.eng-vec.eng.vec | 1.6 | 0.223 | | Tatoeba-test.eng-wln.eng.wln | 7.1 | 0.250 | ### System Info: - hf_name: eng-itc - source_languages: eng - target_languages: itc - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-itc/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'it', 'ca', 'rm', 'es', 
'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc'] - src_constituents: {'eng'} - tgt_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: itc - short_pair: en-itc - chrF2_score: 0.588 - bleu: 38.0 - brevity_penalty: 0.967 - ref_len: 73951.0 - src_name: English - tgt_name: Italic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: itc - prefer_old: False - long_pair: eng-itc - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
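Records like the eng-itc one above report a BLEU score together with its brevity penalty and reference length. As a reminder of how that penalty is defined, here is a minimal sketch; the function name is ours, and the hypothesis length used in the example is a hypothetical value (only the reference length appears in the record):

```python
import math

def brevity_penalty(hyp_len, ref_len):
    """BLEU brevity penalty: 1.0 when the hypothesis corpus is at least
    as long as the reference, exp(1 - ref_len / hyp_len) when shorter."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# A system output slightly shorter than the 73951-token reference
# is penalised slightly below 1.0:
bp = brevity_penalty(71500, 73951.0)
```

A penalty below 1.0, as in the eng-itc record, indicates the system's translations were on average shorter than the references.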
Helsinki-NLP/opus-mt-en-jap
2021-01-18T08:10:27.000Z
[ "pytorch", "marian", "seq2seq", "en", "jap", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
615
transformers
--- tags: - translation --- ### opus-mt-en-jap * source languages: en * target languages: jap * OPUS readme: [en-jap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-jap/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | bible-uedin.en.jap | 42.1 | 0.960 |
Helsinki-NLP/opus-mt-en-kg
2021-01-18T08:10:34.000Z
[ "pytorch", "marian", "seq2seq", "en", "kg", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
60
transformers
--- tags: - translation --- ### opus-mt-en-kg * source languages: en * target languages: kg * OPUS readme: [en-kg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kg/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kg/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kg/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kg/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.kg | 39.6 | 0.613 |
Helsinki-NLP/opus-mt-en-kj
2021-01-18T08:10:41.000Z
[ "pytorch", "marian", "seq2seq", "en", "kj", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
64
transformers
--- tags: - translation --- ### opus-mt-en-kj * source languages: en * target languages: kj * OPUS readme: [en-kj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kj/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.kj | 29.6 | 0.539 |
Helsinki-NLP/opus-mt-en-kqn
2021-01-18T08:10:47.000Z
[ "pytorch", "marian", "seq2seq", "en", "kqn", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
59
transformers
--- tags: - translation --- ### opus-mt-en-kqn * source languages: en * target languages: kqn * OPUS readme: [en-kqn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kqn/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.kqn | 33.1 | 0.567 |
Helsinki-NLP/opus-mt-en-kwn
2021-01-18T08:10:54.000Z
[ "pytorch", "marian", "seq2seq", "en", "kwn", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
71
transformers
--- tags: - translation --- ### opus-mt-en-kwn * source languages: en * target languages: kwn * OPUS readme: [en-kwn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kwn/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.kwn | 27.6 | 0.513 |
Helsinki-NLP/opus-mt-en-kwy
2021-01-18T08:11:01.000Z
[ "pytorch", "marian", "seq2seq", "en", "kwy", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
59
transformers
--- tags: - translation --- ### opus-mt-en-kwy * source languages: en * target languages: kwy * OPUS readme: [en-kwy](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kwy/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kwy/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwy/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwy/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.kwy | 33.6 | 0.543 |
Helsinki-NLP/opus-mt-en-lg
2021-01-18T08:11:08.000Z
[ "pytorch", "marian", "seq2seq", "en", "lg", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
103
transformers
--- tags: - translation --- ### opus-mt-en-lg * source languages: en * target languages: lg * OPUS readme: [en-lg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lg/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.lg | 30.4 | 0.543 | | Tatoeba.en.lg | 5.7 | 0.386 |
Helsinki-NLP/opus-mt-en-ln
2021-01-18T08:11:14.000Z
[ "pytorch", "marian", "seq2seq", "en", "ln", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
62
transformers
--- tags: - translation --- ### opus-mt-en-ln * source languages: en * target languages: ln * OPUS readme: [en-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ln/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ln/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ln/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ln/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ln | 36.7 | 0.588 |
Helsinki-NLP/opus-mt-en-loz
2021-01-18T08:11:21.000Z
[ "pytorch", "marian", "seq2seq", "en", "loz", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
41
transformers
--- tags: - translation --- ### opus-mt-en-loz * source languages: en * target languages: loz * OPUS readme: [en-loz](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-loz/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-loz/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-loz/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-loz/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.loz | 40.1 | 0.596 |
Helsinki-NLP/opus-mt-en-lu
2021-01-18T08:11:27.000Z
[ "pytorch", "marian", "seq2seq", "en", "lu", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
56
transformers
--- tags: - translation --- ### opus-mt-en-lu * source languages: en * target languages: lu * OPUS readme: [en-lu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lu/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.lu | 34.1 | 0.564 |
Helsinki-NLP/opus-mt-en-lua
2021-01-18T08:11:34.000Z
[ "pytorch", "marian", "seq2seq", "en", "lua", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
54
transformers
--- tags: - translation --- ### opus-mt-en-lua * source languages: en * target languages: lua * OPUS readme: [en-lua](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lua/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lua/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lua/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lua/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.lua | 35.3 | 0.578 |
Helsinki-NLP/opus-mt-en-lue
2021-01-18T08:11:41.000Z
[ "pytorch", "marian", "seq2seq", "en", "lue", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
57
transformers
--- tags: - translation --- ### opus-mt-en-lue * source languages: en * target languages: lue * OPUS readme: [en-lue](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lue/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lue/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lue/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lue/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.lue | 30.1 | 0.558 |
Helsinki-NLP/opus-mt-en-lun
2021-01-18T08:11:47.000Z
[ "pytorch", "marian", "seq2seq", "en", "lun", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
50
transformers
--- tags: - translation --- ### opus-mt-en-lun * source languages: en * target languages: lun * OPUS readme: [en-lun](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lun/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lun/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lun/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lun/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.lun | 28.9 | 0.552 |
Helsinki-NLP/opus-mt-en-luo
2021-01-18T08:11:53.000Z
[ "pytorch", "marian", "seq2seq", "en", "luo", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
55
transformers
--- tags: - translation --- ### opus-mt-en-luo * source languages: en * target languages: luo * OPUS readme: [en-luo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-luo/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-luo/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-luo/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-luo/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.luo | 27.6 | 0.495 |
Helsinki-NLP/opus-mt-en-lus
2021-01-18T08:12:02.000Z
[ "pytorch", "marian", "seq2seq", "en", "lus", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
152
transformers
--- tags: - translation --- ### opus-mt-en-lus * source languages: en * target languages: lus * OPUS readme: [en-lus](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lus/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lus/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lus/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lus/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.lus | 36.8 | 0.581 |
Helsinki-NLP/opus-mt-en-map
2021-01-18T08:12:08.000Z
[ "pytorch", "marian", "seq2seq", "en", "map", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
42
transformers
--- language: - en - map tags: - translation license: apache-2.0 --- ### eng-map * source group: English * target group: Austronesian languages * OPUS readme: [eng-map](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-map/README.md) * model: transformer * source language(s): eng * target language(s): akl_Latn ceb cha dtp fij gil haw hil iba ilo ind jav jav_Java lkt mad mah max_Latn min mlg mri nau niu pag pau rap smo sun tah tet tmw_Latn ton tvl war zlm_Latn zsm_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.zip) * test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.test.txt) * test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-akl.eng.akl | 2.2 | 0.103 | | Tatoeba-test.eng-ceb.eng.ceb | 10.7 | 0.425 | | Tatoeba-test.eng-cha.eng.cha | 3.2 | 0.201 | | Tatoeba-test.eng-dtp.eng.dtp | 0.5 | 0.120 | | Tatoeba-test.eng-fij.eng.fij | 26.8 | 0.453 | | Tatoeba-test.eng-gil.eng.gil | 59.3 | 0.762 | | Tatoeba-test.eng-haw.eng.haw | 1.0 | 0.116 | | Tatoeba-test.eng-hil.eng.hil | 19.0 | 0.517 | | Tatoeba-test.eng-iba.eng.iba | 15.5 | 0.400 | | Tatoeba-test.eng-ilo.eng.ilo | 33.6 | 0.591 | | Tatoeba-test.eng-jav.eng.jav | 7.8 | 0.301 | | Tatoeba-test.eng-lkt.eng.lkt | 1.0 | 0.064 | | Tatoeba-test.eng-mad.eng.mad | 1.1 | 0.142 | | Tatoeba-test.eng-mah.eng.mah | 9.1 | 0.374 | | Tatoeba-test.eng-mlg.eng.mlg | 35.4 | 0.526 | | Tatoeba-test.eng-mri.eng.mri | 7.6 | 0.309 | | Tatoeba-test.eng-msa.eng.msa | 31.1 | 0.565 | | Tatoeba-test.eng.multi | 17.6 | 0.411 
| | Tatoeba-test.eng-nau.eng.nau | 1.4 | 0.098 | | Tatoeba-test.eng-niu.eng.niu | 40.1 | 0.560 | | Tatoeba-test.eng-pag.eng.pag | 16.8 | 0.526 | | Tatoeba-test.eng-pau.eng.pau | 1.9 | 0.139 | | Tatoeba-test.eng-rap.eng.rap | 2.7 | 0.090 | | Tatoeba-test.eng-smo.eng.smo | 24.9 | 0.453 | | Tatoeba-test.eng-sun.eng.sun | 33.2 | 0.439 | | Tatoeba-test.eng-tah.eng.tah | 12.5 | 0.278 | | Tatoeba-test.eng-tet.eng.tet | 1.6 | 0.140 | | Tatoeba-test.eng-ton.eng.ton | 25.8 | 0.530 | | Tatoeba-test.eng-tvl.eng.tvl | 31.1 | 0.523 | | Tatoeba-test.eng-war.eng.war | 12.8 | 0.436 | ### System Info: - hf_name: eng-map - source_languages: eng - target_languages: map - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-map/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'map'] - src_constituents: {'eng'} - tgt_constituents: set() - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.test.txt - src_alpha3: eng - tgt_alpha3: map - short_pair: en-map - chrF2_score: 0.411 - bleu: 17.6 - brevity_penalty: 1.0 - ref_len: 66963.0 - src_name: English - tgt_name: Austronesian languages - train_date: 2020-07-27 - src_alpha2: en - tgt_alpha2: map - prefer_old: False - long_pair: eng-map - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
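The multilingual cards above (eng-map, eng-itc) note that "a sentence initial language token is required in the form of `>>id<<`". A minimal sketch of that preprocessing step; the helper name and example sentences are ours, and actual tokenization and generation with the model are not shown:

```python
def add_target_token(sentences, lang_id):
    """Prepend the sentence-initial target-language token that
    multilingual OPUS-MT models expect, e.g. '>>tah<< Hello.'
    lang_id must be one of the model's valid target language IDs."""
    return [f">>{lang_id}<< {s}" for s in sentences]

# Select Tahitian as the target language for an en->map model:
batch = add_target_token(["Where is the beach?", "Thank you."], "tah")
# batch[0] == ">>tah<< Where is the beach?"
```

The prefixed strings are then passed to the model's tokenizer as usual; without the token, a multilingual model has no way to know which of its target languages to produce.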
Helsinki-NLP/opus-mt-en-mfe
2021-01-18T08:12:14.000Z
[ "pytorch", "marian", "seq2seq", "en", "mfe", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
55
transformers
--- tags: - translation --- ### opus-mt-en-mfe * source languages: en * target languages: mfe * OPUS readme: [en-mfe](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mfe/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mfe/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mfe/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mfe/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.mfe | 32.1 | 0.509 |
Helsinki-NLP/opus-mt-en-mg
2021-01-18T08:12:19.000Z
[ "pytorch", "marian", "seq2seq", "en", "mg", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
69
transformers
--- tags: - translation --- ### opus-mt-en-mg * source languages: en * target languages: mg * OPUS readme: [en-mg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mg/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mg/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mg/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mg/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | GlobalVoices.en.mg | 22.3 | 0.565 | | Tatoeba.en.mg | 35.5 | 0.548 |
Helsinki-NLP/opus-mt-en-mh
2021-01-18T08:12:26.000Z
[ "pytorch", "marian", "seq2seq", "en", "mh", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
65
transformers
--- tags: - translation --- ### opus-mt-en-mh * source languages: en * target languages: mh * OPUS readme: [en-mh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mh/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mh/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mh/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mh/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.mh | 29.7 | 0.479 |
Helsinki-NLP/opus-mt-en-mk
2021-01-18T08:12:32.000Z
[ "pytorch", "marian", "seq2seq", "en", "mk", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
348
transformers
--- tags: - translation --- ### opus-mt-en-mk * source languages: en * target languages: mk * OPUS readme: [en-mk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mk/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mk/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mk/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mk/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.mk | 52.1 | 0.683 |
Helsinki-NLP/opus-mt-en-mkh
2021-01-18T08:12:39.000Z
[ "pytorch", "marian", "seq2seq", "en", "vi", "km", "mkh", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
43
transformers
--- language: - en - vi - km - mkh tags: - translation license: apache-2.0 --- ### eng-mkh * source group: English * target group: Mon-Khmer languages * OPUS readme: [eng-mkh](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mkh/README.md) * model: transformer * source language(s): eng * target language(s): kha khm khm_Latn mnw vie vie_Hani * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.zip) * test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.test.txt) * test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-kha.eng.kha | 0.1 | 0.015 | | Tatoeba-test.eng-khm.eng.khm | 0.2 | 0.226 | | Tatoeba-test.eng-mnw.eng.mnw | 0.7 | 0.003 | | Tatoeba-test.eng.multi | 16.5 | 0.330 | | Tatoeba-test.eng-vie.eng.vie | 33.7 | 0.513 | ### System Info: - hf_name: eng-mkh - source_languages: eng - target_languages: mkh - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mkh/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'vi', 'km', 'mkh'] - src_constituents: {'eng'} - tgt_constituents: {'vie_Hani', 'mnw', 'vie', 'kha', 'khm_Latn', 'khm'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.test.txt - src_alpha3: eng - tgt_alpha3: mkh - short_pair: en-mkh - 
chrF2_score: 0.33 - bleu: 16.5 - brevity_penalty: 1.0 - ref_len: 34734.0 - src_name: English - tgt_name: Mon-Khmer languages - train_date: 2020-07-27 - src_alpha2: en - tgt_alpha2: mkh - prefer_old: False - long_pair: eng-mkh - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
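Multilingual-target models such as `eng-mkh` require the sentence-initial `>>id<<` language token described in the card above. A hedged sketch of prepending that token before tokenization (the tag format comes from the card; the helper name `with_target_token` is ours, and model loading is deferred to script execution):

```python
def with_target_token(text: str, lang_id: str) -> str:
    """Prepend the sentence-initial target-language token, e.g. >>vie<<."""
    return f">>{lang_id}<< {text}"

if __name__ == "__main__":
    from transformers import MarianMTModel, MarianTokenizer

    name = "Helsinki-NLP/opus-mt-en-mkh"
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)

    # One English source, two target languages from the card's constituent list.
    sources = [with_target_token("I love you.", "vie"),
               with_target_token("I love you.", "khm")]
    batch = tokenizer(sources, return_tensors="pt", padding=True)
    print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```

Without the token the model has no way to know which of the listed target languages (`kha`, `khm`, `mnw`, `vie`, …) to produce.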
Helsinki-NLP/opus-mt-en-ml
2021-01-18T08:12:46.000Z
[ "pytorch", "marian", "seq2seq", "en", "ml", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
132
transformers
--- tags: - translation --- ### opus-mt-en-ml * source languages: en * target languages: ml * OPUS readme: [en-ml](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ml/README.md) * dataset: opus+bt+bt * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus+bt+bt-2020-04-28.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.zip) * test set translations: [opus+bt+bt-2020-04-28.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.test.txt) * test set scores: [opus+bt+bt-2020-04-28.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.ml | 19.1 | 0.536 |
Helsinki-NLP/opus-mt-en-mos
2021-01-18T08:12:52.000Z
[ "pytorch", "marian", "seq2seq", "en", "mos", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
60
transformers
--- tags: - translation --- ### opus-mt-en-mos * source languages: en * target languages: mos * OPUS readme: [en-mos](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mos/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mos/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mos/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mos/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.mos | 26.9 | 0.417 |
Helsinki-NLP/opus-mt-en-mr
2021-01-18T08:12:58.000Z
[ "pytorch", "marian", "seq2seq", "en", "mr", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
119
transformers
--- tags: - translation --- ### opus-mt-en-mr * source languages: en * target languages: mr * OPUS readme: [en-mr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mr/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mr/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mr/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.mr | 22.0 | 0.397 |
Helsinki-NLP/opus-mt-en-mt
2021-01-18T08:13:04.000Z
[ "pytorch", "marian", "seq2seq", "en", "mt", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
68
transformers
--- tags: - translation --- ### opus-mt-en-mt * source languages: en * target languages: mt * OPUS readme: [en-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mt/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mt/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mt/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mt/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.mt | 47.5 | 0.640 | | Tatoeba.en.mt | 25.0 | 0.620 |
Helsinki-NLP/opus-mt-en-mul
2021-01-18T08:13:09.000Z
[ "pytorch", "marian", "seq2seq", "en", "ca", "es", "os", "eo", "ro", "fy", "cy", "is", "lb", "su", "an", "sq", "fr", "ht", "rm", "cv", "ig", "am", "eu", "tr", "ps", "af", "ny", "ch", "uk", "sl", "lt", "tk", "sg", "ar", "lg", "bg", "be", "ka", "gd", "ja", "si", "br", "mh", "km", "th", "ty", "rw", "te", "mk", "or", "wo", "kl", "mr", "ru", "yo", "hu", "fo", "zh", "ti", "co", "ee", "oc", "sn", "mt", "ts", "pl", "gl", "nb", "bn", "tt", "bo", "lo", "id", "gn", "nv", "hy", "kn", "to", "io", "so", "vi", "da", "fj", "gv", "sm", "nl", "mi", "pt", "hi", "se", "as", "ta", "et", "kw", "ga", "sv", "ln", "na", "mn", "gu", "wa", "lv", "jv", "el", "my", "ba", "it", "hr", "ur", "ce", "nn", "fi", "mg", "rn", "xh", "ab", "de", "cs", "he", "zu", "yi", "ml", "mul", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
399
transformers
--- language: - en - ca - es - os - eo - ro - fy - cy - is - lb - su - an - sq - fr - ht - rm - cv - ig - am - eu - tr - ps - af - ny - ch - uk - sl - lt - tk - sg - ar - lg - bg - be - ka - gd - ja - si - br - mh - km - th - ty - rw - te - mk - or - wo - kl - mr - ru - yo - hu - fo - zh - ti - co - ee - oc - sn - mt - ts - pl - gl - nb - bn - tt - bo - lo - id - gn - nv - hy - kn - to - io - so - vi - da - fj - gv - sm - nl - mi - pt - hi - se - as - ta - et - kw - ga - sv - ln - na - mn - gu - wa - lv - jv - el - my - ba - it - hr - ur - ce - nn - fi - mg - rn - xh - ab - de - cs - he - zu - yi - ml - mul tags: - translation license: apache-2.0 --- ### eng-mul * source group: English * target group: Multiple languages * OPUS readme: [eng-mul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mul/README.md) * model: transformer * source language(s): eng * target language(s): abk acm ady afb afh_Latn afr akl_Latn aln amh ang_Latn apc ara arg arq ary arz asm ast avk_Latn awa aze_Latn bak bam_Latn bel bel_Latn ben bho bod bos_Latn bre brx brx_Latn bul bul_Latn cat ceb ces cha che chr chv cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant cor cos crh crh_Latn csb_Latn cym dan deu dsb dtp dws_Latn egl ell enm_Latn epo est eus ewe ext fao fij fin fkv_Latn fra frm_Latn frr fry fuc fuv gan gcf_Latn gil gla gle glg glv gom gos got_Goth grc_Grek grn gsw guj hat hau_Latn haw heb hif_Latn hil hin hnj_Latn hoc hoc_Latn hrv hsb hun hye iba ibo ido ido_Latn ike_Latn ile_Latn ilo ina_Latn ind isl ita izh jav jav_Java jbo jbo_Cyrl jbo_Latn jdt_Cyrl jpn kab kal kan kat kaz_Cyrl kaz_Latn kek_Latn kha khm khm_Latn kin kir_Cyrl kjh kpv krl ksh kum kur_Arab kur_Latn lad lad_Latn lao lat_Latn lav ldn_Latn lfn_Cyrl lfn_Latn lij lin lit liv_Latn lkt lld_Latn lmo ltg ltz lug lzh lzh_Hans mad mah mai mal mar max_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob_Hebr nog non_Latn nov_Latn npi nya oci ori orv_Cyrl oss ota_Arab 
ota_Latn pag pan_Guru pap pau pdc pes pes_Latn pes_Thaa pms pnb pol por ppl_Latn prg_Latn pus quc qya qya_Latn rap rif_Latn roh rom ron rue run rus sag sah san_Deva scn sco sgs shs_Latn shy_Latn sin sjn_Latn slv sma sme smo sna snd_Arab som spa sqi srp_Cyrl srp_Latn stq sun swe swg swh tah tam tat tat_Arab tat_Latn tel tet tgk_Cyrl tha tir tlh_Latn tly_Latn tmw_Latn toi_Latn ton tpw_Latn tso tuk tuk_Latn tur tvl tyv tzl tzl_Latn udm uig_Arab uig_Cyrl ukr umb urd uzb_Cyrl uzb_Latn vec vie vie_Hani vol_Latn vro war wln wol wuu xal xho yid yor yue yue_Hans yue_Hant zho zho_Hans zho_Hant zlm_Latn zsm_Latn zul zza * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mul/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mul/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mul/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014-enghin.eng.hin | 5.0 | 0.288 | | newsdev2015-enfi-engfin.eng.fin | 9.3 | 0.418 | | newsdev2016-enro-engron.eng.ron | 17.2 | 0.488 | | newsdev2016-entr-engtur.eng.tur | 8.2 | 0.402 | | newsdev2017-enlv-englav.eng.lav | 12.9 | 0.444 | | newsdev2017-enzh-engzho.eng.zho | 17.6 | 0.170 | | newsdev2018-enet-engest.eng.est | 10.9 | 0.423 | | newsdev2019-engu-engguj.eng.guj | 5.2 | 0.284 | | newsdev2019-enlt-englit.eng.lit | 11.0 | 0.431 | | newsdiscussdev2015-enfr-engfra.eng.fra | 22.6 | 0.521 | | newsdiscusstest2015-enfr-engfra.eng.fra | 25.9 | 0.546 | | newssyscomb2009-engces.eng.ces | 10.3 | 0.394 | | newssyscomb2009-engdeu.eng.deu | 13.3 | 0.459 | | newssyscomb2009-engfra.eng.fra | 21.5 | 0.522 
| | newssyscomb2009-enghun.eng.hun | 8.1 | 0.371 | | newssyscomb2009-engita.eng.ita | 22.1 | 0.540 | | newssyscomb2009-engspa.eng.spa | 23.8 | 0.531 | | news-test2008-engces.eng.ces | 9.0 | 0.376 | | news-test2008-engdeu.eng.deu | 14.2 | 0.451 | | news-test2008-engfra.eng.fra | 19.8 | 0.500 | | news-test2008-engspa.eng.spa | 22.8 | 0.518 | | newstest2009-engces.eng.ces | 9.8 | 0.392 | | newstest2009-engdeu.eng.deu | 13.7 | 0.454 | | newstest2009-engfra.eng.fra | 20.7 | 0.514 | | newstest2009-enghun.eng.hun | 8.4 | 0.370 | | newstest2009-engita.eng.ita | 22.4 | 0.538 | | newstest2009-engspa.eng.spa | 23.5 | 0.532 | | newstest2010-engces.eng.ces | 10.0 | 0.393 | | newstest2010-engdeu.eng.deu | 15.2 | 0.463 | | newstest2010-engfra.eng.fra | 22.0 | 0.524 | | newstest2010-engspa.eng.spa | 27.2 | 0.556 | | newstest2011-engces.eng.ces | 10.8 | 0.392 | | newstest2011-engdeu.eng.deu | 14.2 | 0.449 | | newstest2011-engfra.eng.fra | 24.3 | 0.544 | | newstest2011-engspa.eng.spa | 28.3 | 0.559 | | newstest2012-engces.eng.ces | 9.9 | 0.377 | | newstest2012-engdeu.eng.deu | 14.3 | 0.449 | | newstest2012-engfra.eng.fra | 23.2 | 0.530 | | newstest2012-engrus.eng.rus | 16.0 | 0.463 | | newstest2012-engspa.eng.spa | 27.8 | 0.555 | | newstest2013-engces.eng.ces | 11.0 | 0.392 | | newstest2013-engdeu.eng.deu | 16.4 | 0.469 | | newstest2013-engfra.eng.fra | 22.6 | 0.515 | | newstest2013-engrus.eng.rus | 12.1 | 0.414 | | newstest2013-engspa.eng.spa | 24.9 | 0.532 | | newstest2014-hien-enghin.eng.hin | 7.2 | 0.311 | | newstest2015-encs-engces.eng.ces | 10.9 | 0.396 | | newstest2015-ende-engdeu.eng.deu | 18.3 | 0.490 | | newstest2015-enfi-engfin.eng.fin | 10.1 | 0.421 | | newstest2015-enru-engrus.eng.rus | 14.5 | 0.445 | | newstest2016-encs-engces.eng.ces | 12.2 | 0.408 | | newstest2016-ende-engdeu.eng.deu | 21.4 | 0.517 | | newstest2016-enfi-engfin.eng.fin | 11.2 | 0.435 | | newstest2016-enro-engron.eng.ron | 16.6 | 0.472 | | newstest2016-enru-engrus.eng.rus | 13.4 | 0.435 | | 
newstest2016-entr-engtur.eng.tur | 8.1 | 0.385 | | newstest2017-encs-engces.eng.ces | 9.6 | 0.377 | | newstest2017-ende-engdeu.eng.deu | 17.9 | 0.482 | | newstest2017-enfi-engfin.eng.fin | 11.8 | 0.440 | | newstest2017-enlv-englav.eng.lav | 9.6 | 0.412 | | newstest2017-enru-engrus.eng.rus | 14.1 | 0.446 | | newstest2017-entr-engtur.eng.tur | 8.0 | 0.378 | | newstest2017-enzh-engzho.eng.zho | 16.8 | 0.175 | | newstest2018-encs-engces.eng.ces | 9.8 | 0.380 | | newstest2018-ende-engdeu.eng.deu | 23.8 | 0.536 | | newstest2018-enet-engest.eng.est | 11.8 | 0.433 | | newstest2018-enfi-engfin.eng.fin | 7.8 | 0.398 | | newstest2018-enru-engrus.eng.rus | 12.2 | 0.434 | | newstest2018-entr-engtur.eng.tur | 7.5 | 0.383 | | newstest2018-enzh-engzho.eng.zho | 18.3 | 0.179 | | newstest2019-encs-engces.eng.ces | 10.7 | 0.389 | | newstest2019-ende-engdeu.eng.deu | 21.0 | 0.512 | | newstest2019-enfi-engfin.eng.fin | 10.4 | 0.420 | | newstest2019-engu-engguj.eng.guj | 5.8 | 0.297 | | newstest2019-enlt-englit.eng.lit | 8.0 | 0.388 | | newstest2019-enru-engrus.eng.rus | 13.0 | 0.415 | | newstest2019-enzh-engzho.eng.zho | 15.0 | 0.192 | | newstestB2016-enfi-engfin.eng.fin | 9.0 | 0.414 | | newstestB2017-enfi-engfin.eng.fin | 9.5 | 0.415 | | Tatoeba-test.eng-abk.eng.abk | 4.2 | 0.275 | | Tatoeba-test.eng-ady.eng.ady | 0.4 | 0.006 | | Tatoeba-test.eng-afh.eng.afh | 1.0 | 0.058 | | Tatoeba-test.eng-afr.eng.afr | 47.0 | 0.663 | | Tatoeba-test.eng-akl.eng.akl | 2.7 | 0.080 | | Tatoeba-test.eng-amh.eng.amh | 8.5 | 0.455 | | Tatoeba-test.eng-ang.eng.ang | 6.2 | 0.138 | | Tatoeba-test.eng-ara.eng.ara | 6.3 | 0.325 | | Tatoeba-test.eng-arg.eng.arg | 1.5 | 0.107 | | Tatoeba-test.eng-asm.eng.asm | 2.1 | 0.265 | | Tatoeba-test.eng-ast.eng.ast | 15.7 | 0.393 | | Tatoeba-test.eng-avk.eng.avk | 0.2 | 0.095 | | Tatoeba-test.eng-awa.eng.awa | 0.1 | 0.002 | | Tatoeba-test.eng-aze.eng.aze | 19.0 | 0.500 | | Tatoeba-test.eng-bak.eng.bak | 12.7 | 0.379 | | Tatoeba-test.eng-bam.eng.bam | 8.3 | 0.037 | | 
Tatoeba-test.eng-bel.eng.bel | 13.5 | 0.396 | | Tatoeba-test.eng-ben.eng.ben | 10.0 | 0.383 | | Tatoeba-test.eng-bho.eng.bho | 0.1 | 0.003 | | Tatoeba-test.eng-bod.eng.bod | 0.0 | 0.147 | | Tatoeba-test.eng-bre.eng.bre | 7.6 | 0.275 | | Tatoeba-test.eng-brx.eng.brx | 0.8 | 0.060 | | Tatoeba-test.eng-bul.eng.bul | 32.1 | 0.542 | | Tatoeba-test.eng-cat.eng.cat | 37.0 | 0.595 | | Tatoeba-test.eng-ceb.eng.ceb | 9.6 | 0.409 | | Tatoeba-test.eng-ces.eng.ces | 24.0 | 0.475 | | Tatoeba-test.eng-cha.eng.cha | 3.9 | 0.228 | | Tatoeba-test.eng-che.eng.che | 0.7 | 0.013 | | Tatoeba-test.eng-chm.eng.chm | 2.6 | 0.212 | | Tatoeba-test.eng-chr.eng.chr | 6.0 | 0.190 | | Tatoeba-test.eng-chv.eng.chv | 6.5 | 0.369 | | Tatoeba-test.eng-cor.eng.cor | 0.9 | 0.086 | | Tatoeba-test.eng-cos.eng.cos | 4.2 | 0.174 | | Tatoeba-test.eng-crh.eng.crh | 9.9 | 0.361 | | Tatoeba-test.eng-csb.eng.csb | 3.4 | 0.230 | | Tatoeba-test.eng-cym.eng.cym | 18.0 | 0.418 | | Tatoeba-test.eng-dan.eng.dan | 42.5 | 0.624 | | Tatoeba-test.eng-deu.eng.deu | 25.2 | 0.505 | | Tatoeba-test.eng-dsb.eng.dsb | 0.9 | 0.121 | | Tatoeba-test.eng-dtp.eng.dtp | 0.3 | 0.084 | | Tatoeba-test.eng-dws.eng.dws | 0.2 | 0.040 | | Tatoeba-test.eng-egl.eng.egl | 0.4 | 0.085 | | Tatoeba-test.eng-ell.eng.ell | 28.7 | 0.543 | | Tatoeba-test.eng-enm.eng.enm | 3.3 | 0.295 | | Tatoeba-test.eng-epo.eng.epo | 33.4 | 0.570 | | Tatoeba-test.eng-est.eng.est | 30.3 | 0.545 | | Tatoeba-test.eng-eus.eng.eus | 18.5 | 0.486 | | Tatoeba-test.eng-ewe.eng.ewe | 6.8 | 0.272 | | Tatoeba-test.eng-ext.eng.ext | 5.0 | 0.228 | | Tatoeba-test.eng-fao.eng.fao | 5.2 | 0.277 | | Tatoeba-test.eng-fas.eng.fas | 6.9 | 0.265 | | Tatoeba-test.eng-fij.eng.fij | 31.5 | 0.365 | | Tatoeba-test.eng-fin.eng.fin | 18.5 | 0.459 | | Tatoeba-test.eng-fkv.eng.fkv | 0.9 | 0.132 | | Tatoeba-test.eng-fra.eng.fra | 31.5 | 0.546 | | Tatoeba-test.eng-frm.eng.frm | 0.9 | 0.128 | | Tatoeba-test.eng-frr.eng.frr | 3.0 | 0.025 | | Tatoeba-test.eng-fry.eng.fry | 14.4 | 0.387 | | 
Tatoeba-test.eng-ful.eng.ful | 0.4 | 0.061 | | Tatoeba-test.eng-gcf.eng.gcf | 0.3 | 0.075 | | Tatoeba-test.eng-gil.eng.gil | 47.4 | 0.706 | | Tatoeba-test.eng-gla.eng.gla | 10.9 | 0.341 | | Tatoeba-test.eng-gle.eng.gle | 26.8 | 0.493 | | Tatoeba-test.eng-glg.eng.glg | 32.5 | 0.565 | | Tatoeba-test.eng-glv.eng.glv | 21.5 | 0.395 | | Tatoeba-test.eng-gos.eng.gos | 0.3 | 0.124 | | Tatoeba-test.eng-got.eng.got | 0.2 | 0.010 | | Tatoeba-test.eng-grc.eng.grc | 0.0 | 0.005 | | Tatoeba-test.eng-grn.eng.grn | 1.5 | 0.129 | | Tatoeba-test.eng-gsw.eng.gsw | 0.6 | 0.106 | | Tatoeba-test.eng-guj.eng.guj | 15.4 | 0.347 | | Tatoeba-test.eng-hat.eng.hat | 31.1 | 0.527 | | Tatoeba-test.eng-hau.eng.hau | 6.5 | 0.385 | | Tatoeba-test.eng-haw.eng.haw | 0.2 | 0.066 | | Tatoeba-test.eng-hbs.eng.hbs | 28.7 | 0.531 | | Tatoeba-test.eng-heb.eng.heb | 21.3 | 0.443 | | Tatoeba-test.eng-hif.eng.hif | 2.8 | 0.268 | | Tatoeba-test.eng-hil.eng.hil | 12.0 | 0.463 | | Tatoeba-test.eng-hin.eng.hin | 13.0 | 0.401 | | Tatoeba-test.eng-hmn.eng.hmn | 0.2 | 0.073 | | Tatoeba-test.eng-hoc.eng.hoc | 0.2 | 0.077 | | Tatoeba-test.eng-hsb.eng.hsb | 5.7 | 0.308 | | Tatoeba-test.eng-hun.eng.hun | 17.1 | 0.431 | | Tatoeba-test.eng-hye.eng.hye | 15.0 | 0.378 | | Tatoeba-test.eng-iba.eng.iba | 16.0 | 0.437 | | Tatoeba-test.eng-ibo.eng.ibo | 2.9 | 0.221 | | Tatoeba-test.eng-ido.eng.ido | 11.5 | 0.403 | | Tatoeba-test.eng-iku.eng.iku | 2.3 | 0.089 | | Tatoeba-test.eng-ile.eng.ile | 4.3 | 0.282 | | Tatoeba-test.eng-ilo.eng.ilo | 26.4 | 0.522 | | Tatoeba-test.eng-ina.eng.ina | 20.9 | 0.493 | | Tatoeba-test.eng-isl.eng.isl | 12.5 | 0.375 | | Tatoeba-test.eng-ita.eng.ita | 33.9 | 0.592 | | Tatoeba-test.eng-izh.eng.izh | 4.6 | 0.050 | | Tatoeba-test.eng-jav.eng.jav | 7.8 | 0.328 | | Tatoeba-test.eng-jbo.eng.jbo | 0.1 | 0.123 | | Tatoeba-test.eng-jdt.eng.jdt | 6.4 | 0.008 | | Tatoeba-test.eng-jpn.eng.jpn | 0.0 | 0.000 | | Tatoeba-test.eng-kab.eng.kab | 5.9 | 0.261 | | Tatoeba-test.eng-kal.eng.kal | 13.4 | 0.382 | | 
Tatoeba-test.eng-kan.eng.kan | 4.8 | 0.358 | | Tatoeba-test.eng-kat.eng.kat | 1.8 | 0.115 | | Tatoeba-test.eng-kaz.eng.kaz | 8.8 | 0.354 | | Tatoeba-test.eng-kek.eng.kek | 3.7 | 0.188 | | Tatoeba-test.eng-kha.eng.kha | 0.5 | 0.094 | | Tatoeba-test.eng-khm.eng.khm | 0.4 | 0.243 | | Tatoeba-test.eng-kin.eng.kin | 5.2 | 0.362 | | Tatoeba-test.eng-kir.eng.kir | 17.2 | 0.416 | | Tatoeba-test.eng-kjh.eng.kjh | 0.6 | 0.009 | | Tatoeba-test.eng-kok.eng.kok | 5.5 | 0.005 | | Tatoeba-test.eng-kom.eng.kom | 2.4 | 0.012 | | Tatoeba-test.eng-krl.eng.krl | 2.0 | 0.099 | | Tatoeba-test.eng-ksh.eng.ksh | 0.4 | 0.074 | | Tatoeba-test.eng-kum.eng.kum | 0.9 | 0.007 | | Tatoeba-test.eng-kur.eng.kur | 9.1 | 0.174 | | Tatoeba-test.eng-lad.eng.lad | 1.2 | 0.154 | | Tatoeba-test.eng-lah.eng.lah | 0.1 | 0.001 | | Tatoeba-test.eng-lao.eng.lao | 0.6 | 0.426 | | Tatoeba-test.eng-lat.eng.lat | 8.2 | 0.366 | | Tatoeba-test.eng-lav.eng.lav | 20.4 | 0.475 | | Tatoeba-test.eng-ldn.eng.ldn | 0.3 | 0.059 | | Tatoeba-test.eng-lfn.eng.lfn | 0.5 | 0.104 | | Tatoeba-test.eng-lij.eng.lij | 0.2 | 0.094 | | Tatoeba-test.eng-lin.eng.lin | 1.2 | 0.276 | | Tatoeba-test.eng-lit.eng.lit | 17.4 | 0.488 | | Tatoeba-test.eng-liv.eng.liv | 0.3 | 0.039 | | Tatoeba-test.eng-lkt.eng.lkt | 0.3 | 0.041 | | Tatoeba-test.eng-lld.eng.lld | 0.1 | 0.083 | | Tatoeba-test.eng-lmo.eng.lmo | 1.4 | 0.154 | | Tatoeba-test.eng-ltz.eng.ltz | 19.1 | 0.395 | | Tatoeba-test.eng-lug.eng.lug | 4.2 | 0.382 | | Tatoeba-test.eng-mad.eng.mad | 2.1 | 0.075 | | Tatoeba-test.eng-mah.eng.mah | 9.5 | 0.331 | | Tatoeba-test.eng-mai.eng.mai | 9.3 | 0.372 | | Tatoeba-test.eng-mal.eng.mal | 8.3 | 0.437 | | Tatoeba-test.eng-mar.eng.mar | 13.5 | 0.410 | | Tatoeba-test.eng-mdf.eng.mdf | 2.3 | 0.008 | | Tatoeba-test.eng-mfe.eng.mfe | 83.6 | 0.905 | | Tatoeba-test.eng-mic.eng.mic | 7.6 | 0.214 | | Tatoeba-test.eng-mkd.eng.mkd | 31.8 | 0.540 | | Tatoeba-test.eng-mlg.eng.mlg | 31.3 | 0.464 | | Tatoeba-test.eng-mlt.eng.mlt | 11.7 | 0.427 | | 
Tatoeba-test.eng-mnw.eng.mnw | 0.1 | 0.000 | | Tatoeba-test.eng-moh.eng.moh | 0.6 | 0.067 | | Tatoeba-test.eng-mon.eng.mon | 8.5 | 0.323 | | Tatoeba-test.eng-mri.eng.mri | 8.5 | 0.320 | | Tatoeba-test.eng-msa.eng.msa | 24.5 | 0.498 | | Tatoeba-test.eng.multi | 22.4 | 0.451 | | Tatoeba-test.eng-mwl.eng.mwl | 3.8 | 0.169 | | Tatoeba-test.eng-mya.eng.mya | 0.2 | 0.123 | | Tatoeba-test.eng-myv.eng.myv | 1.1 | 0.014 | | Tatoeba-test.eng-nau.eng.nau | 0.6 | 0.109 | | Tatoeba-test.eng-nav.eng.nav | 1.8 | 0.149 | | Tatoeba-test.eng-nds.eng.nds | 11.3 | 0.365 | | Tatoeba-test.eng-nep.eng.nep | 0.5 | 0.004 | | Tatoeba-test.eng-niu.eng.niu | 34.4 | 0.501 | | Tatoeba-test.eng-nld.eng.nld | 37.6 | 0.598 | | Tatoeba-test.eng-nog.eng.nog | 0.2 | 0.010 | | Tatoeba-test.eng-non.eng.non | 0.2 | 0.096 | | Tatoeba-test.eng-nor.eng.nor | 36.3 | 0.577 | | Tatoeba-test.eng-nov.eng.nov | 0.9 | 0.180 | | Tatoeba-test.eng-nya.eng.nya | 9.8 | 0.524 | | Tatoeba-test.eng-oci.eng.oci | 6.3 | 0.288 | | Tatoeba-test.eng-ori.eng.ori | 5.3 | 0.273 | | Tatoeba-test.eng-orv.eng.orv | 0.2 | 0.007 | | Tatoeba-test.eng-oss.eng.oss | 3.0 | 0.230 | | Tatoeba-test.eng-ota.eng.ota | 0.2 | 0.053 | | Tatoeba-test.eng-pag.eng.pag | 20.2 | 0.513 | | Tatoeba-test.eng-pan.eng.pan | 6.4 | 0.301 | | Tatoeba-test.eng-pap.eng.pap | 44.7 | 0.624 | | Tatoeba-test.eng-pau.eng.pau | 0.8 | 0.098 | | Tatoeba-test.eng-pdc.eng.pdc | 2.9 | 0.143 | | Tatoeba-test.eng-pms.eng.pms | 0.6 | 0.124 | | Tatoeba-test.eng-pol.eng.pol | 22.7 | 0.500 | | Tatoeba-test.eng-por.eng.por | 31.6 | 0.570 | | Tatoeba-test.eng-ppl.eng.ppl | 0.5 | 0.085 | | Tatoeba-test.eng-prg.eng.prg | 0.1 | 0.078 | | Tatoeba-test.eng-pus.eng.pus | 0.9 | 0.137 | | Tatoeba-test.eng-quc.eng.quc | 2.7 | 0.255 | | Tatoeba-test.eng-qya.eng.qya | 0.4 | 0.084 | | Tatoeba-test.eng-rap.eng.rap | 1.9 | 0.050 | | Tatoeba-test.eng-rif.eng.rif | 1.3 | 0.102 | | Tatoeba-test.eng-roh.eng.roh | 1.4 | 0.169 | | Tatoeba-test.eng-rom.eng.rom | 7.8 | 0.329 | | 
Tatoeba-test.eng-ron.eng.ron | 27.0 | 0.530 | | Tatoeba-test.eng-rue.eng.rue | 0.1 | 0.009 | | Tatoeba-test.eng-run.eng.run | 9.8 | 0.434 | | Tatoeba-test.eng-rus.eng.rus | 22.2 | 0.465 | | Tatoeba-test.eng-sag.eng.sag | 4.8 | 0.155 | | Tatoeba-test.eng-sah.eng.sah | 0.2 | 0.007 | | Tatoeba-test.eng-san.eng.san | 1.7 | 0.143 | | Tatoeba-test.eng-scn.eng.scn | 1.5 | 0.083 | | Tatoeba-test.eng-sco.eng.sco | 30.3 | 0.514 | | Tatoeba-test.eng-sgs.eng.sgs | 1.6 | 0.104 | | Tatoeba-test.eng-shs.eng.shs | 0.7 | 0.049 | | Tatoeba-test.eng-shy.eng.shy | 0.6 | 0.064 | | Tatoeba-test.eng-sin.eng.sin | 5.4 | 0.317 | | Tatoeba-test.eng-sjn.eng.sjn | 0.3 | 0.074 | | Tatoeba-test.eng-slv.eng.slv | 12.8 | 0.313 | | Tatoeba-test.eng-sma.eng.sma | 0.8 | 0.063 | | Tatoeba-test.eng-sme.eng.sme | 13.2 | 0.290 | | Tatoeba-test.eng-smo.eng.smo | 12.1 | 0.416 | | Tatoeba-test.eng-sna.eng.sna | 27.1 | 0.533 | | Tatoeba-test.eng-snd.eng.snd | 6.0 | 0.359 | | Tatoeba-test.eng-som.eng.som | 16.0 | 0.274 | | Tatoeba-test.eng-spa.eng.spa | 36.7 | 0.603 | | Tatoeba-test.eng-sqi.eng.sqi | 32.3 | 0.573 | | Tatoeba-test.eng-stq.eng.stq | 0.6 | 0.198 | | Tatoeba-test.eng-sun.eng.sun | 39.0 | 0.447 | | Tatoeba-test.eng-swa.eng.swa | 1.1 | 0.109 | | Tatoeba-test.eng-swe.eng.swe | 42.7 | 0.614 | | Tatoeba-test.eng-swg.eng.swg | 0.6 | 0.118 | | Tatoeba-test.eng-tah.eng.tah | 12.4 | 0.294 | | Tatoeba-test.eng-tam.eng.tam | 5.0 | 0.404 | | Tatoeba-test.eng-tat.eng.tat | 9.9 | 0.326 | | Tatoeba-test.eng-tel.eng.tel | 4.7 | 0.326 | | Tatoeba-test.eng-tet.eng.tet | 0.7 | 0.100 | | Tatoeba-test.eng-tgk.eng.tgk | 5.5 | 0.304 | | Tatoeba-test.eng-tha.eng.tha | 2.2 | 0.456 | | Tatoeba-test.eng-tir.eng.tir | 1.5 | 0.197 | | Tatoeba-test.eng-tlh.eng.tlh | 0.0 | 0.032 | | Tatoeba-test.eng-tly.eng.tly | 0.3 | 0.061 | | Tatoeba-test.eng-toi.eng.toi | 8.3 | 0.219 | | Tatoeba-test.eng-ton.eng.ton | 32.7 | 0.619 | | Tatoeba-test.eng-tpw.eng.tpw | 1.4 | 0.136 | | Tatoeba-test.eng-tso.eng.tso | 9.6 | 0.465 | | 
Tatoeba-test.eng-tuk.eng.tuk | 9.4 | 0.383 | | Tatoeba-test.eng-tur.eng.tur | 24.1 | 0.542 | | Tatoeba-test.eng-tvl.eng.tvl | 8.9 | 0.398 | | Tatoeba-test.eng-tyv.eng.tyv | 10.4 | 0.249 | | Tatoeba-test.eng-tzl.eng.tzl | 0.2 | 0.098 | | Tatoeba-test.eng-udm.eng.udm | 6.5 | 0.212 | | Tatoeba-test.eng-uig.eng.uig | 2.1 | 0.266 | | Tatoeba-test.eng-ukr.eng.ukr | 24.3 | 0.479 | | Tatoeba-test.eng-umb.eng.umb | 4.4 | 0.274 | | Tatoeba-test.eng-urd.eng.urd | 8.6 | 0.344 | | Tatoeba-test.eng-uzb.eng.uzb | 6.9 | 0.343 | | Tatoeba-test.eng-vec.eng.vec | 1.0 | 0.094 | | Tatoeba-test.eng-vie.eng.vie | 23.2 | 0.420 | | Tatoeba-test.eng-vol.eng.vol | 0.3 | 0.086 | | Tatoeba-test.eng-war.eng.war | 11.4 | 0.415 | | Tatoeba-test.eng-wln.eng.wln | 8.4 | 0.218 | | Tatoeba-test.eng-wol.eng.wol | 11.5 | 0.252 | | Tatoeba-test.eng-xal.eng.xal | 0.1 | 0.007 | | Tatoeba-test.eng-xho.eng.xho | 19.5 | 0.552 | | Tatoeba-test.eng-yid.eng.yid | 4.0 | 0.256 | | Tatoeba-test.eng-yor.eng.yor | 8.8 | 0.247 | | Tatoeba-test.eng-zho.eng.zho | 21.8 | 0.192 | | Tatoeba-test.eng-zul.eng.zul | 34.3 | 0.655 | | Tatoeba-test.eng-zza.eng.zza | 0.5 | 0.080 | ### System Info: - hf_name: eng-mul - source_languages: eng - target_languages: mul - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mul/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 
'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul'] - src_constituents: {'eng'} - tgt_constituents: {'sjn_Latn', 'cat', 'nan', 'spa', 'ile_Latn', 'pap', 'mwl', 'uzb_Latn', 'mww', 'hil', 'lij', 'avk_Latn', 'lad_Latn', 'lat_Latn', 'bos_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi_Latn', 'awa', 'swg', 'zsm_Latn', 'zho_Hant', 'gcf_Latn', 'uzb_Cyrl', 'isl', 'lfn_Latn', 'shs_Latn', 'nov_Latn', 'bho', 'ltz', 'lzh', 'kur_Latn', 'sun', 'arg', 'pes_Thaa', 'sqi', 'uig_Arab', 'csb_Latn', 'fra', 'hat', 'liv_Latn', 'non_Latn', 'sco', 'cmn_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul_Latn', 'amh', 'lfn_Cyrl', 'eus', 'fkv_Latn', 'tur', 'pus', 'afr', 'brx_Latn', 'nya', 'acm', 'ota_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho_Hans', 'tmw_Latn', 'kjh', 'ota_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh_Hans', 'ara', 'tly_Latn', 'lug', 'brx', 'bul', 'bel', 'vol_Latn', 'kat', 'gan', 'got_Goth', 'vro', 'ext', 'afh_Latn', 'gla', 'jpn', 'udm', 'mai', 'ary', 'sin', 'tvl', 'hif_Latn', 'cjy_Hant', 'bre', 'ceb', 'mah', 'nob_Hebr', 'crh_Latn', 'prg_Latn', 'khm', 'ang_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie_Hani', 'arz', 'yue', 'kha', 'san_Deva', 'jbo_Latn', 'gos', 'hau_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig_Cyrl', 'fao', 'mnw', 'zho', 'orv_Cyrl', 'iba', 'bel_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc_Grek', 'tpw_Latn', 'oci', 'mfe', 'sna', 'kir_Cyrl', 'tat_Latn', 'gom', 'ido_Latn', 'sgs', 'pau', 'tgk_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp_Latn', 'wuu', 'dtp', 'jbo_Cyrl', 'tet', 'bod', 'yue_Hans', 'zlm_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif_Latn', 'vie', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina_Latn', 'cjy_Hans', 
'jdt_Cyrl', 'gsw', 'glv', 'khm_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd_Arab', 'arq', 'mri', 'kur_Arab', 'por', 'hin', 'shy_Latn', 'sme', 'rap', 'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue_Hant', 'kpv', 'tam', 'est', 'frm_Latn', 'hoc_Latn', 'bam_Latn', 'kek_Latn', 'ksh', 'tlh_Latn', 'ltg', 'pan_Guru', 'hnj_Latn', 'cor', 'gle', 'swe', 'lin', 'qya_Latn', 'kum', 'mad', 'cmn_Hant', 'fuv', 'nau', 'mon', 'akl_Latn', 'guj', 'kaz_Latn', 'wln', 'tuk_Latn', 'jav_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws_Latn', 'urd', 'stq', 'tat_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld_Latn', 'tzl_Latn', 'mdf', 'ike_Latn', 'ces', 'ldn_Latn', 'egl', 'heb', 'vec', 'zul', 'max_Latn', 'pes_Latn', 'yid', 'mal', 'nds'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mul/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mul/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: mul - short_pair: en-mul - chrF2_score: 0.451 - bleu: 22.4 - brevity_penalty: 0.987 - ref_len: 68724.0 - src_name: English - tgt_name: Multiple languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: mul - prefer_old: False - long_pair: eng-mul - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
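The benchmark tables inside each flattened card above can be pulled back into structured rows. A minimal sketch, assuming the row shapes visible in these records (`parse_benchmarks` and its regex are illustrative helpers, not part of the dataset):

```python
import re

# Benchmark rows in the flattened cards look like:
# "| Tatoeba-test.eng-zul.eng.zul | 34.3 | 0.655 |"
ROW = re.compile(
    r"\|\s*((?:Tatoeba-test|Tatoeba|JW300)\.\S+)\s*\|\s*([\d.]+)\s*\|\s*([\d.]+)\s*\|"
)

def parse_benchmarks(card_text):
    """Return (testset, BLEU, chr-F) triples found in a model card."""
    return [(name, float(bleu), float(chrf))
            for name, bleu, chrf in ROW.findall(card_text)]

rows = parse_benchmarks("| Tatoeba-test.eng-zul.eng.zul | 34.3 | 0.655 |")
# rows == [("Tatoeba-test.eng-zul.eng.zul", 34.3, 0.655)]
```

The same pattern covers the `JW300.*` and `Tatoeba.*` testset names used by the single-pair cards further down.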
Helsinki-NLP/opus-mt-en-ng
2021-01-18T08:13:15.000Z
[ "pytorch", "marian", "seq2seq", "en", "ng", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
70
transformers
--- tags: - translation --- ### opus-mt-en-ng * source languages: en * target languages: ng * OPUS readme: [en-ng](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ng/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ng | 24.8 | 0.496 |
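Each OPUS-MT-train card links its weights, test translations, and scores under a single release prefix (e.g. `opus-2020-01-08` for en-ng). A small sketch of that URL scheme as seen in the cards above (the helper name is hypothetical; Tatoeba-Challenge cards use a different `Tatoeba-MT-models` base):

```python
def opus_model_urls(src, tgt, release):
    # Every OPUS-MT-train card follows the same object-storage layout:
    # weights in <release>.zip, test translations in <release>.test.txt,
    # evaluation scores in <release>.eval.txt.
    base = f"https://object.pouta.csc.fi/OPUS-MT-models/{src}-{tgt}/{release}"
    return {
        "weights": base + ".zip",
        "translations": base + ".test.txt",
        "scores": base + ".eval.txt",
    }

urls = opus_model_urls("en", "ng", "opus-2020-01-08")
# urls["weights"] == "https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.zip"
```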
Helsinki-NLP/opus-mt-en-nic
2021-01-18T08:13:21.000Z
[ "pytorch", "marian", "seq2seq", "en", "sn", "rw", "wo", "ig", "sg", "ee", "zu", "lg", "ts", "ln", "ny", "yo", "rn", "xh", "nic", "transformers", "translation", "license:apache-2.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "metadata.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
41
transformers
--- language: - en - sn - rw - wo - ig - sg - ee - zu - lg - ts - ln - ny - yo - rn - xh - nic tags: - translation license: apache-2.0 --- ### eng-nic * source group: English * target group: Niger-Kordofanian languages * OPUS readme: [eng-nic](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-nic/README.md) * model: transformer * source language(s): eng * target language(s): bam_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nic/opus-2020-07-27.zip) * test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nic/opus-2020-07-27.test.txt) * test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nic/opus-2020-07-27.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-bam.eng.bam | 6.2 | 0.029 | | Tatoeba-test.eng-ewe.eng.ewe | 4.5 | 0.258 | | Tatoeba-test.eng-ful.eng.ful | 0.5 | 0.073 | | Tatoeba-test.eng-ibo.eng.ibo | 3.9 | 0.267 | | Tatoeba-test.eng-kin.eng.kin | 6.4 | 0.475 | | Tatoeba-test.eng-lin.eng.lin | 1.2 | 0.308 | | Tatoeba-test.eng-lug.eng.lug | 3.9 | 0.405 | | Tatoeba-test.eng.multi | 11.1 | 0.427 | | Tatoeba-test.eng-nya.eng.nya | 14.0 | 0.622 | | Tatoeba-test.eng-run.eng.run | 13.6 | 0.477 | | Tatoeba-test.eng-sag.eng.sag | 5.5 | 0.199 | | Tatoeba-test.eng-sna.eng.sna | 19.6 | 0.557 | | Tatoeba-test.eng-swa.eng.swa | 1.8 | 0.163 | | Tatoeba-test.eng-toi.eng.toi | 8.3 | 0.231 | | Tatoeba-test.eng-tso.eng.tso | 50.0 | 0.789 | | Tatoeba-test.eng-umb.eng.umb | 7.8 | 0.342 | | Tatoeba-test.eng-wol.eng.wol | 6.7 | 0.143 | | Tatoeba-test.eng-xho.eng.xho | 26.4 | 0.620 | | 
Tatoeba-test.eng-yor.eng.yor | 15.5 | 0.342 | | Tatoeba-test.eng-zul.eng.zul | 35.9 | 0.750 | ### System Info: - hf_name: eng-nic - source_languages: eng - target_languages: nic - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-nic/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'nic'] - src_constituents: {'eng'} - tgt_constituents: {'bam_Latn', 'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nic/opus-2020-07-27.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nic/opus-2020-07-27.test.txt - src_alpha3: eng - tgt_alpha3: nic - short_pair: en-nic - chrF2_score: 0.42700000000000005 - bleu: 11.1 - brevity_penalty: 1.0 - ref_len: 10625.0 - src_name: English - tgt_name: Niger-Kordofanian languages - train_date: 2020-07-27 - src_alpha2: en - tgt_alpha2: nic - prefer_old: False - long_pair: eng-nic - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
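The eng-nic card above notes that a sentence-initial language token of the form `>>id<<` is required for multilingual target models. A minimal sketch of preparing such input (the helper is illustrative; any valid target-language ID listed in the card, e.g. `zul` or `xho`, can be substituted):

```python
def add_target_token(text, lang_id):
    # Multi-target OPUS-MT models route translation via a
    # sentence-initial token such as ">>zul<<" or ">>xho<<".
    return f">>{lang_id}<< {text}"

src = add_target_token("Good morning.", "zul")
# src == ">>zul<< Good morning."
```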
Helsinki-NLP/opus-mt-en-niu
2021-01-18T08:13:26.000Z
[ "pytorch", "marian", "seq2seq", "en", "niu", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
66
transformers
--- tags: - translation --- ### opus-mt-en-niu * source languages: en * target languages: niu * OPUS readme: [en-niu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-niu/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-niu/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-niu/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-niu/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.niu | 53.0 | 0.698 |
Helsinki-NLP/opus-mt-en-nl
2021-02-21T08:27:30.000Z
[ "pytorch", "rust", "marian", "seq2seq", "en", "nl", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "rust_model.ot", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
2,946
transformers
--- tags: - translation --- ### opus-mt-en-nl * source languages: en * target languages: nl * OPUS readme: [en-nl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-nl/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-04.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-nl/opus-2019-12-04.zip) * test set translations: [opus-2019-12-04.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nl/opus-2019-12-04.test.txt) * test set scores: [opus-2019-12-04.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nl/opus-2019-12-04.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.nl | 57.1 | 0.730 |
Helsinki-NLP/opus-mt-en-nso
2021-01-18T08:13:38.000Z
[ "pytorch", "marian", "seq2seq", "en", "nso", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
60
transformers
--- tags: - translation --- ### opus-mt-en-nso * source languages: en * target languages: nso * OPUS readme: [en-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-nso/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-nso/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nso/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nso/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.nso | 52.2 | 0.684 |
Helsinki-NLP/opus-mt-en-ny
2021-01-18T08:13:45.000Z
[ "pytorch", "marian", "seq2seq", "en", "ny", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
42
transformers
--- tags: - translation --- ### opus-mt-en-ny * source languages: en * target languages: ny * OPUS readme: [en-ny](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ny/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ny/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ny/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ny/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ny | 31.4 | 0.570 | | Tatoeba.en.ny | 26.8 | 0.645 |
Helsinki-NLP/opus-mt-en-nyk
2021-01-18T08:13:50.000Z
[ "pytorch", "marian", "seq2seq", "en", "nyk", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
68
transformers
--- tags: - translation --- ### opus-mt-en-nyk * source languages: en * target languages: nyk * OPUS readme: [en-nyk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-nyk/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-nyk/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nyk/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nyk/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.nyk | 26.6 | 0.511 |
Helsinki-NLP/opus-mt-en-om
2021-01-18T08:14:01.000Z
[ "pytorch", "marian", "seq2seq", "en", "om", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
44
transformers
--- tags: - translation --- ### opus-mt-en-om * source languages: en * target languages: om * OPUS readme: [en-om](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-om/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-om/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-om/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-om/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.om | 21.8 | 0.498 |
Helsinki-NLP/opus-mt-en-pag
2021-01-18T08:14:07.000Z
[ "pytorch", "marian", "seq2seq", "en", "pag", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "source.spm", "target.spm", "tokenizer_config.json", "vocab.json" ]
Helsinki-NLP
57
transformers
--- tags: - translation --- ### opus-mt-en-pag * source languages: en * target languages: pag * OPUS readme: [en-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-pag/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.pag | 37.9 | 0.598 |