ahmetustun viraat committed on
Commit 00fbf39
1 Parent(s): 0cfc95d

Fix broken links (#4)


- Fix broken links (3d472cc794aaf95e710e5b24fecdedae32f3abe3)


Co-authored-by: Viraat Aryabumi <viraat@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -125,13 +125,13 @@ metrics:
 > We release the checkpoints under a Apache-2.0 license to further our mission of multilingual technologies empowering a
 > multilingual world.

- - **Developed by:** Cohere For AI
+ - **Developed by:** [Cohere For AI]((https://cohere.for.ai))
 - **Model type:** a Transformer style autoregressive massively multilingual language model.
 - **Paper**: [Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model](https://arxiv.org/abs/2402.07827)
- - **Point of Contact**: Cohere For AI: [cohere.for.ai](cohere.for.ai)
+ - **Point of Contact**: Cohere For AI: [cohere.for.ai](https://cohere.for.ai)
 - **Languages**: Refer to the list of languages in the `language` section of this model card.
 - **License**: Apache-2.0
- - **Model**: [Aya](https://huggingface.co/CohereForAI/aya)
+ - **Model**: [Aya-101](https://huggingface.co/CohereForAI/aya-101)
 - **Model Size**: 13 billion parameters
 - **Datasets**: [xP3x](https://huggingface.co/datasets/CohereForAI/xP3x), [Aya Dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset), [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection), [DataProvenance collection](https://huggingface.co/datasets/DataProvenanceInitiative/Commercially-Verified-Licenses), ShareGPT-Command.

@@ -180,7 +180,7 @@ The Aya model is trained on the following datasets:
 - [DataProvenance collection](https://huggingface.co/datasets/DataProvenanceInitiative/Commercially-Verified-Licenses)
 - ShareGPT-Command

- All datasets are subset to the 101 languages supported by [mT5]. See the [paper](https://arxiv.org/abs/2402.07827) for details about filtering and pruning.
+ All datasets are subset to the 101 languages supported by [mT5](https://huggingface.co/google/mt5-xxl). See the [paper](https://arxiv.org/abs/2402.07827) for details about filtering and pruning.

 ## Evaluation
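
For reference, the corrected **Model** link above points at the [CohereForAI/aya-101](https://huggingface.co/CohereForAI/aya-101) checkpoint. A minimal loading sketch with 🤗 Transformers, assuming the checkpoint exposes the standard mT5-style seq2seq interface (the prompt text and generation length below are illustrative, not taken from the model card):

```python
# Minimal sketch: load the Aya-101 checkpoint referenced by the fixed link.
# Assumes an mT5-style seq2seq interface; prompt and max_new_tokens are illustrative.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "CohereForAI/aya-101"  # model id from the corrected link above

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Illustrative multilingual prompt (Turkish: "How is the weather today?")
inputs = tokenizer("Bugün hava nasıl?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```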