Gkunsch committed on
Commit
230022c
1 Parent(s): 7f04837

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -153,7 +153,7 @@ print(tokenizer.decode(outputs[0]))
 
  ## Training Data
 
- Falcon-Mamba has been trained with ~ 5,500 GT mainly coming from [Refined-Web](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a large volume web-only dataset filtered and deduplicated.
+ Falcon-Mamba has been trained with ~ 6,000 GT mainly coming from [Refined-Web](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a large volume web-only dataset filtered and deduplicated.
  Similar to the others [Falcon](https://huggingface.co/tiiuae/falcon-11B) suite models, Falcon-Mamba has been trained leveraging a multi-stage training strategy to increase the context-length from 2,048 to 8,192.
  Moreover, inspired by the concept of Curriculum Learning, we carefully selected data mixtures throughout the training stages, considering both data diversity and complexity.
  Note that at inference the context-length is not relevant as the Mamba architecture has no limit on long range dependency.
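The hunk header above anchors this change just below the README's generation example (`print(tokenizer.decode(outputs[0]))`). For context, a minimal sketch of that usage, assuming the published checkpoint id `tiiuae/falcon-mamba-7b` (not stated in this diff), illustrates the note that inference is not bound by the 8,192-token training context:

```python
# Minimal sketch, assuming the checkpoint id "tiiuae/falcon-mamba-7b"
# (adjust to the actual repository name if it differs).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-mamba-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The Mamba state-space architecture keeps a fixed-size recurrent state,
# so generation length is not capped by the 8,192-token training context.
inputs = tokenizer("Falcon-Mamba was trained on", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```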