TinyLlama-1.1B-2.5T-chat
This model was created by starting from TinyLlama-1.1B-2.5T-chat and fine-tuning it on a llama dataset. We have attached the wandb report in PDF form so the training run can be viewed at a glance.
Reason
This model was fine-tuned to improve its ability to follow directions, and it serves as a stepping stone to further training.
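A minimal inference sketch using the `transformers` library follows. The repo id below is a placeholder (substitute this repository's full id), and it assumes the tokenizer ships a chat template; adjust both for your setup.

```python
# Minimal sketch: chatting with the model via transformers.
# Assumptions: "TinyLlama-1.1B-2.5T-chat" is a placeholder repo id,
# and the tokenizer defines a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama-1.1B-2.5T-chat"  # placeholder: use this repo's full id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 1.1B model fits on most GPUs in fp16
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain what a stepping-stone model is."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```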
Referrals
RunPod - This is what I use to train the models on Hugging Face. If you use it, we both get free credits. - Visit RunPod's Website!
PayPal - If you want to leave a tip, it is appreciated. - Visit My PayPal!
Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Benchmark | Metric | Split | Value (%) |
|---|---|---|---|
| Avg. | | | 36.93 |
| AI2 Reasoning Challenge (25-shot) | normalized accuracy | test | 34.47 |
| HellaSwag (10-shot) | normalized accuracy | validation | 59.71 |
| MMLU (5-shot) | accuracy | test | 26.45 |
| TruthfulQA (0-shot) | mc2 | validation | 38.80 |
| Winogrande (5-shot) | accuracy | validation | 61.01 |
| GSM8k (5-shot) | accuracy | test | 1.14 |
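These scores come from EleutherAI's lm-evaluation-harness, which powers the Open LLM Leaderboard. Below is a minimal sketch of re-running one benchmark locally, assuming the harness's v0.4+ Python API (`pip install lm-eval`); the repo id is again a placeholder.

```python
# Hedged sketch: re-running the ARC benchmark with lm-evaluation-harness.
# Assumes the v0.4+ API; "TinyLlama-1.1B-2.5T-chat" is a placeholder repo id.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                # Hugging Face transformers backend
    model_args="pretrained=TinyLlama-1.1B-2.5T-chat,dtype=float16",
    tasks=["arc_challenge"],                   # AI2 Reasoning Challenge
    num_fewshot=25,                            # matches the 25-shot setting above
    batch_size=8,
)
# The leaderboard reports normalized accuracy (acc_norm); 34.47% above.
print(results["results"]["arc_challenge"])
```

Exact numbers may differ slightly from the leaderboard depending on harness version, dtype, and batch size.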