
Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!

We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing

✨ Finetune for Free

All notebooks are beginner-friendly! Add your dataset, click "Run All", and you'll get a 2x-faster finetuned model that can be exported to GGUF or vLLM, or uploaded to Hugging Face.
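
As a companion to the notebooks, here is a minimal local sketch of the same workflow using Unsloth's `FastLanguageModel` together with TRL's `SFTTrainer`. The dataset, hyperparameters, and export call below are illustrative assumptions; argument names can vary across Unsloth/TRL versions, and the linked Colab notebook remains the canonical recipe.

```python
# Minimal QLoRA finetuning sketch with Unsloth (illustrative only; see the
# Colab notebook above for the exact, up-to-date recipe).
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the base model in 4-bit so it fits on a free Tesla T4.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="rombodawg/Meta-Llama-3.1-8B-Instruct-reuploaded",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Example dataset (an assumption): collapse alpaca-style fields into one
# "text" column, which is what SFTTrainer consumes below.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")

def to_text(example):
    return {"text": example["instruction"] + "\n" + example["input"]
                    + "\n" + example["output"] + tokenizer.eos_token}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Export for llama.cpp-style runtimes; push_to_hub variants also exist
# (method name and arguments follow recent Unsloth releases and may differ).
model.save_pretrained_gguf("model_gguf", tokenizer, quantization_method="q4_k_m")
```

After training, the LoRA adapters can be merged into the base weights or exported as shown; the GGUF export at the end targets llama.cpp-compatible runtimes, while merged 16-bit weights can be served with vLLM or pushed to Hugging Face.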

| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Llama-3 8b | ▶️ Start on Colab | 2.4x faster | 58% less |
| Gemma 7b | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral 7b | ▶️ Start on Colab | 2.2x faster | 62% less |
| Llama-2 7b | ▶️ Start on Colab | 2.2x faster | 43% less |
| TinyLlama | ▶️ Start on Colab | 3.9x faster | 74% less |
| CodeLlama 34b A100 | ▶️ Start on Colab | 1.9x faster | 27% less |
| Mistral 7b 1xT4 | ▶️ Start on Kaggle | 5x faster* | 62% less |
| DPO - Zephyr | ▶️ Start on Colab | 1.9x faster | 19% less |

Model size: 8.03B params (Safetensors) · Tensor type: BF16
