Update README.md
README.md CHANGED
@@ -4,12 +4,31 @@ base_model: meta-llama/Meta-Llama-3-8B-Instruct
 tags:
 - generated_from_trainer
 model-index:
-- name: home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/output_llama3_8b_gpt_4o_ru
+- name: >-
+    home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/output_llama3_8b_gpt_4o_ru
   results: []
+datasets:
+- ruslandev/tagengo-rus-gpt-4o
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
+# Llama-3 8B GPT-4o-RU-1.0
+
+[[Dataset]](https://huggingface.co/datasets/ruslandev/tagengo-rus-gpt-4o)
+
+This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
+The idea behind this model is to train on a dataset derived from a smaller subset of the [tagengo-gpt4](https://huggingface.co/datasets/lightblue/tagengo-gpt4) dataset, but with improved data quality.
+I tried to achieve higher data quality by prompting GPT-4o, OpenAI's latest LLM, which has stronger multilingual capabilities. The training objective focuses primarily on the Russian language (80% of the training examples).
+The model shows promising results on the MT-Bench evaluation benchmark, surpassing GPT-3.5 and scoring on par with [Suzume](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) in Russian,
+even though the latter was trained on an 8x bigger and more diverse dataset.
+
+## Evaluation scores
+
+|            | meta-llama/Meta-Llama-3-8B-Instruct | ruslandev/llama-3-8b-gpt-4o-ru1.0 | lightblue/suzume-llama-3-8B-multilingual | Nexusflow/Starling-LM-7B-beta | gpt-3.5-turbo |
+|:----------:|:-----------------------------------:|:---------------------------------:|:----------------------------------------:|:-----------------------------:|:-------------:|
+| Russian 🇷🇺 | NaN                                 | 8.12                              | 8.19                                     | 8.06                          | 7.94          |
+| English 🇺🇸 | 7.98                                | 8.01                              | 7.73                                     | 7.92                          | 8.26          |
+
+## Training procedure
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
 <details><summary>See axolotl config</summary>
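The key front-matter change in this hunk wraps the long `model-index` name in a YAML folded block scalar (`>-`) so the path can be line-wrapped without changing its value. A quick sanity check that the folded form parses back to the original single-line string (a sketch, assuming PyYAML is installed):

```python
import yaml  # PyYAML; pip install pyyaml

# ">-" is a folded block scalar with strip chomping: continuation lines are
# folded into a single space-joined string and the trailing newline is dropped.
front_matter = """
model-index:
- name: >-
    home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/output_llama3_8b_gpt_4o_ru
  results: []
"""

name = yaml.safe_load(front_matter)["model-index"][0]["name"]
assert name == "home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/output_llama3_8b_gpt_4o_ru"
print(name)
```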
@@ -78,26 +97,6 @@ special_tokens:
 
 </details><br>
 
-# home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/output_llama3_8b_gpt_4o_ru
-
-This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.7702
-
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
-
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -131,4 +130,4 @@ The following hyperparameters were used during training:
 - Transformers 4.41.1
 - Pytorch 2.2.2+cu121
 - Datasets 2.19.1
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
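Neither version of the card includes an inference snippet. A minimal usage sketch for the fine-tuned model, assuming the published repo id shown in the evaluation table (`ruslandev/llama-3-8b-gpt-4o-ru1.0`) and the stock Llama-3 chat template bundled with the tokenizer:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ruslandev/llama-3-8b-gpt-4o-ru1.0"  # repo id taken from the evaluation table

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The model is tuned primarily for Russian; the prompt means "Tell me about yourself."
messages = [{"role": "user", "content": "Расскажи о себе."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

With `device_map="auto"` the weights land on the available GPU(s); in bfloat16 the 8B model needs roughly 16 GB of memory.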
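The `datasets:` entry added to the front matter points at the training data, which can be inspected directly; a sketch assuming a standard `train` split and making no assumptions about the column schema:

```python
from datasets import load_dataset

# Pull the dataset declared in the new front matter; the split name is an assumption.
ds = load_dataset("ruslandev/tagengo-rus-gpt-4o", split="train")
print(ds)     # features and row count
print(ds[0])  # first training example
```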