---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
  - generated_from_trainer
model-index:
  - name: >-
      home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/output_llama3_8b_gpt_4o_ru
    results: []
datasets:
  - ruslandev/tagengo-rus-gpt-4o
---

# Llama-3 8B GPT-4o-RU1.0

[Dataset](https://huggingface.co/datasets/ruslandev/tagengo-rus-gpt-4o)

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). The idea behind this model is to train on a dataset derived from a smaller subset of tagengo-gpt4, but with improved data quality. I tried to achieve higher data quality by prompting GPT-4o, OpenAI's latest LLM with better multilingual capabilities. The training objective is primarily focused on the Russian language (80% of the training examples). The model shows promising results on the MT-Bench evaluation benchmark, surpassing GPT-3.5-turbo and scoring on par with Suzume in Russian, even though the latter was trained on an 8x larger and more diverse dataset.
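As a quick usage reference, here is a minimal inference sketch using the `transformers` chat-template API; it assumes the model inherits the standard Llama-3 chat template from the base Instruct model (the training config below uses the `llama-3` conversation format):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ruslandev/llama-3-8b-gpt-4o-ru1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# The model is tuned primarily for Russian, so a Russian prompt is a fair test.
messages = [{"role": "user", "content": "Расскажи кратко о себе."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```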

## Evaluation scores

I achieved the following scores on Ru/En MT-Bench:

|            | meta-llama/Meta-Llama-3-8B-Instruct | ruslandev/llama-3-8b-gpt-4o-ru1.0 | lightblue/suzume-llama-3-8B-multilingual | Nexusflow/Starling-LM-7B-beta | gpt-3.5-turbo |
|:-----------|:-----------------------------------:|:---------------------------------:|:----------------------------------------:|:-----------------------------:|:-------------:|
| Russian 🇷🇺 | NaN                                 | 8.12                               | 8.19                                      | 8.06                          | 7.94          |
| English 🇺🇸 | 7.98                                | 8.01                               | 7.73                                      | 7.92                          | 8.26          |

## Training procedure

Built with Axolotl

See axolotl config

axolotl version: `0.4.1`

```yaml
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer  # PreTrainedTokenizerFast

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: ruslandev/tagengo-rus-gpt-4o
    type: sharegpt
    conversation: llama-3
dataset_prepared_path: /home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/prepared_tagengo_rus
val_set_size: 0.01
output_dir: /home/ubuntu/llm_training/axolotl/llama3-8b-gpt-4o-ru/output_llama3_8b_gpt_4o_ru

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false

use_wandb: false
#wandb_project: axolotl
#wandb_entity: wandb_entity
#wandb_name: llama_3_8b_gpt_4o_ru

gradient_accumulation_steps: 2
micro_batch_size: 2
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 1e-5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 5
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed: /home/ubuntu/axolotl/deepspeed_configs/zero2.json
weight_decay: 0.0
special_tokens:
  pad_token: <|end_of_text|>
```
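For context, here is a small sketch of inspecting the training data referenced in the config. The `conversations` field name is an assumption based on the typical ShareGPT layout implied by `type: sharegpt` above:

```python
from datasets import load_dataset

# Load the dataset named under `datasets:` in the axolotl config.
ds = load_dataset("ruslandev/tagengo-rus-gpt-4o", split="train")

# ShareGPT-style records usually hold a dialogue as a list of
# {"from": "human"/"gpt", "value": ...} turns; the exact field
# name here is an assumption.
print(ds[0]["conversations"])
```

With axolotl 0.4.1 installed, training from a config like this is typically launched with `accelerate launch -m axolotl.cli.train config.yaml`.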

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
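These batch sizes follow from the config above: total_train_batch_size = micro_batch_size (2) × gradient_accumulation_steps (2) × num_devices (2) = 8, while total_eval_batch_size = eval_batch_size (2) × num_devices (2) = 4, since gradient accumulation does not apply during evaluation.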

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1347        | 0.016 | 1    | 1.1086          |
| 0.916         | 0.208 | 13   | 0.8883          |
| 0.8494        | 0.416 | 26   | 0.8072          |
| 0.8657        | 0.624 | 39   | 0.7814          |
| 0.8077        | 0.832 | 52   | 0.7702          |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.2.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1