
relu_llama_7b_hf_fp16_refined_web_relu_2024-03-27

This model is a fine-tuned version of meta-llama/Llama-2-7b-hf; the training dataset is not specified in the card metadata (the model name suggests RefinedWeb). It achieves the following results on the evaluation set:

  • Loss: 3.6852
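
A minimal loading sketch, assuming the repo id shown in this card's model tree and that `trust_remote_code=True` is required (the repo is reported to contain custom code; see the note near the end of this card):

```python
# Sketch only: load the fine-tuned checkpoint with Transformers.
# Assumptions: repo id taken from this card's model tree; trust_remote_code=True
# because the repo reportedly contains custom code; FP16 weights per the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "thrunlab/relu_llama_7b_hf_fp16_refined_web_relu_2024-03-27"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,   # checkpoint is stored in FP16
    device_map="auto",           # requires the `accelerate` package
    trust_remote_code=True,      # repo contains custom code
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```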

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 1e-05
  • train_batch_size: 1
  • eval_batch_size: 2
  • seed: 0
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • total_eval_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 200
  • mixed_precision_training: Native AMP
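
As a rough guide, the list above could map onto `transformers.TrainingArguments` as follows. This is a sketch, not the authors' training script; the output directory is a hypothetical placeholder, and the 4-GPU distributed launch (e.g. via torchrun) is what turns a per-device batch size of 1 with 8 accumulation steps into the total train batch size of 32.

```python
# Sketch only: maps the listed hyperparameters onto TrainingArguments.
# Launched across 4 GPUs: 1 per-device batch x 4 devices x 8 accumulation = 32.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="relu_llama_7b_hf_fp16_refined_web_relu",  # hypothetical path
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=2,   # x 4 devices = total eval batch size 8
    seed=0,
    gradient_accumulation_steps=8,
    max_steps=200,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                      # Native AMP mixed precision
)
```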

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 10.1534       | 0.01  | 25   | 9.9183          |
| 8.7138        | 0.02  | 50   | 8.3260          |
| 7.3744        | 0.02  | 75   | 7.3115          |
| 6.2344        | 0.03  | 100  | 6.1079          |
| 5.5305        | 0.04  | 125  | 5.1969          |
| 4.5244        | 0.05  | 150  | 4.5551          |
| 4.0661        | 0.06  | 175  | 4.1037          |
| 3.8614        | 0.06  | 200  | 3.7818          |
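
Since the card reports raw losses, a small sketch for converting them to perplexities may be useful; it assumes the losses are mean natural-log cross-entropy (the Transformers default for causal LM training). On that assumption, the final eval loss of 3.6852 corresponds to a perplexity of roughly 39.9.

```python
# Sketch: convert the logged validation losses (table above) to perplexities.
# Assumes mean natural-log cross-entropy loss, so perplexity = exp(loss).
import math

val_losses = {25: 9.9183, 50: 8.3260, 75: 7.3115, 100: 6.1079,
              125: 5.1969, 150: 4.5551, 175: 4.1037, 200: 3.7818}

for step, loss in val_losses.items():
    print(f"step {step:3d}: loss {loss:.4f} -> perplexity {math.exp(loss):.1f}")
```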

Framework versions

  • Transformers 4.40.0.dev0
  • PyTorch 2.1.1+cu121
  • Datasets 2.15.0
  • Tokenizers 0.15.2
Model weights are stored as Safetensors in FP16 (6.74B parameters).
Note: the Inference API (serverless) does not yet support model repos that contain custom code; load the model locally instead (see the loading sketch above).
