ENERGY-DRINK-LOVE/komt_DPOv3

Our Team

  • Youjin Chung
  • Jingyeom Kim

Model

Base Model

Hardware and Software

  • Hardware: 8× NVIDIA A100 GPUs for training our model
  • Software: DeepSpeed library and the Hugging Face TRL trainer (see the sketch below)
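As a rough illustration of how DeepSpeed is typically wired into the Hugging Face trainer stack, the snippet below passes an inline ZeRO-2 config to `TrainingArguments`. The config and hyperparameters are placeholders; the card does not publish the actual launch configuration.

```python
from transformers import TrainingArguments

# Illustrative only: the actual DeepSpeed config and hyperparameters used for
# this model are not published in the card.
training_args = TrainingArguments(
    output_dir="komt_DPOv3-checkpoints",
    per_device_train_batch_size=1,      # with 8x A100, effective batch = 8 * this * grad. accumulation
    gradient_accumulation_steps=8,
    bf16=True,                          # the released weights are BF16
    logging_steps=10,
    deepspeed={                         # minimal ZeRO-2 config passed inline as a dict
        "zero_optimization": {"stage": 2},
        "bf16": {"enabled": True},
        "train_micro_batch_size_per_gpu": "auto",
        "gradient_accumulation_steps": "auto",
    },
)
```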

Dataset

  • DPO_dataset
    • ์ž์ฒด ์ œ์ž‘ dpo dataset(AI-hub dataset ํ™œ์šฉ)
    • OpenOrca DPO ๋“ฑ ์˜์–ด ๋ฐ์ดํ„ฐ์…‹ ๋ฒˆ์—ญ(ENERGY-DRINK-LOVE/translate_share_gpt_dedup_llama_SFT_1024, ์ž์ฒด๋ชจ๋ธ ํ™œ์šฉ)
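Since these are preference pairs for DPO, each example presumably maps a prompt to a preferred (chosen) and a dispreferred (rejected) response, which is the layout TRL's `DPOTrainer` consumes. A minimal sketch; the rows below are made up for illustration and are not taken from the actual dataset.

```python
from datasets import Dataset

# Hypothetical rows illustrating the prompt/chosen/rejected layout used for
# DPO preference data; not actual examples from the dataset described above.
preference_rows = [
    {
        "prompt": "๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์ธ๊ฐ€์š”?",   # "What is the capital of South Korea?"
        "chosen": "๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์„œ์šธ์ž…๋‹ˆ๋‹ค.",    # preferred answer
        "rejected": "์ž˜ ๋ชจ๋ฅด๊ฒ ์Šต๋‹ˆ๋‹ค.",               # dispreferred answer
    },
]

dpo_dataset = Dataset.from_list(preference_rows)
print(dpo_dataset)  # Dataset({features: ['prompt', 'chosen', 'rejected'], num_rows: 1})
```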

Training Method
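Given the model name and the preference data above, training presumably used Direct Preference Optimization (DPO) with TRL's `DPOTrainer` on top of the DeepSpeed setup described earlier. A minimal sketch assuming the TRL 0.7-era API; the base model ID, `beta`, and the other hyperparameters are placeholders rather than the authors' settings.

```python
import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# Placeholder: the card does not state which base checkpoint was fine-tuned.
base_model_id = "your-org/your-base-model"

model = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.bfloat16)
ref_model = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Tiny stand-in for the real preference dataset described in the Dataset section.
train_dataset = Dataset.from_list([
    {"prompt": "์•ˆ๋…•ํ•˜์„ธ์š”?", "chosen": "์•ˆ๋…•ํ•˜์„ธ์š”! ๋ฌด์—‡์„ ๋„์™€๋“œ๋ฆด๊นŒ์š”?", "rejected": "๋ชฐ๋ผ์š”."},
])

training_args = TrainingArguments(
    output_dir="komt_DPOv3-dpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    bf16=True,
)

# beta and the length limits are illustrative values, not the authors' settings.
trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    beta=0.1,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```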

Benchmark

Ko LM Eval Harness

Ko-LLM-Leaderboard

  • (4th place as of March 16, 2024)
| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
|---------|--------|--------------|---------|---------------|-----------------|
| 61.20   | 57.51  | 70.33        | 53.34   | 68.49         | 56.32           |
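For local use, the checkpoint can be loaded with the standard transformers API. A minimal sketch; the prompt and generation settings are illustrative, and it assumes the repository ships a regular BF16 causal-LM checkpoint and tokenizer.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ENERGY-DRINK-LOVE/komt_DPOv3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the released weights are BF16
    device_map="auto",
)

# Illustrative prompt; the card does not document a specific prompt template.
prompt = "ํ•œ๊ตญ์˜ ์‚ฌ๊ณ„์ ˆ์— ๋Œ€ํ•ด ์„ค๋ช…ํ•ด ์ฃผ์„ธ์š”."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```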