---
base_model:
  - davidkim205/nox-solar-10.7b-v2
  - chihoonlee10/T3Q-ko-solar-dpo-v6.0
library_name: transformers
tags:
  - mergekit
  - merge
---

# model_storage

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the SLERP merge method, with chihoonlee10/T3Q-ko-solar-dpo-v6.0 as the base model.
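
SLERP interpolates each pair of weight tensors along the great circle between them rather than linearly; the `t` values in the configuration below control how far each layer group leans toward either parent. Below is a minimal NumPy sketch of the idea, for illustration only and not mergekit's actual implementation (which handles per-layer schedules and numerical edge cases):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors at ratio t."""
    v0_flat, v1_flat = v0.ravel(), v1.ravel()
    v0_unit = v0_flat / (np.linalg.norm(v0_flat) + eps)
    v1_unit = v1_flat / (np.linalg.norm(v1_flat) + eps)
    dot = np.clip(np.dot(v0_unit, v1_unit), -1.0, 1.0)
    omega = np.arccos(dot)            # angle between the two weight directions
    if omega < eps:                   # nearly parallel: fall back to linear interpolation
        return ((1.0 - t) * v0_flat + t * v1_flat).reshape(v0.shape)
    so = np.sin(omega)
    out = (np.sin((1.0 - t) * omega) / so) * v0_flat + (np.sin(t * omega) / so) * v1_flat
    return out.reshape(v0.shape)
```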

### Models Merged

The following models were included in the merge:

* davidkim205/nox-solar-10.7b-v2
* chihoonlee10/T3Q-ko-solar-dpo-v6.0

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model:
  model:
    path: chihoonlee10/T3Q-ko-solar-dpo-v6.0
dtype: float16
merge_method: slerp
parameters:
  t:
  - filter: self_attn
    value: [0.0, 0.5, 0.3, 0.7, 1.0]
  - filter: mlp
    value: [1.0, 0.5, 0.7, 0.3, 0.0]
  - value: 0.5
slices:
- sources:
  - layer_range: [0, 47]
    model:
      model:
        path: chihoonlee10/T3Q-ko-solar-dpo-v6.0
  - layer_range: [0, 47]
    model:
      model:
        path: davidkim205/nox-solar-10.7b-v2
```
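
A configuration like the one above is typically run through mergekit's CLI (e.g. `mergekit-yaml config.yaml ./model_storage`), and the resulting checkpoint loads like any transformers causal LM. A minimal loading sketch, assuming the merged weights live in a local `./model_storage` directory (substitute the Hub repo id if loading from the Hugging Face Hub):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./model_storage"  # assumed local merge output directory; replace as needed
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # matches the dtype used in the merge config
    device_map="auto",
)

prompt = "Explain the SLERP merge method in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```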

## Evaluation Results

| Model | Average | HellaSwag | COPA | BoolQ |
|---|---|---|---|---|
| KoGPT | 58.2 | 55.9 | 73.5 | 45.1 |
| Polyglot-ko-13B | 62.4 | 59.5 | 79.4 | 48.2 |
| LLaMA 2-13B | 45.2 | 41.3 | 59.3 | 34.9 |
| Baichuan 2-13B | 52.7 | 39.2 | 60.6 | 58.4 |
| QWEN-14B | 47.8 | 45.3 | 64.9 | 33.4 |
| Orion-14B-Chat | 68.8 | 47.0 | 77.7 | 81.6 |
| Ocelot-ko-10.8B | 72.5 | 50.0 | 75.8 | 91.7 |
| Model | Writing | Comprehension | Grammar |
|---|---|---|---|
| gpt-3.5-turbo-0125 | 8.78 | 9.57 | 6.50 |
| HyperClovaX | 8.50 | 9.50 | 8.50 |
| allganize/Llama-3-Alpha-Ko-8B-Instruct | 8.50 | 8.35 | 4.92 |
| Synatra-kiqu-7B | 4.42 | 5.71 | 4.50 |
| Ocelot-ko-10.8B | 8.57 | 7.00 | 6.57 |