Update README.md

#8
by MaziyarPanahi - opened
Files changed (1)
  1. README.md +11 -14
README.md CHANGED
@@ -131,9 +131,18 @@ This is a fine-tuned version of the `Qwen/Qwen2-7B` model. It aims to improve th
 
 All GGUF models are available here: [MaziyarPanahi/calme-2.4-qwen2-7b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.4-qwen2-7b-GGUF)
 
-# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-
-coming soon!
+# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__calme-2.4-qwen2-7b)
+
+| Metric            |Value|
+|-------------------|----:|
+|Avg.               |22.52|
+|IFEval (0-Shot)    |33.00|
+|BBH (3-Shot)       |31.82|
+|MATH Lvl 5 (4-Shot)|18.35|
+|GPQA (0-shot)      | 4.47|
+|MuSR (0-shot)      |14.43|
+|MMLU-PRO (5-shot)  |33.08|
 
 
 # Prompt Template
@@ -174,16 +183,4 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
 tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.4-qwen2-7b")
 model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.4-qwen2-7b")
 ```
-# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
-Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__calme-2.4-qwen2-7b)
-
-| Metric            |Value|
-|-------------------|----:|
-|Avg.               |22.52|
-|IFEval (0-Shot)    |33.00|
-|BBH (3-Shot)       |31.82|
-|MATH Lvl 5 (4-Shot)|18.35|
-|GPQA (0-shot)      | 4.47|
-|MuSR (0-shot)      |14.43|
-|MMLU-PRO (5-shot)  |33.08|
 
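For context, the README snippet touched by this diff only loads the tokenizer and model. Below is a minimal usage sketch, not part of this PR, showing how that snippet could be exercised end to end. It assumes `transformers` and `torch` are installed; the prompt text, `max_new_tokens` value, and device handling are illustrative choices, and the chat formatting simply relies on the chat template bundled with the tokenizer (which is what the README's Prompt Template section describes).

```python
# Minimal usage sketch (illustrative, not part of this PR): load the model as in
# the README snippet, format a prompt with the tokenizer's chat template, and generate.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "MaziyarPanahi/calme-2.4-qwen2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# apply_chat_template uses the chat template shipped with the tokenizer, so the
# prompt layout matches what the README's Prompt Template section documents.
messages = [{"role": "user", "content": "Briefly explain what a GGUF file is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```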