leaderboard-pr-bot committed on
Commit 7822444
1 Parent(s): 319b742

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +121 -4
README.md CHANGED
@@ -1,6 +1,10 @@
 ---
+language:
+- en
+license: other
 tags:
 - chatml
+base_model: google/gemma-2b
 datasets:
 - HuggingFaceH4/ultrachat_200k
 - teknium/OpenHermes-2.5
@@ -11,13 +15,126 @@ datasets:
 - m-a-p/Code-Feedback
 - ise-uiuc/Magicoder-Evol-Instruct-110K
 - ise-uiuc/Magicoder-OSS-Instruct-75K
-language:
-- en
-base_model: google/gemma-2b
-license: other
 license_name: gemma-terms-of-use
 license_link: https://ai.google.dev/gemma/terms
+model-index:
+- name: new_model_test
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 52.56
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pansophic/new_model_test
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 73.65
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pansophic/new_model_test
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 46.02
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pansophic/new_model_test
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 51.25
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pansophic/new_model_test
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 66.38
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pansophic/new_model_test
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 37.91
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=pansophic/new_model_test
+      name: Open LLM Leaderboard
 ---
 
 
 This is just a preview model. It is a finetuned gemma-2b with added chatml tokens.
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pansophic__new_model_test)
+
+|             Metric              |Value|
+|---------------------------------|----:|
+|Avg.                             |54.63|
+|AI2 Reasoning Challenge (25-Shot)|52.56|
+|HellaSwag (10-Shot)              |73.65|
+|MMLU (5-Shot)                    |46.02|
+|TruthfulQA (0-shot)              |51.25|
+|Winogrande (5-shot)              |66.38|
+|GSM8k (5-shot)                   |37.91|
+
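For completeness, below is a minimal sketch of how the `model-index` metadata added by this PR can be read back programmatically once merged. It uses the `huggingface_hub` library's `ModelCard` helper and the `pansophic/new_model_test` repo id taken from the leaderboard URLs above; treat it as an illustration of the YAML structure in the diff, not as part of this PR.

```python
from huggingface_hub import ModelCard

# Load the model card for the repo this PR targets (repo id taken from the
# leaderboard query URLs above) and convert its YAML front matter, including
# the model-index block, into a plain dict.
card = ModelCard.load("pansophic/new_model_test")
metadata = card.data.to_dict()

# Walk the model-index structure exactly as it appears in the diff:
# one entry per model, each holding a list of per-benchmark results.
for entry in metadata.get("model-index", []):
    print(f"Model: {entry['name']}")
    for result in entry.get("results", []):
        dataset = result["dataset"]["name"]  # e.g. "MMLU (5-Shot)"
        for metric in result.get("metrics", []):
            print(f"  {dataset}: {metric['type']} = {metric['value']}")
```

Run after the PR is merged, this should print the same six benchmark scores that appear in the summary table above.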