leaderboard-pr-bot committed
Commit 9656ba3
Parent: 6473f99

Adding Evaluation Results

This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -53,4 +53,17 @@ The model will automatically emit an end-of-text token (`</s>`) when it judges t
 
 The intended use-case for this model is fictional conversation for entertainment purposes. Any other sort of usage is out of scope.
 
-As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
+As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Neko-Institute-of-Science__pygmalion-7b)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 40.29 |
+| ARC (25-shot)         | 51.37 |
+| HellaSwag (10-shot)   | 77.81 |
+| MMLU (5-shot)         | 35.68 |
+| TruthfulQA (0-shot)   | 34.54 |
+| Winogrande (5-shot)   | 72.22 |
+| GSM8K (5-shot)        | 4.62  |
+| DROP (3-shot)         | 5.79  |