brittlewis12 committed on
Commit 29fb6a3
1 Parent(s): 74f0012

Create README.md

Files changed (1)
  1. README.md +91 -0
README.md ADDED
---
base_model: euclaise/Memphis-CoT-3B
datasets:
- euclaise/TinyCoT
- euclaise/reddit-instruct
- sablo/oasst2_curated
license: cc-by-sa-3.0
language:
- en
model_creator: euclaise
model_name: Memphis-CoT-3B
model_type: stablelm_epoch
inference: false
tags:
- supertrainer2000
- human-data
- stablelm_epoch
pipeline_tag: text-generation
prompt_template: |
  {{system_message}}
  ### User:
  {{prompt}}
  ### Assistant:

quantized_by: brittlewis12
---

# Memphis-CoT-3B GGUF

![](https://cdn-uploads.huggingface.co/production/uploads/64137e2150358a805203cbac/DlTWku8gant1yx6NaxqJX.png)

Original model: [Memphis-CoT-3B](https://huggingface.co/euclaise/Memphis-CoT-3B)
Model creator: [euclaise](https://huggingface.co/euclaise)

This repo contains GGUF format model files for euclaise’s Memphis-CoT-3B.

> Memphis-CoT is a finetune of StableLM 3b 4e1t on TinyCoT, along with reddit-instruct (subset to 5000 examples, excluding posts with brackets in the title) and a curated subset of oasst2.

### What is GGUF?

GGUF is a binary file format for storing and distributing AI models, introduced by the llama.cpp team on August 21st, 2023. It replaces the older GGML format, which is no longer supported by llama.cpp.
Converted using llama.cpp b2022 ([8f8ddfc](https://github.com/ggerganov/llama.cpp/commits/8f8ddfcfadc830b936318c3ea9fe2e8e3365aa85)).

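These GGUF files run anywhere llama.cpp does. Below is a minimal sketch (not part of the original model card) of downloading one quantization from this repo and querying it with llama-cpp-python; the repo id and `.gguf` filename are assumptions, so substitute whatever file from this repo fits your hardware.

```python
# Sketch: download one GGUF quantization and generate a completion with llama-cpp-python.
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumed repo id and filename; use any .gguf file actually listed in this repo.
model_path = hf_hub_download(
    repo_id="brittlewis12/Memphis-CoT-3B-GGUF",
    filename="memphis-cot-3b.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)  # StableLM 3b 4e1t context length

prompt = "You are a helpful assistant.\n### User:\nWhat is 17 * 24?\n### Assistant:\n"
out = llm(prompt, max_tokens=256, stop=["### User:"], temperature=0.2)
print(out["choices"][0]["text"])
```
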
### Prompt template:

```
{{system_message}}
### User:
{{prompt}}
### Assistant:
```

or Tiny CoT:
```
### User:
{{prompt}}
### Rationale:
[...]
### Answer:
```

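The templates above can be assembled programmatically. The helpers below are an illustrative sketch, not part of the original model card; one plausible way to use the Tiny CoT layout is to prompt up to `### Rationale:` and let the model write both its reasoning and the `### Answer:` section itself.

```python
def format_instruct(prompt: str, system_message: str = "") -> str:
    """Assemble the default instruct template shown above."""
    return f"{system_message}\n### User:\n{prompt}\n### Assistant:\n"

def format_tinycot(prompt: str) -> str:
    """Assemble the Tiny CoT template up to the rationale marker."""
    return f"### User:\n{prompt}\n### Rationale:\n"

# Usage with the `llm` object from the earlier sketch; stopping on "### User:"
# keeps the model from starting a new turn.
# out = llm(format_tinycot("What is 17 * 24?"), max_tokens=512, stop=["### User:"])
```
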
---

## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

![cnvrs.ai](https://pbs.twimg.com/profile_images/1744049151241797632/0mIP-P9e_400x400.jpg)

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [TestFlight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date

---

## Original Model Evaluations:

| Model | Size | Data | Method | GSM8K (5-shot) | AGIEval (English/Nous subset, acc_norm) | BIG Bench Hard (CoT, few-shot*) |
|:------|:-----|:-----|:-------|:---------------|:----------------------------------------|:--------------------------------|
| [StableLM 3B Base](https://hf.co/stabilityai/stablelm-3b-4e1t) | 3B | Base | Base | 2.05% | 25.14% | 36.75% |
| [StableHermes 3B](https://hf.co/cxllin/StableHermes-3b) | 3B | GPT | SFT | 3.64% | 24.31% | *37.28%* |
| [MPT 7B Instruct](https://hf.co/mosaicml/mpt-7b-instruct) | **7B** | **Human**+Anthropic | SFT | 2.05% | 24.12% | 11.01% |
| [OpenLLaMA 7B v2 open-instruct](http://hf.co/VMware/open-llama-7b-v2-open-instruct) | **7B** | **Human** (nearly: ecqa is an exception) | SFT | 8.64% | 23.21% | 29.84% |
| [StableLM Zephyr 3B](https://hf.co/stabilityai/stablelm-zephyr-3b) | 3B | GPT | DPO | possibly contaminated (45.72%) | **33.31%** | 0.91% |
| [**Memphis-CoT 3B**](https://hf.co/euclaise/memphis-cot-3b) | 3B | **Human** | Self-teaching | **13.8%** | *26.24%* | **38.24%** |

\*5-shot, as performed automatically by LM Evaluation Harness `bbh_cot_fewshot`, even with `num_fewshot=0`

> Memphis outperforms other primarily-human-data models that are over twice its size, along with SFT models of its size, and trades with the Zephyr DPO model. That said, Zephyr uses synthetic data, and *much* more of it.