---
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- dpo
- rlhf
license: apache-2.0
language:
- en
datasets:
- mlabonne/chatml_dpo_pairs
---

<center><img src="https://i.imgur.com/qIhaFNM.png"></center>

# NeuralHermes 2.5 - Mistral 7B - GGUF

NeuralHermes is an [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) model that has been further fine-tuned with Direct Preference Optimization (DPO) using the [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) dataset.

It is directly inspired by the RLHF process the authors of [neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) described to improve performance. I used the same dataset and reformatted it to apply the ChatML template. I haven't run a comprehensive evaluation yet, but the model performs well in my initial tests, with nothing apparently broken. :)
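
The ChatML template wraps every turn in `<|im_start|>`/`<|im_end|>` tokens. A minimal sketch of what that reformatting produces (the `to_chatml` function is illustrative; in practice `tokenizer.apply_chat_template` handles this):

```python
def to_chatml(messages, add_generation_prompt=True):
    """Format a list of {role, content} dicts with the ChatML template."""
    prompt = "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )
    if add_generation_prompt:
        # Open an assistant turn so the model knows to answer next
        prompt += "<|im_start|>assistant\n"
    return prompt

print(to_chatml([
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"},
]))
```

This is the format applied both to the DPO pairs during training and to prompts at inference time.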

The code used to train this model is available on [Google Colab](https://colab.research.google.com/drive/15iFBr1xWgztXvhrj5I9fBv20c7CFOPBE?usp=sharing) and [GitHub](https://github.com/mlabonne/llm-course/tree/main). Training took about an hour on an A100 GPU.

Link to the original model: [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B).

Article and code to quantize your own LLMs: [Quantize Llama models with GGUF and llama.cpp](https://mlabonne.github.io/blog/posts/Quantize_Llama_2_models_using_ggml.html).

## Usage

You can run this model using [LM Studio](https://lmstudio.ai/) or any other GGUF-compatible frontend.

You can also run the original (non-quantized) model with the following code:
39
+
40
+ ```python
41
+ import transformers
42
+ from transformers import AutoTokenizer
43
+
44
+ # Format prompt
45
+ message = [
46
+ {"role": "system", "content": "You are a helpful assistant chatbot."},
47
+ {"role": "user", "content": "What is a Large Language Model?"}
48
+ ]
49
+ tokenizer = AutoTokenizer.from_pretrained(new_model)
50
+ prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)
51
+
52
+ # Create pipeline
53
+ pipeline = transformers.pipeline(
54
+ "text-generation",
55
+ model=new_model,
56
+ tokenizer=tokenizer
57
+ )
58
+
59
+ # Generate text
60
+ sequences = pipeline(
61
+ prompt,
62
+ do_sample=True,
63
+ temperature=0.7,
64
+ top_p=0.9,
65
+ num_return_sequences=1,
66
+ max_length=200,
67
+ )
68
+ print(sequences[0]['generated_text'])
69
+ ```
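
To use the quantized files from this repo directly, a llama.cpp invocation along these lines should work (the `.gguf` filename is an assumption; substitute the quant you actually downloaded):

```shell
# -e processes the \n escapes in the ChatML prompt
./main -m neuralhermes-2.5-mistral-7b.Q4_K_M.gguf \
  -e -p "<|im_start|>user\nWhat is a Large Language Model?<|im_end|>\n<|im_start|>assistant\n" \
  -n 200 --temp 0.7 --top-p 0.9
```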

## Training hyperparameters

**LoRA**:
* r=16
* lora_alpha=16
* lora_dropout=0.05
* bias="none"
* task_type="CAUSAL_LM"
* target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
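
With r=16, each adapted weight matrix of shape (d_out, d_in) gains two small factors A (r × d_in) and B (d_out × r), i.e. r·(d_in + d_out) extra trainable parameters. A rough back-of-the-envelope count for the modules above (the Mistral-7B shapes are assumptions taken from its published config: hidden size 4096, intermediate size 14336, 8 KV heads of dim 128, 32 layers):

```python
r = 16
hidden, inter, kv_dim, layers = 4096, 14336, 1024, 32

# (d_in, d_out) for each adapted projection in one decoder layer
shapes = {
    "q_proj": (hidden, hidden),
    "k_proj": (hidden, kv_dim),   # grouped-query attention: smaller K/V
    "v_proj": (hidden, kv_dim),
    "o_proj": (hidden, hidden),
    "gate_proj": (hidden, inter),
    "up_proj": (hidden, inter),
    "down_proj": (inter, hidden),
}

def lora_params(d_in, d_out, rank):
    # A: rank x d_in, B: d_out x rank
    return rank * (d_in + d_out)

per_layer = sum(lora_params(d_in, d_out, r) for d_in, d_out in shapes.values())
total = per_layer * layers
print(f"{total:,}")  # → 41,943,040 (~42M trainable parameters)
```

That is well under 1% of the 7B base parameters, which is why the DPO run fits on a single A100.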

**Training arguments**:
* per_device_train_batch_size=4
* gradient_accumulation_steps=4
* gradient_checkpointing=True
* learning_rate=5e-5
* lr_scheduler_type="cosine"
* max_steps=200
* optim="paged_adamw_32bit"
* warmup_steps=100

**DPOTrainer**:
* beta=0.1
* max_prompt_length=1024
* max_length=1536
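
For reference, `beta` scales the log-probability margin between the chosen and rejected answers, measured relative to the frozen reference model. A minimal sketch of the standard DPO objective for a single preference pair (plain Python, not the actual `DPOTrainer` implementation):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO loss for one (chosen, rejected) pair.

    Each argument is the summed log-probability of a response under
    the trained policy or the frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # answer over the rejected one, beyond what the reference already does.
    margin = ((policy_chosen_logp - ref_chosen_logp)
              - (policy_rejected_logp - ref_rejected_logp))
    return -math.log(1 / (1 + math.exp(-beta * margin)))

# At initialization the policy equals the reference, so the margin is 0
# and the loss starts at log(2):
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 3))  # → 0.693
```

A higher `beta` makes the loss more sensitive to the margin, pulling the policy away from the reference faster; 0.1 is the conservative default used here.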