---
license: llama2
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf
inference: false
model-index:
- name: llama2_70b_aqlm_toolcall
  results: []
datasets:
- vicgalle/alpaca-gpt4
- glaiveai/glaive-function-calling-v2
- hiyouga/glaive-function-calling-v2-sharegpt
language:
- en
pipeline_tag: text-generation
---

# LLaMA-2 70B AQLM 2-bit QLoRA with function calling

This model was fine-tuned from [BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf](https://huggingface.co/BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf) using [LLaMA Factory](https://github.com/hiyouga/LLaMA-Factory).

Peak GPU memory usage during training was **24 GB**, and the model has preliminary conversational and tool-calling abilities.

Inference requires at least 20 GB of GPU memory: the 2-bit weights of a 70B model alone take roughly 70B × 2 bits ≈ 17.5 GB, before the LoRA adapter and activations.

![examples](examples.png)

## Training and evaluation data

This model was fine-tuned on 2,000 examples drawn from the Alpaca-GPT4 and Glaive-function-calling-v2 datasets.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from peft import PeftModel

# Load the tokenizer and the 2-bit AQLM base model, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained("hiyouga/Llama-2-70b-AQLM-2Bit-QLoRA-function-calling")
model = AutoModelForCausalLM.from_pretrained("BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf", torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, "hiyouga/Llama-2-70b-AQLM-2Bit-QLoRA-function-calling")
# Stream tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {
      "role": "system",
      "content": (
        "You have access to the following tools:\n"
        "> Tool Name: get_current_weather\nTool Description: Get the current weather in a given location\n"
        "Tool Args:\n"
        "  - location (string, required): The city and state, e.g. San Francisco, CA\n"
        "  - unit (string): should be one of [\"celsius\", \"fahrenheit\"]\n\n"
        "Use the following format if using a tool:\n"
        "```\n"
        "Action: tool name (one of [get_current_weather]).\n"
        "Action Input: the input to the tool, in a JSON format representing the kwargs "
        "(e.g. ```{\"input\": \"hello world\", \"num_beams\": 5}```).\n"
        "```\n"
      )
    },
    {"role": "user", "content": "What is the weather like in Boston?"}
]
# Render the chat template, move it to the GPU, and generate with streaming output.
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(inputs, streamer=streamer, max_new_tokens=256)  # cap output length
```
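
With the system prompt above, a well-formed reply contains an `Action:` line and an `Action Input:` JSON object. Below is a minimal parsing sketch, not part of the original card; `parse_tool_call` is a hypothetical helper, and the regexes assume the model follows the prompted format:

```python
import json
import re

def parse_tool_call(text: str):
    """Extract (tool_name, kwargs) from an `Action:` / `Action Input:` reply.

    Returns None when no tool call is present. The non-greedy JSON match
    is only good enough for flat kwargs like the example above.
    """
    action = re.search(r"Action:\s*([\w-]+)", text)
    action_input = re.search(r"Action Input:\s*`*\s*(\{.*?\})", text, re.DOTALL)
    if not (action and action_input):
        return None
    return action.group(1), json.loads(action_input.group(1))

# Decode only the newly generated tokens: the system prompt itself contains
# the words "Action:" and would otherwise match first.
reply = tokenizer.decode(generate_ids[0][inputs.shape[-1]:], skip_special_tokens=True)
tool_call = parse_tool_call(reply)
if tool_call:
    name, kwargs = tool_call
    print(name, kwargs)  # e.g. get_current_weather {'location': 'Boston, MA'}
```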

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding LoRA setup follows the list):
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
- mixed_precision_training: Native AMP
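
For context, a minimal sketch of how a LoRA adapter is attached to the frozen 2-bit base model with `peft` before training; the rank, alpha, and target modules below are illustrative assumptions, since the card does not state them:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# The AQLM base model stays frozen at 2 bits; only the LoRA weights train.
base = AutoModelForCausalLM.from_pretrained(
    "BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf", torch_dtype="auto", device_map="auto"
)
lora_config = LoraConfig(
    r=16,                                 # rank: an assumption, not stated in this card
    lora_alpha=32,                        # assumption
    target_modules=["q_proj", "v_proj"],  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights require grad
```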

### Training results

![loss](train_loss.png)

### Benchmark results

| MMLU Category   | Bits | Metric        | Accuracy |
| --------------- | ---- | ------------- | -------- |
| Average         | 2    | 5-shot, top-1 | 62.38    |
| STEM            | 2    | 5-shot, top-1 | 51.57    |
| Social Sciences | 2    | 5-shot, top-1 | 73.44    |
| Humanities      | 2    | 5-shot, top-1 | 57.82    |
| Other           | 2    | 5-shot, top-1 | 68.56    |
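
The card does not state which harness produced these numbers. A hedged sketch of a comparable 5-shot MMLU run using EleutherAI's lm-evaluation-harness (an assumption, not necessarily the tool used here):

```python
import lm_eval

# Assumption: lm-evaluation-harness (>= 0.4) as the evaluator; the reported
# numbers may come from a different pipeline.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf,"
        "peft=hiyouga/Llama-2-70b-AQLM-2Bit-QLoRA-function-calling,"
        "dtype=auto"
    ),
    tasks=["mmlu"],
    num_fewshot=5,
)
print(results["results"]["mmlu"])
```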

### Framework versions

- PEFT 0.9.0
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.2