Nanobit committed
Commit 2b222de
1 Parent(s): 6abfd87

Update peft and gptq instruction

Files changed (1)
  1. README.md +14 -3
README.md CHANGED
@@ -53,6 +53,7 @@ accelerate launch scripts/finetune.py examples/lora-openllama-3b/config.yml \
 docker run --gpus '"all"' --rm -it winglian/axolotl:main-py3.9-cu118-2.0.0
 ```
 - `winglian/axolotl-runpod:main-py3.9-cu118-2.0.0`: for runpod
+- `winglian/axolotl-runpod:main-py3.9-cu118-2.0.0-gptq`: for gptq
 - `winglian/axolotl:dev`: dev branch (not usually up to date)
 
 Or run on the current files for development:
@@ -67,9 +68,19 @@ accelerate launch scripts/finetune.py examples/lora-openllama-3b/config.yml \
 2. Install pytorch stable https://pytorch.org/get-started/locally/
 
 3. Install python dependencies with ONE of the following:
-- `pip3 install -e .` (recommended, supports QLoRA, no gptq/int4 support)
-- `pip3 install -e .[gptq]` (next best if you don't need QLoRA, but want to use gptq)
-- `pip3 install -e .[gptq_triton]`
+- Recommended, supports QLoRA, NO gptq/int4 support
+  ```bash
+  pip3 install -U git+https://github.com/huggingface/peft.git
+  pip3 install -e .
+  ```
+- gptq/int4 support, NO QLoRA
+  ```bash
+  pip3 install -e .[gptq]
+  ```
+- same as above but not recommended
+  ```bash
+  pip3 install -e .[gptq_triton]
+  ```
 
 - LambdaLabs
 <details>
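The three install options in this diff are mutually exclusive (QLoRA vs. gptq/int4), so it can be handy to check which path an environment actually ended up on. A minimal sketch of such a check — note the module name `auto_gptq` is an assumption about what the gptq extra provides (the diff does not name its backend package), while `peft` comes from the git install shown above:

```python
import importlib.util


def available(module_name: str) -> bool:
    """Return True if module_name is importable in this environment."""
    return importlib.util.find_spec(module_name) is not None


def describe_install() -> str:
    """Best-effort guess at which README install option was used.

    Module names are assumptions for illustration, not taken from the diff.
    """
    if available("auto_gptq"):  # assumed backend pulled in by .[gptq]
        return "gptq/int4 path"
    if available("peft"):  # installed from git in the recommended option
        return "QLoRA path"
    return "unknown or incomplete install"


print(describe_install())
```

Running this after installation reports one of the three outcomes without importing any heavyweight dependencies, since `find_spec` only locates the module rather than executing it.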