winglian committed on
Commit 0de1457
1 Parent(s): 3cc67d2

try #2: pin hf transformers and accelerate to latest release, don't reinstall pytorch (#867)

* isolate torch from the requirements.txt

* fix typo for removed line ending

* pin transformers and accelerate to latest releases

* try w auto-gptq==0.5.1

* update README to remove manual peft install

* pin xformers to 0.0.22

* bump flash-attn to 2.3.3

* pin flash attn to exact version
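Several of the bullets above replace floating lower bounds (e.g. `flash-attn>=2.3.0`) with exact pins (`flash-attn==2.3.3`), trading freshness for reproducibility. The distinction can be checked mechanically; below is a minimal sketch (a hypothetical helper, not part of this repo) that classifies requirement lines by how tightly they constrain the resolver:

```python
import re

def classify_pin(line):
    """Classify a requirements.txt line as 'exact', 'range', or 'unpinned'."""
    line = line.strip()
    if "==" in line:
        return "exact"      # reproducible: exactly one candidate version
    if re.search(r"[<>~!]=?", line):
        return "range"      # resolver may pick a newer, untested release
    return "unpinned"       # always resolves to the latest release

# Examples mirroring this commit's requirements.txt:
print(classify_pin("flash-attn==2.3.3"))     # exact
print(classify_pin("bitsandbytes>=0.41.1"))  # range
print(classify_pin("deepspeed"))             # unpinned
```

Exact pins are what make a CI run today reproduce a CI run from last month; the cost is that security and bug fixes require an explicit bump, which is what this commit performs.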

Files changed (3)
  1. .github/workflows/tests.yml +1 -0
  2. README.md +0 -1
  3. requirements.txt +5 -7
.github/workflows/tests.yml CHANGED
@@ -71,6 +71,7 @@ jobs:
 
       - name: Install dependencies
         run: |
+          pip3 install --extra-index-url https://download.pytorch.org/whl/cu118 -U torch==2.0.1
           pip3 uninstall -y transformers accelerate
           pip3 install -U -e .[flash-attn]
           pip3 install -r requirements-tests.txt
README.md CHANGED
@@ -91,7 +91,6 @@ cd axolotl
 
 pip3 install packaging
 pip3 install -e '.[flash-attn,deepspeed]'
-pip3 install -U git+https://github.com/huggingface/peft.git
 
 # finetune lora
 accelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml
requirements.txt CHANGED
@@ -1,22 +1,20 @@
---extra-index-url https://download.pytorch.org/whl/cu118
 --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
-torch==2.0.1
-auto-gptq==0.4.2
+auto-gptq==0.5.1
 packaging
 peft==0.6.0
-transformers @ git+https://github.com/huggingface/transformers.git@acc394c4f5e1283c19783581790b3dc3105a3697
+transformers==4.35.1
 bitsandbytes>=0.41.1
-accelerate @ git+https://github.com/huggingface/accelerate@80da9cfb09bb3cc9f1b385cb55d6b90d025a5fd9
+accelerate==0.24.1
 deepspeed
 addict
 fire
 PyYAML>=6.0
 datasets>=2.14.0
-flash-attn>=2.3.0
+flash-attn==2.3.3
 sentencepiece
 wandb
 einops
-xformers>=0.0.22
+xformers==0.0.22
 optimum==1.13.2
 hf_transfer
 colorama
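The mix of `==` and `>=` specifiers left in requirements.txt behaves differently at resolve time. As a toy illustration (real tools use `packaging.specifiers`, which also handles pre-releases, epochs, and compound specifiers), here is a minimal version check under the assumption of plain dotted integer versions:

```python
def parse_version(v):
    """Parse a dotted version string like '2.3.3' into a tuple of ints."""
    return tuple(int(p) for p in v.split("."))

def satisfies(installed, spec):
    """Check an installed version against a single '==' or '>=' specifier."""
    if spec.startswith("=="):
        return parse_version(installed) == parse_version(spec[2:])
    if spec.startswith(">="):
        return parse_version(installed) >= parse_version(spec[2:])
    raise ValueError(f"unsupported specifier: {spec}")

print(satisfies("2.3.3", "==2.3.3"))    # True: exact pin matched
print(satisfies("0.0.23", "==0.0.22"))  # False: xformers is pinned exactly
print(satisfies("0.41.2", ">=0.41.1"))  # True: lower bound satisfied
```

This is why the commit works: with `torch` removed from requirements.txt entirely, pip has no torch specifier to re-resolve, so the preinstalled CUDA 11.8 build is left untouched.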