qwerrwe / scripts / finetune.py

Commit History

Feat: Swap to GenerationConfig
988aeb9

Nanobit committed on

Fix security issue or ignore false positives
a1f9850

Nanobit committed on

Apply isort then black
37293dc

Nanobit committed on

Lint finetune.py
82971e1

Nanobit committed on

Lint and format
392dfd9

Nanobit committed on

bnb fixes
21f17cc

winglian committed on

refactor: fix previous refactors
56f9ca5

Nanobit committed on

Refactor to use DictDefault instead
8bd7a49

Nanobit committed on
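The DictDefault refactor above swaps config access onto a dict that tolerates missing keys. A minimal pure-Python sketch of the idea; the class name comes from the commit message, but the behaviour shown (attribute access returning None for absent keys, addict-style) is an assumption, not the repo's exact implementation:

```python
class DictDefault(dict):
    """Sketch of a DictDefault-style config mapping (illustrative).

    Attribute access returns None for missing keys instead of raising
    AttributeError, so config code can write `cfg.lora_r` without
    guarding every lookup. Nested dicts are wrapped on access so
    `cfg.model.base_model` also works. Unlike addict, this sketch does
    not chain through missing keys (`cfg.missing.x` would fail).
    """

    def __getattr__(self, name):
        value = self.get(name)  # None when the key is absent
        if isinstance(value, dict) and not isinstance(value, DictDefault):
            value = DictDefault(value)  # wrap nested dicts for chained access
        return value


cfg = DictDefault({"model": {"base_model": "llama"}})
```

With this shape, `cfg.model.base_model` resolves to `"llama"` while `cfg.flash_attention` quietly yields None.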

Fix load error
93acb64

Nanobit committed on

Convert attrdict to addict
bdfe7c9

Nanobit committed on

move list not in list logic to fn
cc67862

winglian committed on
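The "list not in list" commit above suggests extracting a repeated membership check into one helper. A hypothetical reconstruction of that helper; the function name and signature are assumptions:

```python
def any_not_in(needles, haystack):
    """Return True if any element of `needles` is missing from `haystack`.

    Illustrative version of the "list not in list" logic the commit
    message describes, e.g. checking whether a config lists dataset
    types or target modules that the code does not support.
    """
    haystack = set(haystack)  # O(1) membership instead of list scans
    return any(item not in haystack for item in needles)
```

Usage might look like `any_not_in(cfg_target_modules, SUPPORTED_MODULES)` to trigger a validation error.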

load the tokenizer separately from the model
32e6fe9

winglian committed on

add logging and make sure model unloads to float16
a5bf838

winglian committed on

remove unneeded code, add validation
1f5d83e

winglian committed on

Update scripts/finetune.py
3457810
unverified

winglian Nanobit committed on

Update scripts/finetune.py for logging
ae1719d
unverified

winglian Nanobit committed on

optionally be able to specify alpaca or chat style prompts
1d5ab84

winglian committed on
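The commit above adds a choice between alpaca- and chat-style prompts. A sketch of what that option might look like; the template strings and function name are illustrative assumptions, though the alpaca preamble follows the widely used Stanford Alpaca format:

```python
# Standard Alpaca instruction preamble (no-input variant).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

# A simple user/assistant turn format, standing in for "chat style".
CHAT_TEMPLATE = "USER: {instruction}\nASSISTANT: "


def build_prompt(instruction: str, style: str = "alpaca") -> str:
    """Render an instruction with the requested prompt style."""
    templates = {"alpaca": ALPACA_TEMPLATE, "chat": CHAT_TEMPLATE}
    if style not in templates:
        raise ValueError(f"unknown prompt style: {style}")
    return templates[style].format(instruction=instruction)
```

A config flag (e.g. a `prompt_style` key) would then pick the template once, so the rest of the tokenization path stays unchanged.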

add alpaca multiple choice instruct dataset support
b46bc02

winglian committed on

reorder options so debug can happen in the same prepare step
f98e173

winglian committed on

more fixes
bdbca8f

winglian committed on

Fix typo
52aada7
unverified

Nanobit committed on

black formatting
2bc1a5b

winglian committed on

Update finetune.py
915c56c
unverified

winglian committed on

Don't save full model for lora
cd23959
unverified

Nanobit committed on

Save adapter for lora
71a1f7f
unverified

Nanobit committed on
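The two LoRA commits above ("Don't save full model for lora" / "Save adapter for lora") switch checkpointing to write only the small adapter weights rather than the full base model; in PEFT that is what `model.save_pretrained(...)` does on a LoRA-wrapped model. A conceptual pure-Python sketch of what "adapter only" means, filtering a state dict by PEFT's `lora_` parameter-naming convention (the helper name is an assumption):

```python
def extract_adapter_state(state_dict):
    """Keep only LoRA adapter tensors from a full model state dict.

    PEFT names injected adapter parameters with a `lora_` component
    (e.g. `...q_proj.lora_A.weight`), so filtering on that substring
    separates the few-MB adapter from the multi-GB frozen base model.
    Illustrative only; use PEFT's own save_pretrained in practice.
    """
    return {k: v for k, v in state_dict.items() if "lora_" in k}
```

One would then save just that filtered dict (e.g. with `torch.save`) as the adapter checkpoint.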

fix whitespace and instruction on inference
47ad389

winglian committed on

refactor inference, warn if model is frozen
247825b

winglian committed on

support for multi line inference input, log sweep over learning rates
9105935

winglian committed on

support llama-adapter zero init attention
2255bb7

winglian committed on
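The zero-init attention supported above comes from LLaMA-Adapter: the adapter's contribution to attention is scaled by a learnable gate initialised to zero, so at step 0 the model behaves exactly like the frozen base and the adapter fades in during training. A scalar sketch of that gating (real code applies it per attention head on tensors):

```python
def gated_attention(base_out, adapter_out, gate):
    """LLaMA-Adapter-style zero-init gating, reduced to scalars.

    `gate` is a learnable parameter initialised to 0.0, so training
    starts from the unmodified base model's attention output and the
    adapter's influence grows only as the gate is learned.
    """
    return base_out + gate * adapter_out
```

With `gate == 0.0` the output is identical to the base model's, which is the whole point of the zero initialisation.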

fix sharegpt tokenization, refactor tokenization debugging
5159d00

winglian committed on

various bugfixes
94f5e41

winglian committed on

improve inference
d653859

winglian committed on

quickstart instructions for starting from runpod (#5)
0a472e1
unverified

winglian committed on

WIP large refactor to make finetune script a little more manageable (#3)
6045345
unverified

winglian committed on

add support for alpaca reflect training (#2)
81de0ef
unverified

winglian committed on

Tokenization open assistant (#1)
87d7825
unverified

winglian committed on

fix llama check
eb80890

winglian committed on

fix conditional check to prevent always using 4bit
8f36f3c

winglian committed on

improve llama check and fix safetensors file check
69164da

winglian committed on

support for alpaca-like instruction datasets without inputs
e107643

winglian committed on

casts the prepared data to int16 (doesn't help with training memory)
2db9436

winglian committed on

bugfixes
120e7df

winglian committed on

fix lora target module, require explicit flash attention, fix min logging steps, don't use adam8bit for int4, hash prepared datasets, support hf hub datasets
87e073d

winglian committed on
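Among the fixes above, "hash prepared datasets" points at caching tokenized data keyed by its configuration, so an unchanged config reuses the expensive prepared dataset. A sketch of one way to derive such a cache key; the function name and config shape are assumptions:

```python
import hashlib
import json


def dataset_fingerprint(config: dict) -> str:
    """Derive a stable cache key for a prepared dataset from its config.

    Serialising with sort_keys=True makes the JSON deterministic for
    dicts that compare equal, so the SHA-256 digest changes exactly
    when the dataset config (paths, prompt style, sequence length,
    tokenizer, ...) changes.
    """
    blob = json.dumps(config, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()
```

The prepare step could then write tokenized data under a directory named by this digest and skip re-tokenizing on a hit.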

4bit quantized support (wip)
77fca25

winglian committed on

cleanup, prep for 4bit quant support
12de7b7

winglian committed on

deepspeed doesn't work with flash-attn, and the GPU savings with flash-attn outweigh the deepspeed headaches
d1aed4c

winglian committed on

fix logging
a459383

winglian committed on

prepare datasets only flag
2393801

winglian committed on

configure log level, add llama 7b config
d33a975

winglian committed on

more logging, wandb fixes
05fffb5

winglian committed on