Commit History

Merge pull request #13 from winglian/dev
cb9a887

winglian committed

Merge pull request #12 from NanoCode012/feat/eval_config
a15d823

winglian committed

Add eval_batch_size for evaluation
0e74b64

Nanobit committed

fix log sweep lr
a10a826

winglian committed

support for multi line inference input, log sweep over learning rates
9105935

winglian committed

fix adam bnb optimizer grouped parameters, fix peft model 8bit conversion logic, black formatting
7748f3d

winglian committed

install peft from main branch
fe9c29d

winglian committed

support llama-adapter zero init attention
2255bb7

winglian committed

use prebuilt wheels for flash-attn and deepspeed
55baef0

winglian committed

fsdp config dict fix, todo list, add torchdistx support
ad2b48c

winglian committed

8bit and deepspeed changes
9190ada

winglian committed

update ds_config
4dbef09

winglian committed

don't load models in 8bit unless they are using an adapter, also fix tokenizer load in exceptional case
6dfdd2d

winglian committed

fix fsdp training args
29936bb

winglian committed

fix for zero value warmup steps
7882181

winglian committed

fix sharegpt tokenization, refactor tokenization debugging
5159d00

winglian committed

wire up gradient checkpointing for 4bit
c0f50d9

winglian committed

Merge pull request #9 from winglian/dev
4e705ed

winglian committed

fix dataset handling, support galactica
4a17a4c

winglian committed

tweaks to data loading, 8 bit adam, accelerate and deepspeed
097d367

winglian committed

shuffle and split dataset after save/load
4f2584f

winglian committed

fix sharegpt handling from hf, don't worry about loading llama if using earlier transformers release
8d43785

winglian committed

stablelm support
8e2a560

winglian committed

various bugfixes
94f5e41

winglian committed

ignore config, add python 3.9 (#8)
2624bc2

ehartford committed

fix bug when model_type not explicitly passed
bb991fd

winglian committed

improve inference
d653859

winglian committed

fix runpod script
5749eb0

winglian committed

cleanup empty lines, tweak env for runpod setup
7753cde

winglian committed

handle empty lines
f50de1b

winglian committed

quickstart instructions for starting from runpod (#5)
0a472e1

winglian committed

update readme w compat matrix
5cb7ea4

winglian committed

attempt xformers hijack attention
8746b70

winglian committed

WIP large refactor to make finetune script a little more manageable (#3)
6045345

winglian committed

add support for alpaca reflect training (#2)
81de0ef

winglian committed

update readme
34af1b4

winglian committed

Tokenization open assistant (#1)
87d7825

winglian committed

fix llama check
eb80890

winglian committed

update readme
3f3f561

winglian committed

fix conditional check to prevent always using 4bit
8f36f3c

winglian committed

improve llama check and fix safetensors file check
69164da

winglian committed

support for alpaca-like instruction datasets without inputs
e107643

winglian committed

casts the prepared data to int16 (doesn't help with training memory)
2db9436

winglian committed

bugfixes
120e7df

winglian committed

fix lora target module, require explicit flash attention, fix min logging steps, don't use adam8bit for int4, hash prepared datasets, support hf hub datasets
87e073d

winglian committed

fix install to work with latest alpaca lora 4bit
4131183

winglian committed

4bit quantized support (wip)
77fca25

winglian committed

cleanup, prep for 4bit quant support
12de7b7

winglian committed

deepspeed doesn't work with flash-attn, and the gpu savings with flash-attn are better than the deepspeed headaches
d1aed4c

winglian committed

fix logging
a459383

winglian committed