qwerrwe / configs / llama_7B_alpaca.yml

Commit History

fix lora target module, require explicit flash attention, fix min logging steps, don't use adam8bit for int4, hash prepared datasets, support hf hub datasets
87e073d

winglian committed
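The commit message references several settings in this file; the sketch below shows roughly how they could look. This is a hypothetical excerpt using axolotl-style key names (lora_target_modules, flash_attention, optimizer, logging_steps, datasets), which are assumptions and may not match this early revision exactly.

```yaml
# Hypothetical excerpt of configs/llama_7B_alpaca.yml; key names are
# assumptions modeled on later axolotl-style configs, not the actual file.
flash_attention: true        # must now be enabled explicitly
optimizer: adamw_torch       # per the commit, adam8bit is avoided with int4
logging_steps: 1             # enforce a sane minimum logging interval
datasets:
  - path: tatsu-lab/alpaca   # datasets can now be pulled from the HF hub
    type: alpaca
lora_target_modules:         # corrected LoRA target module names for llama
  - q_proj
  - v_proj
```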

4bit quantized support (wip)
77fca25

winglian committed

deepspeed doesn't work with flash-attn, and the gpu savings with flash-attn are better than the deepspeed headaches
d1aed4c

winglian committed
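In config terms, the tradeoff described in that commit amounts to keeping flash attention on and leaving deepspeed out; a minimal sketch, again assuming axolotl-style key names:

```yaml
# flash-attn and deepspeed are treated as incompatible here; the commit
# keeps flash attention for its gpu memory savings.
flash_attention: true
# deepspeed: ds_config.json   # hypothetical key, intentionally left unset
```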

add llama 7b config and fix lora_fan_in_fan_out for llama (copy-paste bug)
d060c80

winglian committed
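For context on the copy-paste bug: in PEFT, fan_in_fan_out is meant for GPT-2-style Conv1D layers that store their weights transposed; llama's attention projections are plain nn.Linear, so the flag has to be false. A one-line sketch, assuming the key name used in later axolotl-style configs:

```yaml
# fan_in_fan_out exists for GPT-2-style Conv1D layers (weights stored
# transposed); llama uses nn.Linear, so this must be false. A config
# copied from a GPT model would have carried true here (assumption).
lora_fan_in_fan_out: false
```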