qwerrwe / configs / llama_7B_alpaca.yml

Commit History

swap batch size for gradient accumulation steps to decouple from num gpu
c2a0792

winglian committed on

Update wandb_log_model on llama_7B_alpaca.yml
d77d736

Viktorius Suwandi committed on

fix lora target module, require explicit flash attention, fix min logging steps, don't use adam8bit for int4, hash prepared datasets, support hf hub datasets
87e073d

winglian committed on

4bit quantized support (wip)
77fca25

winglian committed on

deepspeed doesn't work with flash-attn, and the gpu savings w flash attn are better than the deepspeed headaches
d1aed4c

winglian committed on

add llama 7b config and fix lora_fan_in_fan_out for llama (copy pasta bug)
d060c80

winglian committed on