
Commit History

Phi2 rewrite (#1058) · 732851f · winglian

streaming multipack for pretraining dataset (#959) · 553c80f · jinwonkim93 and winglian

fix: lint (#1037) · 8ba27f3 · Nanobit

added tiny llama examples for lora and qlora (#1027) · c75f916 · Tim Dolan

Set eval_sample_packing to false in mistral config.yaml (#1003) · 384b817 · Kevin Sydney

Add an example config for finetuning a 34B model on a 24GB GPU (#1000) · 6ef46f8 · Evan Griffiths

set output_router_logits for mixtral config (#995) · 628b754 · winglian

change val size (#992) · 93ebec1 · mhenrichsen

Fix Deepspeed loading (#950) · 5ea3aa3 · winglian

new evals_per_epoch and saves_per_epoch to make things cleaner (#944) · 5f79b82 · winglian

Mixtral official (#942) · 7fabc4d · winglian

update to latest transformers for mixtral support (#929) · 35f9b0f · winglian

Mixtral multipack (#928) · 68b227a · winglian

support for mamba (#915) · 40a6362 · winglian

Feat(wandb): Refactor to be more flexible (#767) · a1da39c · Nanobit

feature: loss watchdog for terminating training runs that are failing (#899) · 58ec8b1 · user735 and Karl-Johan Alm

fix: remove FA for qwen examples (#900) · a48dbf6 · Nanobit

Feat: Add Qwen (#894) · 1115c50 · Nanobit

Phi update 202311 (#876) · 9bf854e · winglian

various bugfixes (#856) · 1470650 · winglian

don't compile deepspeed or bitsandbytes from source (#837) · f544ab2 · winglian

fix eval_steps to be a sane default (#797) · 8b79ff0 · winglian

disable eval table w/ sample packing in examples (#778) · 9b43e7e · winglian

simplify by removing duplicate base_model_config (#772) · 2d8def6 · winglian

Fix: lowercase `True` values in config (#713) · ace70b3 · atgctg

Get qlora mistral-7b fine tuning working on a single 4090 (#708) · 295b266 · lukemarsden

fix unneeded space (#699) · f91db19 · mhenrichsen

lint · 83a950b · mhenrichsen

new lr, sample pack · 4c8ddf2 · mhenrichsen

Fix: Higher vram usage for mistral and sample_packing (#691) · 669f1d0 · Nanobit

Adding qlora config for Mistral (#675) · d4a88e4 · Abhishek Mishra

prepared dataset caching, other misc fixes (#665) · e50a64e · winglian

Update mistral/README.md (#647) · b88f515 · Adarsh Shirawalmath

Feat: Add example for Mistral (#644) · eb41f76 · Nanobit

eval_table isn't quite stable enough to be in default llama configs (#637) · d887ad8 · winglian

Feat: Add support for upstream FA2 (#626) · 19a600a · Nanobit

default model changed · 4fecbfe · mhenrichsen

support to disable exllama for gptq (#604) · faecff9 · winglian

more sane defaults for openllama 3b used for quickstarts (#602) · 674c576 · winglian

btlm and falcon monkey patches for flash attn (#566) · 6b9b229 · winglian

make phi training work with Loras (#588) · 62eaee7 · winglian

Support sample packing for phi arch (#586) · 12a2dbb · winglian

Fix Codellama examples (#582) · 1aa4007 · Doan Minh Phuong

Phi examples (#569) · 2284209 · winglian

Add training callback to send predictions to WandB table (#521) · 5b67ea9 · Glavin001

recommend padding when using sample packing (#531) · 3437149 · winglian

Add support for GPTQ using native transformers/peft (#468) · 3355706 · winglian

pad_to_worst_case_seq_len boolean, for testing memory limits (#498) · 8e197f6 · Birch-san and tmm1

Feat(cfg): Add code-llama configs for all sizes (#479) · 3513071 · mhenrichsen
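
Several of the commits above edit example YAML configs rather than code: #1003 sets `eval_sample_packing: false`, #944 introduces `evals_per_epoch` and `saves_per_epoch`, and #995 sets `output_router_logits` for Mixtral. As a rough sketch only, assuming the usual axolotl-style config layout, those options would sit in an example config like this (key names come from the commit messages; the base model and all values are illustrative assumptions, not the repo's actual examples):

```yaml
# Hypothetical fragment, not a real file from this repo.
base_model: mistralai/Mixtral-8x7B-v0.1

sample_packing: true
eval_sample_packing: false    # disabled for eval, as in #1003

# per-epoch scheduling instead of raw step counts, as in #944
evals_per_epoch: 4
saves_per_epoch: 1

# model config override for Mixtral's router auxiliary loss, as in #995
model_config:
  output_router_logits: true
```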