qwerrwe / tests

Commit History

Multipack simplify for Mixtral (#1142)
6910e6a
unverified

winglian committed

Add shifted sparse attention (#973) [skip-ci]
1d70f24
unverified

jrc, joecummings, and winglian committed

Add `layers_to_transform` for `lora_config` (#1118)
8487b97
unverified

xzuyn committed

Enable or disable bf16 support based on availability (#1116)
0865613
unverified

Simon Hällqvist committed

keep gate in fp32 for 16 bit loras (#1105)
da97285
unverified

winglian committed

add gptneox embeddings, fix phi2 inputs, also fix the casting (#1083)
78c5b19
unverified

winglian committed

update sharegpt conversations when chatml chat template is set (#1075) [skip ci]
0ce1a65
unverified

winglian committed

fix: `train_on_inputs: true` ignored for sharegpt (#1045) [skip ci]
043c386
unverified

Nanobit and winglian committed

be more robust about checking embedding modules for lora finetunes (#1074) [skip ci]
0f10080
unverified

winglian committed

attempt to also run e2e tests that needs gpus (#1070)
788649f
unverified

winglian committed

fix double eos token for chatml (#1054) [skip ci]
651b7a3
unverified

winglian committed

Phi2 rewrite (#1058)
732851f
unverified

winglian committed

streaming multipack for pretraining dataset (#959)
553c80f
unverified

jinwonkim93 and winglian committed

RL/DPO (#935)
f243c21

winglian committed

bump transformers and update attention class map name (#1023)
bcc78d8
unverified

winglian committed

Feat: Warns to add to modules_to_save when adding tokens or switching special_tokens (#787)
1ffa386
unverified

Nanobit committed

fix mistral prompt assembly (#982)
7bbaac9
unverified

hamel committed

Fix prompt assembly for llama (#952)
5ada140
unverified

hamel and tokestermw committed

Respect sequence_len in config for `type: llama2_chat` (#926)
f1de29d
unverified

hamel committed

support for mamba (#915)
40a6362
unverified

winglian committed

Feat(wandb): Refactor to be more flexible (#767)
a1da39c
unverified

Nanobit committed

Feat: Add warmup_ratio (#893)
fb12895
unverified

Nanobit committed

Phi update 202311 (#876)
9bf854e
unverified

winglian committed

add e2e tests for checking functionality of resume from checkpoint (#865)
b3a61e8
unverified

winglian committed

use temp_dir kwarg instead
6dc68a6

winglian committed

missing dunder-init
7de6a56

winglian committed

chore: lint
c74f045

winglian committed

make sure to cleanup tmp output_dir for e2e tests
0402d19

winglian committed

simplify by removing duplicate base_model_config (#772)
2d8def6
unverified

winglian committed

Fix: Warn when fullfinetune without adapter (#770)
44c9d01
unverified

Nanobit committed

convert exponential notation lr to floats (#771)
ca84cca
unverified

winglian committed

Fix: eval table conflict with eval_sample_packing (#769)
9923b72
unverified

Nanobit committed

remove lora fused packing test (#758)
21cf09b
unverified

winglian committed

misc sharegpt fixes (#723)
f30afe4
unverified

winglian committed

Feat: Allow usage of native Mistral FA when no sample_packing (#669)
697c50d
unverified

Nanobit committed

add mistral e2e tests (#649)
5b0bc48
unverified

winglian committed

Fix(cfg): Add validation for save_strategy and eval_strategy (#633)
383f88d
unverified

Nanobit committed

use fastchat conversations template (#578)
e7d3e2d
unverified

winglian committed

Fix: Fail bf16 check when running on cpu during merge (#631)
cfbce02
unverified

Nanobit committed

better handling and logging of empty sharegpt turns (#603)
a363604
unverified

winglian committed

misc fixes to add gptq tests (#621)
03e5907
unverified

winglian committed

Support Sample packing for phi arch (#586)
12a2dbb
unverified

winglian committed

E2e device cuda (#575)
2414673
unverified

winglian committed

e2e testing (#574)
9218ebe
unverified

winglian committed

Fix pretraining with iterable/streaming Dataset (#556)
2f586d1
unverified

Jan Philipp Harries committed

workaround for md5 variations (#533)
0b4cf5b
unverified

winglian committed

recommend padding when using sample packing (#531)
3437149
unverified

winglian committed

fix test fixture b/c hf trainer tokenization changed (#464)
d5dcf9c
unverified

winglian committed

fix fixture for new tokenizer handling in transformers (#428)
8cace80
unverified

winglian committed