Commit History

remove unnecessary local variable
0c96727

tmm1 committed on

simplify `load_tokenizer`
efb3b2c

tmm1 committed on

improve GPU logging to break out pytorch cache and system mem
7b55fe6

tmm1 committed on
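
This change reports GPU memory in separate buckets rather than one number. A minimal sketch of such a breakdown, assuming `torch` and `pynvml` are installed; the helper name is hypothetical:

```python
import torch
import pynvml

def gpu_memory_usage(device: int = 0) -> str:
    """Break GPU memory into PyTorch-allocated, PyTorch cache, and other system usage."""
    allocated = torch.cuda.memory_allocated(device)           # tensors PyTorch holds
    cached = torch.cuda.memory_reserved(device) - allocated   # freed but still cached
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(device)
    total_used = pynvml.nvmlDeviceGetMemoryInfo(handle).used  # everything on the GPU
    system = total_used - (allocated + cached)                # CUDA context, other processes
    gb = 1024**3
    return (f"GPU {device}: {allocated / gb:.2f}GB allocated, "
            f"{cached / gb:.2f}GB pytorch cache, {system / gb:.2f}GB system")
```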

quiet noise from llama tokenizer by setting pad token earlier
e029ab3

tmm1 committed on
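
The LLaMA tokenizer ships without a pad token and warns every time padding is requested before one is set; assigning it once, up front, keeps the logs quiet. A minimal sketch (the model id is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")  # illustrative model id
# Padding without a pad token triggers a warning on every call.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
```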

extract module for working with cfg
8cec513

tmm1 committed on

fix DefaultDict.__or__
a13e45d

tmm1 committed on
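
Under PEP 584, a dict subclass that inherits `__or__` can get back a plain `dict` from the `|` operator, silently dropping subclass behavior. A hypothetical sketch of that kind of fix, assuming the config wrapper builds on `collections.defaultdict`:

```python
from collections import defaultdict

class DefaultDict(defaultdict):
    """defaultdict whose | operator returns a DefaultDict, not a plain dict."""

    def __or__(self, other):
        if not isinstance(other, dict):
            return NotImplemented
        merged = DefaultDict(self.default_factory, self)  # keep the factory
        merged.update(other)
        return merged

cfg = DefaultDict(lambda: None, {"lora_r": 8}) | {"lora_alpha": 16}
assert cfg["missing_key"] is None  # default_factory survives the union
```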

revert previous change and build ax images w docker on gpu (#371)
918f1b0

winglian committed on

attempt to run non-base docker builds on regular cpu hosts (#369)
c3fde36

winglian committed on

Attention mask and position id fixes for packing (#285)
2bb0b78

winglian committed on
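
When several short examples are packed into one row, each packed sequence needs position ids that restart at zero and a mask that keeps sequences from attending to each other. A minimal sketch of the position-id side, assuming the mask stores a distinct sequence id per packed example (as multipack-style batching does):

```python
import torch

def position_ids_for_packed(attention_mask: torch.Tensor) -> torch.Tensor:
    """attention_mask holds a sequence id (1, 2, ...) per token, 0 for padding.
    Positions restart at 0 at every sequence boundary."""
    position_ids = torch.zeros_like(attention_mask)
    for seq_id in attention_mask.unique():
        if seq_id == 0:  # padding
            continue
        selected = attention_mask == seq_id
        counts = selected.cumsum(dim=-1) - 1
        position_ids = torch.where(selected, counts, position_ids)
    return position_ids

mask = torch.tensor([[1, 1, 1, 2, 2, 0]])  # two packed sequences plus padding
print(position_ids_for_packed(mask))       # tensor([[0, 1, 2, 0, 1, 0]])
```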

Fix(save): Save as safetensors (#363)
a276c9c

Nanobit committed on
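
Saving in the safetensors format avoids pickle-based `.bin` checkpoints. One common way to opt in with the transformers `Trainer` (a sketch, not necessarily how this commit wires it up):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./out",
    save_safetensors=True,  # write model.safetensors instead of pytorch_model.bin
)
# or directly on any PreTrainedModel:
# model.save_pretrained("./out", safe_serialization=True)
```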

Add wandb_entity to wandb options, update example configs, update README (#361)
7019509

Morgan McGuire and winglian committed on
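
`wandb_entity` selects the W&B team or user a run is logged under. A sketch of how such a config option typically maps onto the wandb environment variables (the option names come from the commit; the wiring is an assumption):

```python
import os

# axolotl-style config values (illustrative)
wandb_project = "my-project"
wandb_entity = "my-team"

if wandb_entity:
    os.environ["WANDB_ENTITY"] = wandb_entity    # picked up by wandb.init()
if wandb_project:
    os.environ["WANDB_PROJECT"] = wandb_project  # read by the HF Trainer's wandb callback
```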

Fix(model loading): Warn when model revision is passed to gptq (#364)
96bd6ae

Nanobit committed on

Fix(message): Improve error message for bad format (#365)
e37d935

Nanobit committed on

Feat: Add rope scaling (#343)
b521206

Nanobit committed on
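
RoPE scaling stretches the rotary position embeddings so a model can attend past its pretraining context length. In transformers this is exposed as a model-config field; a minimal sketch (model id illustrative):

```python
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("meta-llama/Llama-2-7b-hf")
# linear scaling with factor 2.0 roughly doubles the usable context window
config.rope_scaling = {"type": "linear", "factor": 2.0}
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", config=config)
```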

feat(merge): save tokenizer on merge (#362)
289d5c4

Nanobit committed on

Merge pull request #355 from tmm1/bitsandbytes-fixes
35c8b90

tmm1 committed on

Update README.md on pretraining_dataset (#360)
fae6ed8

Nanobit committed on

Clarify pre-tokenize before multigpu (#359)
94d03c8

Nanobit committed on

Merge pull request #356 from tmm1/load_model-args
11ddccb

tmm1 committed on

Merge pull request #354 from tmm1/gpu-util
9643121

tmm1 committed on

simplify load_model signature
7181022

tmm1 committed on

Merge pull request #350 from tmm1/group-len-false-examples
f5c11f8

tmm1 committed on

bump to latest bitsandbytes release with major bug fixes
fce40aa

tmm1 committed on

use newer pynvml package
9c31410

tmm1 committed on

log GPU memory usage
e303d64

tmm1 committed on

note pattern when using groups
b4d1d22

tmm1 committed on

update comment for group_by_length
9f99104

tmm1 committed on

set group_by_length to false in examples
36fefcf

tmm1 committed on
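
`group_by_length` is the transformers `TrainingArguments` flag that buckets samples of similar length to cut padding; with packed datasets the samples are already near-uniform, so the extra sorting buys little, which is presumably why the examples turn it off. A sketch:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./out",
    group_by_length=False,  # packed samples are already close to the same length
)
```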

ensure enable_input_require_grads is called on model before getting the peft model (#345)
176b888

winglian committed on
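
With gradient checkpointing, a frozen base model plus trainable adapters needs input embeddings that require grad, or the checkpointed segments have nothing to backpropagate through. The ordering this commit enforces, sketched (model id illustrative):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
model.gradient_checkpointing_enable()
model.enable_input_require_grads()  # must run before get_peft_model, or the frozen
                                    # embeddings pass no grad through the checkpoints
peft_model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=8))
```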

experimental llama 2 chat support (#296)
3392270

Jan Philipp Harries committed on
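
Llama 2's chat variant expects a specific prompt layout built from `[INST]` and `<<SYS>>` markers. A sketch of the single-turn format (multi-turn conversations chain further `[INST] ... [/INST]` blocks):

```python
def llama2_chat_prompt(system: str, user: str) -> str:
    """Single-turn Llama 2 chat format; the model's reply follows [/INST]."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

print(llama2_chat_prompt("You are a helpful assistant.", "What is RoPE scaling?"))
```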

add a basic ds zero3 config (#347)
bb53a16

winglian committed on
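
ZeRO stage 3 shards parameters, gradients, and optimizer state across GPUs. A minimal config of the kind this commit adds, written as Python for illustration (the keys are standard DeepSpeed options; the specific values are assumptions):

```python
import json

ds_zero3 = {
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": True,
        "stage3_gather_16bit_weights_on_model_save": True,  # usable final checkpoint
    },
    "bf16": {"enabled": "auto"},
    "train_micro_batch_size_per_gpu": "auto",  # "auto" defers to the HF Trainer
    "gradient_accumulation_steps": "auto",
}
with open("zero3.json", "w") as f:
    json.dump(ds_zero3, f, indent=2)
```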

Update XFormers Attention Monkeypatch to handle Llama-2 70B (GQA) (#339)
10405b9

ssmi153 committed on
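
Llama-2 70B uses grouped-query attention (GQA): fewer key/value heads than query heads, so an attention monkeypatch must expand the KV heads before the usual score computation. The standard expansion step, sketched:

```python
import torch

def repeat_kv(hidden: torch.Tensor, n_rep: int) -> torch.Tensor:
    """Expand (batch, num_kv_heads, seq, head_dim) to num_kv_heads * n_rep heads,
    mirroring the expansion GQA attention patches need before Q @ K^T."""
    if n_rep == 1:
        return hidden
    batch, kv_heads, seq_len, head_dim = hidden.shape
    hidden = hidden[:, :, None, :, :].expand(batch, kv_heads, n_rep, seq_len, head_dim)
    return hidden.reshape(batch, kv_heads * n_rep, seq_len, head_dim)

k = torch.randn(1, 8, 16, 128)  # 8 KV heads, 70B-style GQA
print(repeat_kv(k, 8).shape)    # torch.Size([1, 64, 16, 128]) -> 64 query heads
```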

Added Orca Mini prompt strategy (#263)
c93655c

Jan Philipp Harries committed on

optimize the iteration when tokenizing large datasets (#332)
fe28543

winglian committed on

Merge pull request #336 from tmm1/flash-attn
0d2e34f

tmm1 committed on

Merge pull request #337 from tmm1/readme-fix
b56a6c0

tmm1 committed on

fix typo
2eda9e0

tmm1 committed on

scope flash-attn+qlora fix correctly, scope to llama, add comment
78b9efb

tmm1 committed on

move flash-attn monkey patch alongside the others
312a9fa

tmm1 committed on

python 3.10 and 3.11 both work fine, as does pytorch 2.1.0.dev
58d6659

tmm1 committed on

there is no configs folder
cc7e800

tmm1 committed on

feat/llama-2 examples (#319)
dc71d88

Mads Henrichsen (mhenrichsen) committed on

ensure flash-attn fixes happen in both adapter/lora modes, and use torch_dtype
248bf90

tmm1 committed on

qlora w flash attention fixes (#333)
77085ea

winglian committed on
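
flash-attn kernels require fp16/bf16 inputs, while k-bit (QLoRA) preparation upcasts layer norms and embeddings to fp32 for stability, so the two clash until the affected modules are cast back to the training dtype. A sketch of that kind of dtype normalization (the function name is hypothetical):

```python
import torch
from torch import nn

def cast_modules_for_flash_attn(model: nn.Module, torch_dtype=torch.bfloat16) -> None:
    """Cast norm, lm_head, and embedding modules back to a dtype flash-attn accepts."""
    for name, module in model.named_modules():
        if "norm" in name:
            module.to(torch_dtype)
        if "lm_head" in name or "embed_tokens" in name:
            if hasattr(module, "weight"):
                module.to(torch_dtype)
```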

add peft install back since it doesn't get installed by setup.py (#331)
db2a358

winglian committed on

pin accelerate so it works with llama2 (#330)
6c9a87c

winglian committed on

fix FSDP save of final model (#329)
894cba0

winglian committed on

update README for updated docker images (#328)
41a4d15

winglian committed on

Prune cuda117 (#327)
2c37bf6

winglian committed on

latest HEAD of accelerate causes 0 loss immediately w FSDP (#321)
9f69c4d

winglian committed on