Commit History

47d601f  optionally define whether to use_fast tokenizer (winglian)
88e17ff  add float16 docs and tweak typehints (winglian)
136522f  style correction (maciej.karasek)
556fe40  issue #205 bugfix (maciej.karasek)
fd2c981  Merge branch 'main' into flash-optimum (winglian, unverified)
93dacba  Merge pull request #187 from OpenAccess-AI-Collective/strip-peft-device-map (winglian, unverified)
8002ffb  Merge pull request #177 from NanoCode012/fix/landmark-patch (winglian, unverified)
5e616d9  Merge branch 'main' into strip-peft-device-map (winglian, unverified)
8e568bb  Merge pull request #159 from AngainorDev/patch-1 (Nanobit, unverified)
c9a149f  add check for attr (winglian)
b565ecf  Fix strict and Lint (Angainor)
fe0b768  match up gradient checkpointing when using lora w config (winglian)
563b6d8  Fix undefined LlamaForCausalLM and del try except (Nanobit)
cd0a6f6  peft no longer needs device_map (winglian)
919727b  Refactor landmark attention patch (Nanobit)
a808bf9  Fix missing cfg. (Angainor Development, unverified)
0124825  Merge pull request #182 from OpenAccess-AI-Collective/fix-llama-ref (winglian, unverified)
ab5cd28  more gpt-neox long ctx fixes (winglian)
1210dc8  more tweaks to do pre-training with bettertransformers (winglian)
1edc30c  add support for opimum bettertransformers (winglian)
14163c1  fix for local variable 'LlamaForCausalLM' referenced before assignment (winglian)
79e2a6f  Merge branch 'main' into patch-1 (Angainor Development, unverified)
a03a7d7  add support to extend context with xpos rope (winglian)
7f09106  fix for max sequence len across different model types (winglian)
aefb2fc  Fix backward compat for peft (Nanobit)
813cfa4  WIP: Rely on cfg.inference (Angainor Development, unverified)
e44c9e0  Fix patching via import instead of hijacking (Nanobit)
55b8542  Feat: Add landmark attention (Nanobit)
df9528f  Fix future deprecate prepare_model_for_int8_training (Nanobit)
193c73b  Fix training over existing lora (Angainor Development, unverified)
4ac9e25  new prompters, misc fixes for output dir missing using fsdp, and changing max seq len (winglian)
2d0ba3b  Merge pull request #124 from OpenAccess-AI-Collective/xformers-fix (winglian, unverified)
e3c494c  remove unused import and update readme (winglian)
6cb2310  copy xformers attn from ooba since we removed dep on alpaca_lora_4bit (winglian)
39a208c  fix up tokenizer config, isort fix (winglian)
2520ecd  split up llama model loading so config can be loaded from base config and models can be loaded from a path (winglian)
594e72b  Fix incorrect rebase (Nanobit)
cfcc549  fix relative path for fixtures (winglian)
37293dc  Apply isort then black (Nanobit)
e9650d3  Fix mypy typing (Nanobit)
f4e5d86  Lint models.py (Nanobit)
e65aeed  fix relative path for fixtures (winglian)
56f9ca5  refactor: fix previous refactors (Nanobit)
8bd7a49  Refactor to use DictDefault instead (Nanobit)
bdfe7c9  Convert attrdict to addict (Nanobit)
0d4a7f4  Merge pull request #67 from OpenAccess-AI-Collective/refactor-tokenizer-load (winglian, unverified)
147241c  Merge branch 'main' into refactor/rename-4b-to-gptq (winglian, unverified)
4c90633  fix auto linear modules for lora w/o any set already (winglian)
dd00657  refactor(param): rename load_4bit config param by gptq (Thytu)