mpt-7b / attention.py

Commit History

directly use bool instead of torch.float16 to avoid a crash on ASICs such as the HPU, which do not support float16
72fc4ea

sywangyi committed on
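
A minimal sketch of the idea behind this commit (hypothetical helper names, not the actual diff): the causal mask is materialized as `torch.bool` rather than `torch.float16`, and is only converted into values at the point of use, in whatever dtype the attention scores already have, so devices without float16 support (e.g. HPU) never see a float16 tensor.

```python
import torch

def build_causal_mask(seq_len: int, device=None) -> torch.Tensor:
    # True marks positions that must be masked out (future tokens).
    return torch.triu(
        torch.ones(seq_len, seq_len, dtype=torch.bool, device=device),
        diagonal=1,
    )

def mask_scores(attn_scores: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # The bool mask is turned into values only here, in the dtype the
    # scores already have, so no float16 tensor is ever created.
    min_val = torch.finfo(attn_scores.dtype).min
    return attn_scores.masked_fill(mask, min_val)
```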

add explicit cast where running without autocast causes issues (#60)
e837ad7

daking and vchiley committed on
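
A hedged sketch of what an explicit cast can look like in an attention forward pass (hypothetical function, not the actual diff from #60): when autocast is not active, mixed-dtype operands have to be aligned by hand before the matmul.

```python
import torch

def attn_scores(query: torch.Tensor, key: torch.Tensor) -> torch.Tensor:
    # torch.matmul expects both operands in the same dtype. Under
    # torch.autocast the inputs are cast automatically; without it, a
    # float32 key meeting a bfloat16 query raises a dtype error, so the
    # cast is done explicitly here.
    if key.dtype != query.dtype:
        key = key.to(query.dtype)
    scale = query.shape[-1] ** -0.5
    return torch.matmul(query, key.transpose(-2, -1)) * scale
```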

LLM Foundry Updates 06-01-2023 (#47)
68e1a8e

abhi-mosaic committed on

Upload folder using huggingface_hub
c5ccdb7

daking committed on