
Official AQLM quantization of meta-llama/Llama-2-70b-hf.

For this quantization, we used 2 codebooks of 8 bits each.
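As a quick sanity check on the bitrate (assuming AQLM's default group size of 8 weights, which this card does not state explicitly): 2 codebooks × 8 bits per code = 16 bits per group of 8 weights, i.e. roughly 2 bits per weight, consistent with the "2Bit" in the model name.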

Selected evaluation results for this and other models:

| Model | AQLM scheme | WikiText-2 PPL | Model size, GB | Hub link |
|---|---|---|---|---|
| Llama-2-7b | 1x16 | 5.92 | 2.4 | Link |
| Llama-2-7b | 2x8 | 6.69 | 2.2 | Link |
| Llama-2-7b | 8x8 | 6.61 | 2.2 | Link |
| Llama-2-13b | 1x16 | 5.22 | 4.1 | Link |
| Llama-2-70b | 1x16 | 3.83 | 18.8 | Link |
| Llama-2-70b (THIS) | 2x8 | 4.21 | 18.2 | Link |
| Mixtral-8x7b | 1x16 | 3.35 | 12.6 | Link |
| Mixtral-8x7b-Instruct | 1x16 | - | 12.6 | Link |

For more information on inference, as well as on how to quantize models yourself, please refer to the official GitHub repo.
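A minimal loading sketch, assuming a recent `transformers` release with AQLM support and the `aqlm` package installed (e.g. `pip install aqlm[gpu]`); the prompt is purely illustrative:

```python
# Requires: pip install aqlm[gpu] transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Llama-2-70b-AQLM-2Bit-2x8-hf"

# Load the AQLM-quantized checkpoint; device_map="auto" spreads the
# model across the available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Quick smoke test of generation.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```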

