qwp4w3hyb committed
Commit 6914339
1 Parent(s): b1486a2

Update README.md

Files changed (1): README.md +1 -2
README.md CHANGED
@@ -23,9 +23,8 @@ base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
 
 # Quant Infos
 
-- Requires latest master + [Rope Scaling PR](https://github.com/ggerganov/llama.cpp/pull/8676)
+- ~~Requires latest master + [Rope Scaling PR](https://github.com/ggerganov/llama.cpp/pull/8676).~~ Rope scaling is merged, so just a recent master is required now.
 - [@ubergarm](https://huggingface.co/ubergarm) explained how to set up your llama.cpp [here](https://huggingface.co/qwp4w3hyb/Meta-Llama-3.1-8B-Instruct-iMat-GGUF/discussions/1#66a26b63de4e162dd84c22c5)
-- Might not be perfect yet, but seems to mostly work, including 128k context.
 - quants done with an importance matrix for improved quantization loss
 - Quantized ggufs & imatrix from hf bf16, through bf16. `safetensors bf16 -> gguf bf16 -> quant` for *optimal* quant loss.
 - Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S
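
For readers who want to reproduce the `safetensors bf16 -> gguf bf16 -> quant` pipeline described in the README, here is a minimal sketch using stock llama.cpp tools (`convert_hf_to_gguf.py`, `llama-imatrix`, `llama-quantize`). The file names, output quant type, and calibration text are illustrative placeholders, not the exact inputs used for this repo:

```bash
# 1. Convert the hf safetensors checkpoint straight to a bf16 gguf
#    (no lossy f16 intermediate). Paths are placeholders.
python convert_hf_to_gguf.py ./Meta-Llama-3.1-70B-Instruct \
  --outtype bf16 --outfile llama-3.1-70b-instruct-bf16.gguf

# 2. Compute an importance matrix from a calibration text
#    (calibration.txt is a placeholder for whatever corpus you use).
./llama-imatrix -m llama-3.1-70b-instruct-bf16.gguf \
  -f calibration.txt -o imatrix.dat

# 3. Quantize from the bf16 gguf using the imatrix; IQ4_XS is just
#    one example of the quant types covered here (Q8_0 down to IQ1_S).
./llama-quantize --imatrix imatrix.dat \
  llama-3.1-70b-instruct-bf16.gguf \
  llama-3.1-70b-instruct-IQ4_XS.gguf IQ4_XS
```

Quantizing directly from the bf16 gguf is the point of the `safetensors bf16 -> gguf bf16 -> quant` path: it skips an extra lossy f16 round-trip before quantization.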