---
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
- rp
- roleplay
- uncensored
pipeline_tag: text-generation
inference: false
language:
- en
---
# **GGUF-Imatrix quantizations for [Test157t/Prima-LelantaclesV6-7b](https://huggingface.co/Test157t/Prima-LelantaclesV6-7b/).**

# What does "Imatrix" mean?

It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.
The **Imatrix** is calculated from calibration data, and it helps determine the importance of different model activations during the quantization process.
The idea is to preserve the most important information during quantization, which helps reduce the loss of model performance.
One of the benefits of using an Imatrix is that it can lead to better model quality, especially when the calibration data is diverse.

More information: [[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).

For the `--imatrix` data, `imatrix-Prima-LelantaclesV6-7b-F16.dat` was used.

`Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)`

Quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2294](https://github.com/ggerganov/llama.cpp/releases/tag/b2294).

The new **IQ3_S** quant option has been shown to perform better than the old Q3_K_S, so I added it instead of the latter. It is only supported in `koboldcpp-1.59.1` or higher.

*If you want any specific quantization to be added, feel free to ask.*

All credits belong to the [creator](https://huggingface.co/Test157t/).

# Original model information:

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/pildYZ9hiswwLD4rBLt1A.jpeg)

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method.

### Models Merged

The following models were included in the merge:
* [Test157t/West-Pasta-Lake-7b](https://huggingface.co/Test157t/West-Pasta-Lake-7b)
* [Test157t/Lelantacles6-Experiment26-7B](https://huggingface.co/Test157t/Lelantacles6-Experiment26-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_ties
base_model: Test157t/Lelantacles6-Experiment26-7B
parameters:
  normalize: true
models:
  - model: Test157t/West-Pasta-Lake-7b
    parameters:
      weight: 1
  - model: Test157t/Lelantacles6-Experiment26-7B
    parameters:
      weight: 1
dtype: float16
```
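For reference, below is a minimal sketch of running one of these GGUF quants locally with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), which supports the IQ quant types in recent builds. The model filename and prompt are placeholders, not actual file names from this repository; substitute whichever quant file you downloaded.

```python
# Minimal usage sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python).
# The model_path below is a placeholder; point it at the quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Prima-LelantaclesV6-7b-IQ3_S-imat.gguf",  # placeholder filename
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if available; set to 0 for CPU-only
)

output = llm(
    "Write a short roleplay scene between two rival sea captains.",
    max_tokens=256,
    temperature=0.8,
)
print(output["choices"][0]["text"])
```

The same files can of course be loaded directly in `koboldcpp-1.59.1` or newer, as noted above.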