---
license: mit
datasets:
- nlpai-lab/kullm-v2
language:
- ko
- en
library_name: mlc
tags:
- llama-2-ko
---
## q4f16_1 converted model (4-bit quantized) from [Llama-2-ko-7b-Chat](https://huggingface.co/kfkas/Llama-2-ko-7b-Chat)
This repository contains a 4-bit quantized (q4f16_1) model built with MLC-LLM from the weights of [kfkas/Llama-2-ko-7b-Chat](https://huggingface.co/kfkas/Llama-2-ko-7b-Chat).
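For intuition, q4f16_1-style formats store each weight as a 4-bit integer plus a shared float16 scale per group. The sketch below is a toy illustration of that idea only; it is not MLC-LLM's actual kernel, and the group size and rounding details are illustrative assumptions.

```python
# Toy sketch of 4-bit group quantization (the idea behind q4f16_1-style formats).
# NOT MLC-LLM's implementation: group size and rounding are illustrative choices.

def quantize_groups(weights, group_size=32):
    """Quantize floats to 4-bit ints in [-8, 7] with one scale per group."""
    groups = []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        # One scale per group, chosen so the largest magnitude maps near +/-7.
        scale = max(abs(w) for w in group) / 7.0 or 1.0
        q = [max(-8, min(7, round(w / scale))) for w in group]
        groups.append((scale, q))
    return groups

def dequantize(groups):
    """Reconstruct approximate float weights from (scale, ints) groups."""
    return [scale * q for scale, qs in groups for q in qs]
```

The reconstruction error per weight is bounded by roughly half a quantization step (scale / 2), which is why per-group scales matter: smaller groups track the local weight range more tightly at the cost of more scale storage.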