---
license: other
---

# Koala: A Dialogue Model for Academic Research

This repo contains the weights of the Koala 13B model produced at Berkeley. It is the result of applying the delta weights from https://huggingface.co/young-geng/koala to the original LLaMA 13B model.

That merged model was then quantized to 4-bit using GPTQ-for-LLaMa and converted to GGML format for use with llama.cpp.

## Other Koala repos

I have also made these other Koala models available:

## How to run in llama.cpp

I use the following command line; adjust for your tastes and needs:

```shell
./main -t 18 -m koala-13B-4bit-128g.GGML.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "BEGINNING OF CONVERSATION:
USER: <PROMPT GOES HERE>
GPT:"
```
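If you are scripting prompts, the conversation format above can be built programmatically. Here is a minimal, hypothetical Python helper (the function name is my own, not part of llama.cpp or Koala):

```python
# Hypothetical helper: builds a prompt string in the Koala conversation
# format shown above ("BEGINNING OF CONVERSATION: / USER: ... / GPT:").
def koala_prompt(user_message: str) -> str:
    return (
        "BEGINNING OF CONVERSATION:\n"
        f"USER: {user_message}\n"
        "GPT:"
    )

print(koala_prompt("What is the capital of France?"))
```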

Change `-t 18` to the number of physical CPU cores you have. For example, if your system has 8 cores and 16 threads, use `-t 8`.
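If you are unsure of your core count, a quick Python sketch can suggest a value. Note the assumption here: SMT/hyper-threading typically doubles logical threads over physical cores, but this is not guaranteed on every CPU.

```python
import os

# os.cpu_count() reports logical threads. Assumption: with SMT/hyper-threading
# enabled, physical cores are roughly half the logical thread count.
logical = os.cpu_count() or 1
suggested_t = max(1, logical // 2)
print(f"logical threads: {logical}, suggested -t: {suggested_t}")
```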

You will need at least 16 GB of RAM to run this model without swapping.
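As a rough back-of-envelope check (my own arithmetic, assuming roughly 4.5 bits per parameter for 4-bit GGML quantization once scales are included), the weights alone account for most of that footprint:

```python
# Back-of-envelope estimate. Assumption: ~4.5 bits/param for 4-bit quantized
# weights plus their quantization scales; actual file size may differ.
params = 13e9
bits_per_param = 4.5
weights_gb = params * bits_per_param / 8 / 1e9
# The context cache and runtime buffers add several more GB on top of this.
print(f"~{weights_gb:.1f} GB for the weights alone")
```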

## How the Koala delta weights were merged

The Koala delta weights were originally merged using the following commands, producing koala-13B-HF:

```shell
git clone https://github.com/young-geng/EasyLM

git clone https://huggingface.co/TheBloke/llama-13b

mkdir koala_diffs && cd koala_diffs && wget https://huggingface.co/young-geng/koala/resolve/main/koala_13b_diff_v2
```

```shell
cd EasyLM

PYTHONPATH="${PWD}:$PYTHONPATH" python \
-m EasyLM.models.llama.convert_torch_to_easylm \
--checkpoint_dir=/content/llama-13b \
--output_file=/content/llama-13b-LM \
--streaming=True

PYTHONPATH="${PWD}:$PYTHONPATH" python \
-m EasyLM.scripts.diff_checkpoint --recover_diff=True \
--load_base_checkpoint='params::/content/llama-13b-LM' \
--load_target_checkpoint='params::/content/koala_diffs/koala_13b_diff_v2' \
--output_file=/content/koala_13b.diff.weights \
--streaming=True

PYTHONPATH="${PWD}:$PYTHONPATH" python \
-m EasyLM.models.llama.convert_easylm_to_hf --model_size=13b \
--output_dir=/content/koala-13B-HF \
--load_checkpoint='params::/content/koala_13b.diff.weights' \
--tokenizer_path=/content/llama-13b/tokenizer.model
```
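Conceptually, the `--recover_diff=True` step reconstructs the target weights from the base checkpoint plus the published diff. A minimal sketch of the idea in plain Python (this is an illustration only, not the EasyLM implementation, which streams real tensor shards):

```python
# Sketch only: a delta checkpoint stores (target - base) for each parameter,
# so recovering the target is elementwise addition with the base weights.
def recover_from_diff(base_state, diff_state):
    assert base_state.keys() == diff_state.keys()
    return {name: base + diff_state[name] for name, base in base_state.items()}

# Toy example with scalar "parameters" in place of real tensors.
base = {"w": 1.0, "b": -0.5}
diff = {"w": 0.25, "b": 0.5}
print(recover_from_diff(base, diff))  # {'w': 1.25, 'b': 0.0}
```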

## Further info

Check out the following links to learn more about the Berkeley Koala model.

## License

The model weights are intended for academic research only, subject to the LLaMA model license, OpenAI's Terms of Use for the data generated by its models, and ShareGPT's privacy practices. Any other use of the model weights, including but not limited to commercial use, is strictly prohibited.