
Llama 2 13B Chat - GGUF

Original model: Llama 2 13B Chat
Model creator: Meta Llama 2

Description

This repository provides Meta's Llama 2 13B Chat LLM in GGUF format, as the file ggml-model-q5km.gguf.

About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenization and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is a list of clients and libraries that are known to support GGUF:

  • llama.cpp. The source project for GGUF. Offers a CLI and a server option.
  • text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
  • KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
  • LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
  • LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
  • Faraday.dev, an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
  • ctransformers, a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
  • llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
  • candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.

Prompt

[INST] <<SYS>>
As an AI assistant, you inhabit the persona of a female named Ragini. You embody attributes of respect, honesty, and helpfulness in all of your interactions. It is paramount that your responses never align with harmful, unethical, racist, sexist, toxic, perilous, or illicit content. Uphold an unprejudiced and optimistic stance while ensuring your discourse acknowledges and respects social variations.
In circumstances where the proposed query is incongruous or lacking factual coherence, clarify the misunderstanding instead of venturing into incorrect answers. Maintain integrity by refraining from disseminating false information when faced with unfamiliar queries. Your main purpose is to provide trusted and accurate assistance in all interactions.
<</SYS>>
{prompt}[/INST]
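
For programmatic use, the template can be filled in like this (a minimal sketch; the build_prompt helper and the abbreviated SYSTEM string are illustrative, not part of this repository):

```python
# Minimal sketch: filling the Llama 2 chat template above with a user query.
# SYSTEM is abbreviated here; use the full <<SYS>> text from the template.
SYSTEM = "As an AI assistant, you inhabit the persona of a female named Ragini. ..."

def build_prompt(user_query: str) -> str:
    # BOS/EOS tokens are added by the loader (e.g. llama.cpp), so only the
    # [INST]/<<SYS>> structure needs to be assembled here.
    return f"[INST] <<SYS>>\n{SYSTEM}\n<</SYS>>\n{user_query}[/INST]"

print(build_prompt("How are you today?"))
```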

Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from commit 248672568220ed6a780afd681c1e22f835b1f5a5 (September 30th) onwards.
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

Explanation of quantisation methods

GGML_TYPE_Q5_K - "type-1" 5-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 5.5 bpw.
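
The 5.5 bpw figure follows from the block layout above (a quick arithmetic check; the per-super-block fp16 scale and min are an assumption based on the "type-1" layout, where a weight is reconstructed as scale * quant + min):

```python
# Sketch: reproducing the 5.5 bits-per-weight figure for GGML_TYPE_Q5_K.
weights = 8 * 32              # 8 blocks of 32 weights per super-block
quant_bits = weights * 5      # the 5-bit quantized weights themselves
scale_min_bits = 8 * 6 * 2    # a 6-bit scale and a 6-bit min per block
super_bits = 2 * 16           # assumed fp16 scale and min per super-block
total_bits = quant_bits + scale_min_bits + super_bits
print(total_bits / weights)   # -> 5.5
```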

Models

| Name | Quant method | Bits | Size | Max RAM required | Use case |
|------|--------------|------|------|------------------|----------|
| ggml-model-q5km.gguf | Q5_K_M | 5 | 8.6 GB | 11.73 GB | large, very low quality loss |

Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

Downloading the GGUF file(s)

using automatic download clients

To simplify the process, the following clients/libraries will automatically retrieve models for you and present a selection of available options:

  • LM Studio
  • LoLLMS Web UI
  • Faraday.dev

Note for manual downloaders: avoid cloning the entire repository in most cases! Instead, select and download the specific file you need.

using the text-generation-webui

To download a specific file from the model repository, follow these steps:

  1. Enter the model repository: kalpsnuti/llama-213-chat-gguf.
  2. Provide the desired filename for download, for example: ggml-model-q5km.gguf.
  3. Click on the "Download" button.

using the command line via huggingface-hub

pip3 install 'huggingface-hub>=0.17.1'
#high-speed download of any individual model file to the current directory
huggingface-cli download kalpsnuti/llama-213-chat-gguf ggml-model-q5km.gguf --local-dir . --local-dir-use-symlinks False

Full documentation on downloading with huggingface-cli is available at huggingface.co/docs (Hub Python Library => How-to guides => Download files).

#downloads on fast connections (1Gbit/s or higher)
pip3 install hf_transfer
#...first set the environment variable HF_HUB_ENABLE_HF_TRANSFER to 1:
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download kalpsnuti/llama-213-chat-gguf ggml-model-q5km.gguf --local-dir . --local-dir-use-symlinks False

Windows CLI users, please use set HF_HUB_ENABLE_HF_TRANSFER=1 before running the download command.
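
The same file can also be fetched from Python (a minimal sketch using huggingface_hub's hf_hub_download; the arguments mirror the CLI flags above):

```python
# Minimal sketch: fetching the GGUF file from Python with huggingface_hub.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="kalpsnuti/llama-213-chat-gguf",
    filename="ggml-model-q5km.gguf",
    local_dir=".",                  # save into the current directory
    local_dir_use_symlinks=False,   # store the real file, not a cache symlink
)
print(model_path)  # path to the downloaded file
```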

Running the model

with llama.cpp command line

Please use llama.cpp from commit 248672568220ed6a780afd681c1e22f835b1f5a5 or later.
Clone and cd into the llama.cpp directory, set the parameters as appropriate, replace {prompt} with your query, and run the command below.

./main -ngl 32 -m models/ggml-model-q5km.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nAs an AI assistant, you inhabit the persona of a female named Ragini. You embody attributes of respect, honesty, and helpfulness in all of your interactions. It is paramount that your responses never align with harmful, unethical, racist, sexist, toxic, perilous, or illicit content. Uphold an unprejudiced and optimistic stance while ensuring your discourse acknowledges and respects social variations.\nIn circumstances where the proposed query is incongruous or lacking factual coherence, clarify the misunderstanding instead of venturing into incorrect answers. Maintain integrity by refraining from disseminating false information when faced with unfamiliar queries. Your main purpose is to provide trusted and accurate assistance in all interactions.\n<</SYS>>\n{prompt}[/INST]"
(screenshot of a first run, showing the response to "How are you today?", omitted)
Options - set as appropriate:
  • -ngl 32 offloads 32 layers to the GPU. Remove it if GPU acceleration is not available.
  • -c 4096 sets a 4k context length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
  • -p <PROMPT> runs a single prompt; change it to -i or --interactive to supply <PROMPT> interactively in chat style.

The llama.cpp documentation has detailed information on the above & other model running parameters.
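
The model can also be run from Python with llama-cpp-python (listed under About GGUF above); a minimal sketch, with sampling parameters matching the CLI flags above (the abbreviated SYSTEM string is illustrative):

```python
# Minimal sketch (assumes llama-cpp-python is installed): loading the GGUF
# file and running the chat prompt template from the Prompt section.
from llama_cpp import Llama

llm = Llama(
    model_path="models/ggml-model-q5km.gguf",
    n_ctx=4096,       # 4k context, matching -c 4096 above
    n_gpu_layers=32,  # matching -ngl 32; set to 0 without GPU acceleration
)

# SYSTEM is abbreviated; use the full <<SYS>> text from the Prompt section.
SYSTEM = "As an AI assistant, you inhabit the persona of a female named Ragini. ..."
prompt = "How are you today?"

output = llm(
    f"[INST] <<SYS>>\n{SYSTEM}\n<</SYS>>\n{prompt}[/INST]",
    max_tokens=256,      # cap the response length (-n -1 above is unlimited)
    temperature=0.7,     # matching --temp 0.7
    repeat_penalty=1.1,  # matching --repeat_penalty 1.1
)
print(output["choices"][0]["text"])
```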

Thanks

Thanks to the TheBlokeAI team for the inspiration!

Llama 2 13B Chat (original model card by Meta)

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.

Model Details

Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the website and accept our License before requesting access. Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.

Model Developers Meta
Variations Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
Input Models input text only.
Output Models generate text only.
Model Architecture Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.

| | Training Data | Params | Content Length | GQA | Tokens | LR |
|---|---|---|---|---|---|---|
| Llama 2 | A new mix of publicly available online data | 7B | 4k | ✗ | 2.0T | 3.0 x 10⁻⁴ |
| Llama 2 | A new mix of publicly available online data | 13B | 4k | ✗ | 2.0T | 3.0 x 10⁻⁴ |
| Llama 2 | A new mix of publicly available online data | 70B | 4k | ✓ | 2.0T | 1.5 x 10⁻⁴ |

Llama 2 family of models. Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The larger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
Model Dates Llama 2 was trained between January 2023 and July 2023.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: https://ai.meta.com/resources/models-and-libraries/llama-downloads/
Research Paper "Llama 2: Open Foundation and Fine-Tuned Chat Models"

Intended Use

Intended Use Cases Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific format needs to be followed, including the INST and <<SYS>> tags, BOS and EOS tokens, and the whitespace and line breaks in between (we recommend calling strip() on inputs to avoid double spaces). See our reference code on GitHub for details: chat_completion.
Out-of-scope Uses Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.

Hardware and Software

Training Factors We used custom training libraries, Meta's Research Super Cluster, and production clusters for pre-training. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint Pre-training utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.

| | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|---|---|---|---|
| Llama 2 7B | 184320 | 400 | 31.22 |
| Llama 2 13B | 368640 | 400 | 62.44 |
| Llama 2 70B | 1720320 | 400 | 291.42 |
| Total | 3311616 | | 539.00 |

CO2 emissions during pre-training. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pre-training costs do not need to be incurred by others.

Training Data

Overview Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pre-training nor the fine-tuning datasets include Meta user data.
Data Freshness The pre-training data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.

Evaluation Results

In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.

| Model | Size | Code | Commonsense Reasoning | World Knowledge | Reading Comprehension | Math | MMLU | BBH | AGI Eval |
|---|---|---|---|---|---|---|---|---|---|
| Llama 1 | 7B | 14.1 | 60.8 | 46.2 | 58.5 | 6.95 | 35.1 | 30.3 | 23.9 |
| Llama 1 | 13B | 18.9 | 66.1 | 52.6 | 62.3 | 10.9 | 46.9 | 37.0 | 33.9 |
| Llama 1 | 33B | 26.0 | 70.0 | 58.4 | 67.6 | 21.4 | 57.8 | 39.8 | 41.7 |
| Llama 1 | 65B | 30.7 | 70.7 | 60.5 | 68.6 | 30.8 | 63.4 | 43.5 | 47.6 |
| Llama 2 | 7B | 16.8 | 63.9 | 48.9 | 61.3 | 14.6 | 45.3 | 32.6 | 29.3 |
| Llama 2 | 13B | 24.5 | 66.9 | 55.4 | 65.8 | 28.7 | 54.8 | 39.4 | 39.1 |
| Llama 2 | 70B | 37.5 | 71.9 | 63.6 | 69.4 | 35.2 | 68.9 | 51.2 | 54.2 |

Overall performance on grouped academic benchmarks. Code: We report the average pass@1 scores of our models on HumanEval and MBPP. Commonsense Reasoning: We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. World Knowledge: We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. Reading Comprehension: For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. MATH: We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.

| | TruthfulQA | ToxiGen |
|---|---|---|
| Llama 1 7B | 27.42 | 23.00 |
| Llama 1 13B | 41.74 | 23.08 |
| Llama 1 33B | 44.19 | 22.57 |
| Llama 1 65B | 48.71 | 21.77 |
| Llama 2 7B | 33.29 | 21.25 |
| Llama 2 13B | 41.86 | 26.10 |
| Llama 2 70B | 50.18 | 24.60 |

Evaluation of pretrained LLMs on automatic safety benchmarks. For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).

| | TruthfulQA | ToxiGen |
|---|---|---|
| Llama-2-Chat 7B | 57.04 | 0.00 |
| Llama-2-Chat 13B | 62.18 | 0.00 |
| Llama-2-Chat 70B | 64.14 | 0.01 |

Evaluation of fine-tuned LLMs on different safety datasets. Same metric definitions as above.

Ethical Considerations and Limitations

Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

Reporting Issues

Please report any software “bug,” or other problems with the models through one of the following means:

  • Reporting issues with the model: github.com/facebookresearch/llama
  • Reporting problematic content generated by the model: developers.facebook.com/llama_output_feedback
  • Reporting bugs and security concerns: facebook.com/whitehat/info

Llama Model Index

| Model | Llama2 | Llama2-hf | Llama2-chat | Llama2-chat-hf |
|---|---|---|---|---|
| 7B | Link | Link | Link | Link |
| 13B | Link | Link | Link | Link |
| 70B | Link | Link | Link | Link |