---
base_model:
  - NousResearch/Hermes-3-Llama-3.1-70B
  - mlabonne/Llama-3-70B-Instruct-abliterated-LORA
library_name: transformers
tags:
  - mergekit
  - merge
---

🪽 Hermes-3-Llama-3.1-70B-lorablated


8B version: mlabonne/Hermes-3-Llama-3.1-8B-lorablated

This is an uncensored version of NousResearch/Hermes-3-Llama-3.1-70B using lorablation.

You can see in the following example how Hermes 3 refuses to answer a legitimate question while the abliterated model complies:

[Screenshot: Hermes 3 refuses the question, while the lorablated model answers it]

The recipe is based on @grimjim's grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter (special thanks):

  1. Extraction: We extract a LoRA adapter by comparing two models: a censored Llama 3 (meta-llama/Meta-Llama-3-70B-Instruct) and an abliterated Llama 3.1 (failspy/Meta-Llama-3.1-70B-Instruct-abliterated); see the sketch after this list.
  2. Merge: We merge this new LoRA adapter into the censored NousResearch/Hermes-3-Llama-3.1-70B using task arithmetic to abliterate it.
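The extraction step amounts to a low-rank approximation of the weight difference between the abliterated model and the original Instruct model (mergekit provides a mergekit-extract-lora utility for this). A minimal, self-contained sketch of the idea, using toy tensors and a hypothetical helper function rather than the actual implementation:

# Illustrative sketch: LoRA extraction as a truncated SVD of the weight delta.
# In practice this is done per linear layer by mergekit-extract-lora.
import torch

def extract_lora(w_abliterated: torch.Tensor, w_original: torch.Tensor, rank: int = 64):
    delta = w_abliterated - w_original                      # the "abliteration" direction
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    lora_b = u[:, :rank] * s[:rank]                         # (out_features, rank)
    lora_a = vh[:rank, :]                                   # (rank, in_features)
    return lora_a, lora_b                                   # delta ≈ lora_b @ lora_a

# Toy weights standing in for one projection matrix
w_original = torch.randn(128, 256)
w_abliterated = w_original + 0.01 * torch.randn(128, 256)
a, b = extract_lora(w_abliterated, w_original, rank=16)
print(torch.dist(b @ a, w_abliterated - w_original))        # reconstruction error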

[Diagram: the lorablation recipe (LoRA extraction followed by a task arithmetic merge)]

See this article to learn more about abliteration.

⚡ Quantization

🧩 Configuration

This model was merged with the task arithmetic merge method, using NousResearch/Hermes-3-Llama-3.1-70B + Llama-3.1-70B-Instruct-abliterated-LORA as a base.
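In this setup, task arithmetic simply adds the reconstructed LoRA delta (scaled by the weight parameter) onto the corresponding Hermes 3 weight matrices. A rough per-layer sketch, assuming the lora_a / lora_b factors from the extraction step above (the function below is a conceptual illustration, not mergekit's internals):

import torch

def lorablate_layer(w_hermes: torch.Tensor,
                    lora_a: torch.Tensor,
                    lora_b: torch.Tensor,
                    weight: float = 1.0) -> torch.Tensor:
    # Task vector = weight delta reconstructed from the LoRA factors
    task_vector = lora_b @ lora_a
    return w_hermes + weight * task_vector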

The following YAML configuration was used to produce this model:

base_model: NousResearch/Hermes-3-Llama-3.1-70B+mlabonne/Llama-3.1-70B-Instruct-abliterated-LORA
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 80]
    model: NousResearch/Hermes-3-Llama-3.1-70B+mlabonne/Llama-3.1-70B-Instruct-abliterated-LORA
    parameters:
      weight: 1.0

You can reproduce this model using the following commands:

# Setup
git clone https://github.com/arcee-ai/mergekit.git
cd mergekit && pip install -e .
pip install bitsandbytes

# Merge using previous config
mergekit-yaml config.yaml Hermes-3-Llama-3.1-70B-lorablated --allow-crimes --lora-merge-cache=./cache
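Once merged (or downloaded from the Hub), the model loads like any other transformers checkpoint. A minimal usage sketch; the prompt and generation settings are illustrative:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/Hermes-3-Llama-3.1-70B-lorablated"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a haiku about model merging."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))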