---
base_model:
- WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
- MrRobotoAI/llama3-8B-Special-Dark-v2.0
- Undi95/Llama-3-Unholy-8B
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
- turboderp/llama3-turbcat-instruct-8b
- Undi95/Llama-3-LewdPlay-8B
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.

### Models Merged

The following models were included in the merge:
* [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0)
* [MrRobotoAI/llama3-8B-Special-Dark-v2.0](https://huggingface.co/MrRobotoAI/llama3-8B-Special-Dark-v2.0)
* [Undi95/Llama-3-Unholy-8B](https://huggingface.co/Undi95/Llama-3-Unholy-8B)
* [VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct)
* [UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3)
* [turboderp/llama3-turbcat-instruct-8b](https://huggingface.co/turboderp/llama3-turbcat-instruct-8b)
* [Undi95/Llama-3-LewdPlay-8B](https://huggingface.co/Undi95/Llama-3-LewdPlay-8B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: turboderp/llama3-turbcat-instruct-8b
    parameters:
      weight: 1.0
  - model: MrRobotoAI/llama3-8B-Special-Dark-v2.0
    parameters:
      weight: 1.0
  - model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
    parameters:
      weight: 1.0
  - model: Undi95/Llama-3-LewdPlay-8B
    parameters:
      weight: 1.0
  - model: Undi95/Llama-3-Unholy-8B
    parameters:
      weight: 1.0
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters:
      weight: 1.0
  - model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
    parameters:
      weight: 1.0
merge_method: linear
dtype: float16
```
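
For reference, the [linear](https://arxiv.org/abs/2203.05482) method takes a per-tensor weighted average of the listed checkpoints; with every `weight` set to 1.0 as above, each of the seven models contributes equally (assuming mergekit's default weight normalization). The snippet below is a minimal Python sketch of that idea; the `linear_merge` helper and its arguments are illustrative and not part of mergekit's API.

```python
import torch

def linear_merge(state_dicts, weights=None):
    """Sketch of a linear merge: a per-tensor weighted average of checkpoints.

    `state_dicts` is a list of model state dicts with identical keys and shapes;
    `weights` defaults to equal weighting, mirroring the config above where every
    weight is 1.0 and the totals are normalized.
    """
    if weights is None:
        weights = [1.0] * len(state_dicts)
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize so contributions sum to 1

    merged = {}
    for key in state_dicts[0]:
        # Accumulate in float32 for numerical stability, then cast back,
        # matching `dtype: float16` in the configuration.
        acc = sum(w * sd[key].to(torch.float32) for w, sd in zip(weights, state_dicts))
        merged[key] = acc.to(torch.float16)
    return merged
```

To reproduce the merge itself, the configuration above would normally be saved to a file and passed to mergekit's `mergekit-yaml` command (e.g. `mergekit-yaml config.yaml ./output-dir`); exact options may vary between mergekit versions.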