# AISquare-Instruct-llama2-koen-13b-v0.9.24
## Model Details

**Developed by:** Inswave Systems UI Platform Team

### Method

Trained with supervised fine-tuning (SFT) followed by Direct Preference Optimization (DPO).
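As a rough illustration of the DPO objective used in training, the loss can be sketched in a few lines of PyTorch. This is a minimal sketch, not the team's actual training code; the function name `dpo_loss`, the toy log-probabilities, and the `beta=0.1` value are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss (illustrative sketch).

    Each argument is a tensor of per-sequence log-probabilities from
    the policy being trained or the frozen reference model.
    """
    # Log-ratios of the policy against the frozen reference model
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Push the margin between chosen and rejected responses apart
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Toy example with made-up log-probabilities: the policy already
# prefers the chosen response, so the loss is below log(2)
loss = dpo_loss(
    torch.tensor([-10.0]), torch.tensor([-30.0]),
    torch.tensor([-12.0]), torch.tensor([-25.0]),
)
```

In practice the per-sequence log-probabilities come from summing token log-probs of the chosen and rejected completions under the policy and reference models.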
### Hardware

The model was trained on a single node with four NVIDIA A100 GPUs (A100 x4 * 1).

### Base Model

beomi/llama2-koen-13b
### Implementation Code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "inswave/AISquare-Instruct-llama2-koen-13b-v0.9.24"

# Load the model in half precision and shard it across available devices
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```