---
language:
- ko
- en
license: llama3
library_name: transformers
tags:
- llama
- llama-3
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- MarkrAI/KoCommercial-Dataset
---

# Waktaverse-Llama-3-KO-8B-Instruct Model Card

## Model Details

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65d6e0640ff5bc0c9b69ddab/Va78DaYtPJU6xr4F6Ca4M.webp)

Waktaverse-Llama-3-KO-8B-Instruct is a Korean language model developed by the Waktaverse AI team. It is a specialized version of Meta-Llama-3-8B-Instruct, fine-tuned for Korean natural language processing tasks, and is designed to handle a variety of complex instructions and generate coherent, contextually appropriate responses.

- **Developed by:** Waktaverse AI
- **Model type:** Large Language Model
- **Language(s) (NLP):** Korean, English
- **License:** [Llama3](https://llama.meta.com/llama3/license)
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- **Tokenizer source:** [saltlux/Ko-Llama3-Luxia-8B](https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B)

## Model Sources

- **Repository:** [GitHub](https://github.com/PathFinderKR/Waktaverse-LLM/tree/main)
- **Paper:** [More Information Needed]

## Uses

### Direct Use

The model can be used directly for tasks such as text completion, summarization, and question answering without any further fine-tuning.

### Out-of-Scope Use

This model is not intended for high-stakes decision-making, including medical, legal, or safety-critical applications, due to the risks of relying on automated decisions. Deploying the model in ways that infringe on privacy rights or facilitate biased decision-making is strongly discouraged.

## Bias, Risks, and Limitations

While Waktaverse Llama 3 is a robust model, it shares the common limitations of machine learning models: potential biases in the training data, vulnerability to adversarial attacks, and unpredictable behavior on edge cases. There is also a risk of cultural and contextual misunderstanding, particularly when the model is applied to languages and contexts it was not specifically trained on.

## How to Get Started with the Model

You can run conversational inference using the Transformers Auto classes. We highly recommend adding a Korean system prompt for better output. Adjust the generation hyperparameters as needed.
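The example below builds the Llama 3 chat format by hand via `prompt_template()`. As a side note (not part of the original script), recent `transformers` releases ship the chat template with the tokenizer, so `tokenizer.apply_chat_template` should produce an equivalent prompt string; a minimal sketch:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct")

# Sketch: builds the same Llama 3 prompt string as the manual template below,
# assuming the tokenizer ships with the standard Llama 3 chat template.
messages = [
    {"role": "system", "content": "다음 지시사항에 대한 응답을 작성해주세요."},
    {"role": "user", "content": "피보나치 수열에 대해 설명해주세요."},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # appends the assistant header so the model continues from there
)
```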
### Example Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = (
    "cuda:0" if torch.cuda.is_available() else  # NVIDIA GPU
    "mps" if torch.backends.mps.is_available() else  # Apple Silicon GPU
    "cpu"
)

model_id = "PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map=device,
)

################################################################################
# Generation parameters
################################################################################
num_return_sequences = 1
max_new_tokens = 1024
temperature = 0.6
top_p = 0.9
repetition_penalty = 1.1

def prompt_template(system, user):
    # Llama 3 chat format: system and user turns, then the assistant header
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

def generate_response(system, user):
    prompt = prompt_template(system, user)
    input_ids = tokenizer.encode(
        prompt,
        add_special_tokens=False,
        return_tensors="pt"
    ).to(device)
    outputs = model.generate(
        input_ids=input_ids,
        pad_token_id=tokenizer.eos_token_id,
        num_return_sequences=num_return_sequences,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=temperature,
        top_p=top_p,
        repetition_penalty=repetition_penalty
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=False)

system_prompt = "다음 지시사항에 대한 응답을 작성해주세요."
user_prompt = "피보나치 수열에 대해 설명해주세요."

response = generate_response(system_prompt, user_prompt)
print(response)
```

### Example Output

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

다음 지시사항에 대한 응답을 작성해주세요.<|eot_id|><|start_header_id|>user<|end_header_id|>

피보나치 수열에 대해 설명해주세요.<|eot_id|><|start_header_id|>assistant<|end_header_id|>

피보나치 수열은 0과 1로 시작하며, 각 항이 이전 두 항의 합으로 계산되는 수열입니다. 이 수열에는 무한히 많은 숫자가 포함되어 있으며, 첫 번째 몇 개의 항은 다음과 같습니다:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 985, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418,...

피보나치 수열은 수학적 구조와 재귀 관계를 가지고 있습니다. 예를 들어, 피보나치 수열의 n번째 항은 (n-1)번째와 (n-2)번째 항의 합입니다.

피보나치 수열은 여러 분야에서 사용됩니다. 예를 들어, 화폐의 배치 문제에서는 피보나치 수열이 사용될 수 있습니다. 또한, 기하학에서 피보나치 수열은 점의 배열에 사용될 수 있습니다.

피보나치 수열은 수학자 레온아르도 피보나치의 이름을 따서 명명되었습니다. 그는 이 수열을 처음 발견하고 기록했습니다. 피보나치 수열은 유럽에서 인기를 끌었으며, 다른 문화에서도 독특한 형태로 나타납니다.

피보나치 수열은 컴퓨터 프로그램과 알고리즘에도 적용될 수 있습니다. 예를 들어, 피보나치 수열을 계산하는 알고리즘이 있습니다. 이러한 알고리즘은 현재까지 매우 효율적이며, 대규모 계산에 사용됩니다. 피보나치 수열은 수학적 구조와 재귀 관계를 가지고 있기 때문에 프로그래밍 언어에서도 자주 사용됩니다.

요약하면, 피보나치 수열은 수학적 구조와 재귀 관계를 가진 수열로, 다양한 분야에서 사용되고 있습니다. 이 수열은 컴퓨터 프로그램과 알고리즘에도 적용될 수 있으며, 대규모 계산에 사용됩니다. 피보나치 수열은 수학자 레온아르도 피보나치의 이름을 따서 명명되었으며, 그의 업적으로 유명합니다.<|eot_id|>
```

## Training Details

### Training Data

The model is trained on the [MarkrAI/KoCommercial-Dataset](https://huggingface.co/datasets/MarkrAI/KoCommercial-Dataset), which consists of various commercial texts in Korean.

### Training Procedure

Training used LoRA for computational efficiency: about 0.04 billion parameters (0.51% of the total) were trained.
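As a minimal sketch (not the exact training script; the `peft` and `bitsandbytes` wiring here is an assumption), the QLoRA configuration listed under Training Hyperparameters below could be assembled roughly as follows:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base model (bitsandbytes)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    quantization_config=bnb_config,
)

# LoRA adapters on all attention and MLP projection layers
lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # should report roughly 0.5% trainable
```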
#### Training Hyperparameters

```python
################################################################################
# bitsandbytes parameters
################################################################################
load_in_4bit=True
bnb_4bit_compute_dtype=torch.bfloat16
bnb_4bit_quant_type="nf4"
bnb_4bit_use_double_quant=True

################################################################################
# LoRA parameters
################################################################################
task_type="CAUSAL_LM"
target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
r=8
lora_alpha=16
lora_dropout=0.1
bias="none"

################################################################################
# TrainingArguments parameters
################################################################################
num_train_epochs=2
per_device_train_batch_size=4
gradient_accumulation_steps=2
gradient_checkpointing=True
learning_rate=2e-5
lr_scheduler_type="cosine"
warmup_ratio=0.1
optim="paged_adamw_8bit"
weight_decay=0.01

################################################################################
# SFT parameters
################################################################################
max_seq_length=4096
packing=False
```

## Evaluation

### Metrics

- **Ko-HellaSwag:**
- **Ko-MMLU:**
- **Ko-Arc:**
- **Ko-Truthful QA:**
- **Ko-CommonGen V2:**

### Results
| Benchmark       | Waktaverse Llama 3 8B | Llama 3 8B |
|-----------------|-----------------------|------------|
| Ko-HellaSwag    | 0                     | 0          |
| Ko-MMLU         | 0                     | 0          |
| Ko-Arc          | 0                     | 0          |
| Ko-Truthful QA  | 0                     | 0          |
| Ko-CommonGen V2 | 0                     | 0          |
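Benchmarks of this kind are typically run with an evaluation harness such as EleutherAI's lm-evaluation-harness; a minimal sketch using its Python API (the Korean task names below are placeholders, not verified identifiers, and depend on the harness or Korean benchmark fork in use):

```python
import lm_eval

# Sketch only: replace the task names with the identifiers actually
# registered in your harness for the Korean benchmarks listed above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct,dtype=bfloat16",
    tasks=["ko_hellaswag", "ko_mmlu", "ko_arc", "ko_truthfulqa", "ko_commongen_v2"],
    batch_size=8,
)
print(results["results"])
```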
## Technical Specifications

### Compute Infrastructure

#### Hardware

- **GPU:** NVIDIA GeForce RTX 4080 SUPER

#### Software

- **Operating System:** Linux
- **Deep Learning Framework:** Hugging Face Transformers, PyTorch

### Training Details

- **Training time:** 80 hours

## Citation

**Waktaverse-Llama-3**

```
@article{waktaversellama3modelcard,
  title={Waktaverse Llama 3 Model Card},
  author={AI@Waktaverse},
  year={2024},
  url={https://huggingface.co/PathFinderKR/Waktaverse-Llama-3-KO-8B-Instruct}
}
```

**Llama-3**

```
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```

**Ko-Llama3-Luxia-8B**

```
@article{kollama3luxiamodelcard,
  title={Ko Llama 3 Luxia Model Card},
  author={AILabs@Saltlux},
  year={2024},
  url={https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B/blob/main/README.md}
}
```

## Model Card Authors

[PathFinderKR](https://github.com/PathFinderKR)