---
license: apache-2.0
language:
  - en
  - zh
base_model:
  - meta-llama/Meta-Llama-3.1-8B-Instruct
datasets:
  - BAAI/IndustryInstruction_Finance-Economics
  - BAAI/IndustryInstruction
---

This model is fine-tuned from meta-llama/Meta-Llama-3.1-8B-Instruct on the BAAI/IndustryInstruction_Finance-Economics dataset. For details on the data, see the BAAI/IndustryInstruction repository.
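
To get a feel for the training data, the dataset can be inspected with the `datasets` library (a minimal sketch; the `train` split name is an assumption, check the dataset card):

```python
from datasets import load_dataset

# Load the finance/economics instruction data used for fine-tuning.
# The "train" split name is an assumption; see the dataset card.
ds = load_dataset("BAAI/IndustryInstruction_Finance-Economics", split="train")
print(ds[0])  # one instruction/response record
```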

## Training params

```
learning_rate=1e-5
lr_scheduler_type=cosine
max_length=2048
warmup_ratio=0.05
batch_size=64
epoch=10
```

The best checkpoint was selected by evaluation loss.
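
A minimal sketch of how these hyperparameters could be expressed with `transformers.TrainingArguments`, including best-checkpoint selection on evaluation loss. The actual training stack is not specified in this card, and `output_dir` and the per-device/global batch split are assumptions:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="finance-llama3_1-8b",   # hypothetical path
    learning_rate=1e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    per_device_train_batch_size=8,      # assumed 8 devices x 8 = global batch 64
    num_train_epochs=10,
    eval_strategy="epoch",              # "evaluation_strategy" on older transformers
    save_strategy="epoch",
    load_best_model_at_end=True,        # keep the checkpoint with the lowest eval loss
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    bf16=True,
)
# max_length=2048 is applied at tokenization time, not via TrainingArguments.
```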

## Evaluation

The following is an evaluation on the FinerBen benchmark metrics. Since the full datasets are large, 500 samples were randomly drawn from each dataset for evaluation.

*(Figure: evaluation results on the sampled benchmark datasets.)*
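
For reference, subsampling 500 examples per evaluation dataset can be done like this (a sketch; the dataset name and seed are placeholders, not necessarily those used for the reported numbers):

```python
from datasets import load_dataset

# Draw a fixed 500-example subset from one benchmark dataset.
bench = load_dataset("some/benchmark-subset", split="test")  # placeholder name
subset = bench.shuffle(seed=42).select(range(500))           # seed is illustrative
print(len(subset))  # 500
```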

## How to use


```python
#!/usr/bin/env python
# -*- coding:utf-8 -*-
# ==================================================================
# [Author]       : xiaofeng
# ==================================================================

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM


llama3_jinja = """{% if messages[0]['role'] == 'system' %}
    {% set offset = 1 %}
{% else %}
    {% set offset = 0 %}
{% endif %}

{{ bos_token }}
{% for message in messages %}
    {% if (message['role'] == 'user') != (loop.index0 % 2 == offset) %}
        {{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}
    {% endif %}

    {{ '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n' + message['content'] | trim + '<|eot_id|>' }}
{% endfor %}

{% if add_generation_prompt %}
    {{ '<|start_header_id|>' + 'assistant' + '<|end_header_id|>\n\n' }}
{% endif %}"""


dtype = torch.bfloat16

model_dir = "MonteXiaofeng/Finance-llama3_1_8B_instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    device_map="cuda",
    torch_dtype=dtype,
)

tokenizer = AutoTokenizer.from_pretrained(model_dir)
tokenizer.chat_template = llama3_jinja # update template

message = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "天气如何"},
]
prompt = tokenizer.apply_chat_template(
    message, tokenize=False, add_generation_prompt=True
)
print(prompt)
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
prompt_length = len(inputs[0])
print(f"prompt_length:{prompt_length}")

generating_args = {
    "do_sample": True,
    "temperature": 1.0,
    "top_p": 0.5,
    "top_k": 15,
    "max_new_tokens": 150,
}


generate_output = model.generate(input_ids=inputs.to(model.device), **generating_args)

response_ids = generate_output[:, prompt_length:]
response = tokenizer.batch_decode(
    response_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(response)
"""
天气如何?我可以为您提供最新的天气信息。请告诉我您所在的具体地点,我将为您查询天气情况。
"""