---
license: apache-2.0
language:
- zh
pipeline_tag: text-generation
---

# 📈 CFGPT: Chinese Financial Assistant with Large Language Model (CFGPT1-sft-7B-LoRA)

## Introduction

We introduce **CFGPT**, an open-source language model for Chinese finance. It is built by first further pretraining a general LLM on collected and cleaned Chinese financial text data (CFData-pt), which combines financial domain-specific data (announcements, finance articles, finance exams, finance news, finance research papers) with general data (Wikipedia), and then fine-tuning it on knowledge-intensive instruction-tuning data (CFData-sft). For a preliminary evaluation we use CFBenchmark-Basic, on which CFGPT outperforms several baseline models of similar parameter count on both objective and subjective tasks.

In this repository, we share the supervised fine-tuned LoRA model:

- [Supervised Finetuned Model (LoRA)](https://huggingface.co/TongjiFinLab/CFGPT1-sft-7B-LoRA): adapter model weights trained with PEFT (LoRA) on top of our further-pretrained base model.

## How to Use

**1. Prepare the code and the environment**

Clone the [CFGPT](https://github.com/TongjiFinLab/CFGPT) repository, create a Python environment, and activate it with the following commands:

```bash
git clone https://github.com/TongjiFinLab/CFGPT.git
cd CFGPT
conda create -n env_name python=3.10
source activate env_name
pip install -r requirements.txt
```

**2. Use CFGPT1-sft-7B-LoRA**

Load the further-pretrained base model, attach the LoRA adapter, and run greedy decoding on a sentiment-analysis prompt:

```python
import torch
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_model = 'TongjiFinLab/CFGPT1-pt-7B'
lora_weights = 'TongjiFinLab/CFGPT1-sft-7B-LoRA'
device_map = 'cuda:0'

# Load the tokenizer and the further-pretrained base model
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
model = AutoModel.from_pretrained(
    base_model,
    trust_remote_code=True,
    device_map=device_map,
    torch_dtype=torch.bfloat16
)

# Attach the LoRA adapter weights to the base model
model = PeftModel.from_pretrained(
    model,
    lora_weights,
    device_map=device_map,
)
model = model.eval()

# Sentiment-analysis prompt; the model continues after "回答:" ("Answer:")
inputs = tokenizer("""你是一名金融从业者,请对这篇新闻进行情感分析。请从(中性、积极、消极)中选取答案。新闻内容:挖贝快讯:特步国际发布2023年第二季度中国内地业务营运状况,披露截至2023年6月30日止3个月零售销售实现高双位数同比增长(包括线上线下渠道),零售折扣水平约七五折。同时,2022年7月MSCI首次予以特步ESG评级,一年后评级表现即迎来提升。明晟MSCI上调特步ESG评级,由“BB”升至“BBB”。\n回答:""", return_tensors='pt').to(device_map)
pred = model.generate(**inputs, max_new_tokens=64, do_sample=False, repetition_penalty=1.0)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True).split('回答:')[1])
```
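If you plan to serve the model without the PEFT wrapper at inference time, the adapter can optionally be folded into the base weights first. The sketch below is an assumption on our part rather than part of the CFGPT instructions; it relies on PEFT's `merge_and_unload` method, and the output directory `./cfgpt1-sft-7b-merged` is a placeholder path:

```python
# Optional: merge the LoRA weights into the base model for standalone deployment.
# `merge_and_unload` returns the underlying transformers model with the adapter
# folded in; "./cfgpt1-sft-7b-merged" is a hypothetical output path.
merged_model = model.merge_and_unload()
merged_model.save_pretrained('./cfgpt1-sft-7b-merged')
tokenizer.save_pretrained('./cfgpt1-sft-7b-merged')
```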
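For repeated queries it may be convenient to wrap the generation pattern above in a small function. `ask_cfgpt` below is a hypothetical helper of ours, not part of the CFGPT repository; it reuses the `tokenizer`, `model`, and `device_map` defined above and assumes prompts end with the "回答:" ("Answer:") cue shown in the example:

```python
# Hypothetical convenience wrapper (not part of the CFGPT codebase) around
# the generation pattern above; reuses `tokenizer`, `model`, and `device_map`.
def ask_cfgpt(prompt: str, max_new_tokens: int = 64) -> str:
    # Append the "回答:" ("Answer:") cue used in the example prompt
    inputs = tokenizer(prompt + '\n回答:', return_tensors='pt').to(device_map)
    pred = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,
        repetition_penalty=1.0,
    )
    text = tokenizer.decode(pred.cpu()[0], skip_special_tokens=True)
    # Keep only the model's continuation after the last "回答:"
    return text.split('回答:')[-1].strip()

# Example call; the news body ("……") is a placeholder to fill in
print(ask_cfgpt('你是一名金融从业者,请对这篇新闻进行情感分析。请从(中性、积极、消极)中选取答案。新闻内容:……'))
```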