---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
- int4
- BPLLM
library_name: transformers
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Fine-tuned Llama 3.1 8B PEFT int4 for Food Delivery and E-commerce
This model was trained for the experiments carried out in the research paper "Conversing with business process-aware Large Language Models: the BPLLM framework".
It is a version of the Llama 3.1 8B Instruct model fine-tuned with PEFT and int4 quantization to operate within the context of the Food Delivery and E-commerce process models (which are similar in terms of activities and events) introduced in the article.
Further insights can be found in our paper "[Conversing with business process-aware Large Language Models: the BPLLM framework](https://doi.org/10.21203/rs.3.rs-4125790/v1)".
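Because the checkpoint was produced with PEFT on top of an int4-quantized base model, one way to load it is through the `peft` library together with a 4-bit `BitsAndBytesConfig`. The sketch below is an illustration under that assumption; the exact quantization settings (e.g. NF4, compute dtype) are not specified by this card, and the generic `transformers` snippet in the Usage section works as well.
```python
# Hypothetical 4-bit loading sketch; assumes this repo contains a PEFT
# (LoRA) adapter for meta-llama/Meta-Llama-3.1-8B-Instruct and that the
# peft and bitsandbytes packages are installed.
import torch
from transformers import AutoTokenizer, BitsAndBytesConfig
from peft import AutoPeftModelForCausalLM

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # int4 weights, matching the training setup
    bnb_4bit_quant_type="nf4",              # assumption: NF4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: bf16 compute dtype
)

model = AutoPeftModelForCausalLM.from_pretrained(
    "PATH_TO_THIS_REPO",                    # local path or Hub repo id
    quantization_config=bnb_config,
    device_map="auto",
).eval()
tokenizer = AutoTokenizer.from_pretrained("PATH_TO_THIS_REPO")
```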
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

# Build the chat prompt with the model's chat template.
input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)

# Move inputs to the device the model was placed on by device_map,
# and cap the generation length (adjust as needed).
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```