
Uploaded model

  • Developed by: tomasonjo
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3-8b-Instruct

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

For more information, visit this link.

Example usage:

Install the dependencies. Check the Unsloth documentation for installation instructions for other environments.

%%capture
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps "xformers<0.0.26" trl peft accelerate bitsandbytes
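Before loading the model, it is worth confirming that a CUDA device is visible, since the inference snippet below moves tensors to "cuda". A minimal sanity check (not part of the original card):

import torch

# The inference code below expects a GPU; fail early if none is visible.
assert torch.cuda.is_available(), "No CUDA device found"
print(torch.cuda.get_device_name(0))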

Then you can load the model and run inference:

from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template

# Load the fine-tuned model and tokenizer (max_seq_length and load_in_4bit are assumptions; adjust to your hardware)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "tomasonjo/text2cypher-demo-16bit",
    max_seq_length = 2048,
    load_in_4bit = False,
)

tokenizer = get_chat_template(
    tokenizer,
    chat_template = "llama-3",
    map_eos_token = True,  # map the template's end-of-turn token to EOS
)

FastLanguageModel.for_inference(model) # Enable native 2x faster inference

schema = """Node properties:
- **Question**
  - `favorites`: INTEGER Example: "0"
  - `answered`: BOOLEAN
  - `text`: STRING Example: "### This is: Bug ### Specifications OS: Win10"
  - `link`: STRING Example: "https://stackoverflow.com/questions/62224586/playg"
  - `createdAt`: DATE_TIME Min: 2020-06-05T16:57:19Z, Max: 2020-06-05T21:49:16Z
  - `title`: STRING Example: "Playground is not loading with apollo-server-lambd"
  - `id`: INTEGER Min: 62220505, Max: 62224586
  - `upVotes`: INTEGER Example: "0"
  - `score`: INTEGER Example: "-1"
  - `downVotes`: INTEGER Example: "1"
- **Tag**
  - `name`: STRING Example: "aws-lambda"
- **User**
  - `image`: STRING Example: "https://lh3.googleusercontent.com/-NcFYSuXU0nk/AAA"
  - `link`: STRING Example: "https://stackoverflow.com/users/10251021/alexandre"
  - `id`: INTEGER Min: 751, Max: 13681006
  - `reputation`: INTEGER Min: 1, Max: 420137
  - `display_name`: STRING Example: "Alexandre Le"
Relationship properties:
The relationships:
(:Question)-[:TAGGED]->(:Tag)
(:User)-[:ASKED]->(:Question)"""
question = "Identify the top 5 questions with the most downVotes."

messages = [
    {"role": "system", "content": "Given an input question, convert it to a Cypher query. No pre-amble."},
    {"role": "user", "content": f"""Based on the Neo4j graph schema below, write a Cypher query that would answer the user's question:
{schema}

Question: {question}
Cypher query:"""},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True, # Must add for generation
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(input_ids = inputs, max_new_tokens = 128, use_cache = True)
tokenizer.batch_decode(outputs)
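
model.generate returns the prompt tokens together with the completion, so batch_decode above prints the whole conversation. To pull out only the generated Cypher, you can slice off the prompt first (a small sketch using the inputs tensor defined above):

# Drop the prompt tokens and decode only the newly generated part
generated = outputs[0][inputs.shape[-1]:]
cypher = tokenizer.decode(generated, skip_special_tokens=True)
print(cypher)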