
Model Card for HAH 2024 v0.11


Model Details

Model Description

HAH 2024 v0.11 aims to assess how an advanced language model fine-tuned for generating insights from diabetes-related healthcare data performs. HAH 2024 v0.11 is intended for research purposes only.

  • Developed by: Dr M As'ad
  • Funded by: Self-funded
  • Model type: Transformer-based language model
  • Language(s) (NLP): English
  • License: Apache-2.0
  • Finetuned from model: Mistral 7B Instruct v0.2
  • Model size: 7.24B parameters (safetensors, F32)

Uses

Direct Use

HAH 2024 v0.11 is designed to assess performance for direct use in a chat interface within the diabetes domain.

Downstream Use

The model can also be fine-tuned for specialized tasks, such as identifying subtypes or subgroups within the diabetes field; a fine-tuning sketch follows.
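
A minimal fine-tuning sketch using LoRA adapters via the peft library, assuming a hypothetical JSONL dataset (diabetes_subtypes.jsonl) with a single "text" column of [INST]-formatted examples; the dataset path and hyperparameters are illustrative assumptions, not part of this model card:

  # Hedged sketch: LoRA fine-tuning on a hypothetical diabetes-subtype dataset.
  from datasets import load_dataset
  from peft import LoraConfig, get_peft_model
  from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                            TrainingArguments, DataCollatorForLanguageModeling)
  
  base = "drmasad/HAH_2024_v0.11"
  tokenizer = AutoTokenizer.from_pretrained(base)
  tokenizer.pad_token = tokenizer.eos_token  # Mistral has no pad token by default
  model = AutoModelForCausalLM.from_pretrained(base)
  
  # Attach low-rank adapters to the attention projections only
  lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
  model = get_peft_model(model, lora)
  
  # Hypothetical dataset: one "text" column of [INST]-formatted examples
  data = load_dataset("json", data_files="diabetes_subtypes.jsonl")["train"]
  data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                  remove_columns=data.column_names)
  
  trainer = Trainer(
      model=model,
      args=TrainingArguments(output_dir="hah-subtypes-lora",
                             per_device_train_batch_size=1,
                             gradient_accumulation_steps=8,
                             num_train_epochs=1, learning_rate=2e-4),
      train_dataset=data,
      data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
  )
  trainer.train()

LoRA keeps the 7.24B base weights frozen and trains only small adapter matrices, which makes specialization feasible on a single GPU.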

Out-of-Scope Use

This model is not recommended for non-English text or for contexts outside of healthcare. It is a research project and is not intended for deployment in any real chat interface.

Bias, Risks, and Limitations

The model may inherently carry biases from the training data related to diabetes literature, potentially reflecting the geographic and demographic focus of the sources.

Recommendations

Users should verify model-generated information against current medical guidelines and consider manual review for sensitive applications.
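
One way to operationalize this recommendation is a simple human-in-the-loop gate; the sketch below is illustrative only (the pattern list and function names are assumptions, not part of the model), flagging dosage- or treatment-like claims for manual review before display:

  # Illustrative sketch: hold answers with dosage/treatment claims for review.
  import re
  
  REVIEW_PATTERNS = re.compile(
      r"\b\d+(\.\d+)?\s?(mg|mcg|units?|iu)\b|\b(dose|dosage|insulin)\b",
      re.IGNORECASE,
  )
  
  def needs_review(answer: str) -> bool:
      """Return True if the answer appears to make dosage or treatment claims."""
      return bool(REVIEW_PATTERNS.search(answer))
  
  def release(answer: str) -> str:
      if needs_review(answer):
          # In a real workflow this would route to a clinician review queue
          return "[HELD FOR MANUAL REVIEW] " + answer
      return answer + "\n(Not medical advice; verify with current guidelines.)"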

How to Get Started with the Model

Use the code below to get started with the model:

  from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
  
  # Load the fine-tuned model and its tokenizer
  model = AutoModelForCausalLM.from_pretrained("drmasad/HAH_2024_v0.11")
  tokenizer = AutoTokenizer.from_pretrained("drmasad/HAH_2024_v0.11")
  
  # Instruction and user prompt
  instructions = (
      "You are an expert endocrinologist. Answer the query in accurate, "
      "informative language any patient will understand."
  )
  user_prompt = "What is diabetic retinopathy?"
  
  # Text-generation pipeline around the loaded model
  pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer,
                  max_new_tokens=200)
  
  # Mistral Instruct prompt format: the full request sits inside [INST] ... [/INST]
  prompt = f"<s>[INST] {instructions}\n{user_prompt} [/INST]"
  result = pipe(prompt)
  
  # The pipeline echoes the prompt, so keep only the text after the final [/INST]
  generated_text = result[0]["generated_text"]
  answer = generated_text.split("[/INST]")[-1].strip()
  print(answer)
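
If the fine-tune preserved the base model's chat template, recent transformers releases (4.34+) can build the prompt from the tokenizer instead of hand-writing the [INST] tags; a minimal sketch reusing the objects above (Mistral Instruct v0.2 has no system role, so the instruction goes in the user turn):

  # Alternative: let the tokenizer apply the Mistral Instruct chat template
  messages = [{"role": "user", "content": f"{instructions}\n{user_prompt}"}]
  prompt = tokenizer.apply_chat_template(messages, tokenize=False,
                                         add_generation_prompt=True)
  result = pipe(prompt, return_full_text=False)
  print(result[0]["generated_text"].strip())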

Evaluation results

  • Placeholder metric for development on a custom dataset (3,000 review articles on diabetes), self-reported: 0.000