---
license: apache-2.0
language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
- vision
inference:
  parameters:
    temperature: 0.3
widget:
  - messages:
      - role: user
        content: <|image_1|>Can you describe what you see in the image?
---
## Model Summary

We introduce Cephalo, a series of multimodal materials-science-focused vision large language models (V-LLMs) designed to integrate visual and linguistic data for advanced understanding and interaction.

A novel aspect of Cephalo's development is its dataset generation method. The extraction process employs advanced algorithms to accurately detect and separate images and their corresponding textual descriptions from complex PDF documents, producing well-reasoned image-text pairs. These pairs are then refined and validated through LLM-based natural language processing, ensuring high-quality, contextually relevant data for training.
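
For illustration only, the following is a minimal sketch of the image/caption pairing idea, assuming PyMuPDF (`fitz`) and a naive "Figure" heuristic. It is not the Cephalo pipeline, which additionally uses LLM-based refinement and validation:

```python
# Illustrative sketch only -- NOT the Cephalo pipeline. It pairs each embedded
# image on a page with a caption-like line, using PyMuPDF (pip install pymupdf).
import fitz  # PyMuPDF

doc = fitz.open("paper.pdf")  # any scientific PDF
pairs = []
for page in doc:
    # Naive caption heuristic: lines beginning with "Figure"; a real pipeline
    # would use layout analysis plus LLM-based refinement and validation.
    captions = [ln.strip() for ln in page.get_text().splitlines()
                if ln.strip().startswith("Figure")]
    images = page.get_images(full=True)
    for (xref, *_), caption in zip(images, captions):
        pix = fitz.Pixmap(doc, xref)
        if pix.colorspace and pix.colorspace.n > 3:  # convert CMYK etc. to RGB
            pix = fitz.Pixmap(fitz.csRGB, pix)
        pairs.append({"image": pix.tobytes("png"), "caption": caption})

print(f"extracted {len(pairs)} image-caption pairs")
```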

Cephalo can interpret complex visual scenes, generate contextually accurate language descriptions, and answer queries.

The model is developed to process diverse inputs, including images and text, facilitating a broad range of applications such as image captioning, visual question answering, and multimodal content generation.

The architecture combines a vision encoder model and an autoregressive transformer to enable complex natural language understanding of multimodal inputs.

Cephalo provides a robust framework for multimodal interaction and understanding, including the development of complex generative pipelines to create 2D and 3D renderings of material microstructures as input for additive manufacturing methods.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/FsdGboq7FinNHOJnwYfWu.png)

This version of Cephalo, lamm-mit/Cephalo-Phi-3-vision-128k, is based on the Phi-3-Vision-128K-Instruct model. For further details, see https://huggingface.co/microsoft/Phi-3-vision-128k-instruct.

### Chat Format

Given the nature of the training data, the Cephalo-Phi-3-vision-128k model is best suited for a single image input with prompts in the chat format described below.
You can provide a prompt with a single image using this generic template:
```markdown
<|user|>\n<|image_1|>\n{prompt}<|end|>\n<|assistant|>\n
```

where the model generates the text after `<|assistant|>`. For multi-turn conversations, the prompt should be formatted as follows:

```markdown
<|user|>\n<|image_1|>\n{prompt_1}<|end|>\n<|assistant|>\n{response_1}<|end|>\n<|user|>\n{prompt_2}<|end|>\n<|assistant|>\n
```
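
In practice, this format can be produced from a message list instead of being written by hand. The following is a minimal sketch using the tokenizer's `apply_chat_template` (also used in the inference code below); the assistant reply and follow-up question are hypothetical placeholders:

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("lamm-mit/Cephalo-Phi-3-vision-128k", trust_remote_code=True)

# Conversation history; the assistant content stands in for a previous model reply
messages = [
    {"role": "user", "content": "<|image_1|>\nWhat is shown in this image?"},
    {"role": "assistant", "content": "Ants collectively bridging a gap."},  # hypothetical reply
    {"role": "user", "content": "What design principles could this inspire?"},
]

# Interleaves the <|user|>/<|assistant|>/<|end|> markers exactly as in the template above
prompt = processor.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```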

### Sample inference code

This code snippet shows how to quickly get started on a GPU:

```python
from PIL import Image
import requests
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "lamm-mit/Cephalo-Phi-3-vision-128k"

# trust_remote_code is needed for the custom Phi-3-vision model and processor code
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda", trust_remote_code=True, torch_dtype="auto")
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Single-turn conversation; <|image_1|> marks where the image is inserted
messages = [
    {"role": "user", "content": "<|image_1|>\nWhat is shown in this image?"},
]

url = "https://d2r55xnwy6nx47.cloudfront.net/uploads/2018/02/Ants_Lede1300.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Render the message list into the chat format described above
prompt = processor.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = processor(prompt, [image], return_tensors="pt").to("cuda:0")

generation_args = {
    "max_new_tokens": 500,
    "temperature": 0.0,
    "do_sample": False,  # greedy decoding; temperature is ignored when sampling is off
}

generate_ids = model.generate(**inputs, eos_token_id=processor.tokenizer.eos_token_id, **generation_args)

# Remove the input tokens so only the newly generated answer is decoded
generate_ids = generate_ids[:, inputs['input_ids'].shape[1]:]
response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]

print(response)
```
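
Note that with `do_sample=False` the model decodes greedily and the output is deterministic for a given input. For more varied responses, set `do_sample=True` together with a moderate temperature (the inference widget above uses 0.3).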