---
language:
- en
- fr
license: apache-2.0
library_name: Tevatron
tags:
- vidore
datasets:
- Tevatron/docmatix-ir
- HuggingFaceM4/Docmatix
- Tevatron/msmarco-passage-aug
- vidore/colpali_train_set
- Tevatron/wiki-ss-nq
base_model:
- Qwen/Qwen2-VL-2B-Instruct
---


# DSE-QWen2-2b-MRL-V1


DSE-QWen2-2b-MRL-V1 is a bi-encoder model designed to encode document screenshots into dense vectors for document retrieval.
The Document Screenshot Embedding ([DSE](https://arxiv.org/abs/2406.11251)) approach captures documents in their original visual format, preserving all information such as text, images, and layout, and thus avoids tedious parsing and potential information loss.
DSE aims to provide a generalizable embedding model for retrieving text, PDF documents, webpages, and slides.

For example, DSE-QWen2-2b-MRL-V1 achieves **85.8** nDCG@5 on the [ViDoRE](https://huggingface.co/spaces/vidore/vidore-leaderboard) leaderboard.



## Note:
The following steps need to be done before running the code:
1. Clone the latest transformers: `git clone https://github.com/huggingface/transformers.git`
2. Fix a bug in `transformers/models/qwen2_vl/modeling_qwen2_vl.py` around line 1774:
```python
position_ids = position_ids.unsqueeze(0).expand(3, -1, -1)
# make sure the following three lines are inside the 'else' statement
if cache_position[0] != 0:
    pixel_values = None
    pixel_values_videos = None
```
3. Install transformers from source: `pip install -e .`
4. Install the vision helpers: `pip install qwen-vl-utils` (a quick sanity check of the setup is sketched below)
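
To confirm the patched source install and `qwen-vl-utils` are usable before loading the model, a minimal import check might look like this (a sketch only; the exact version string will differ per clone):

```python
import transformers
from transformers import Qwen2VLForConditionalGeneration  # fails if the Qwen2-VL classes are missing
from qwen_vl_utils import process_vision_info

# A source install usually reports a ".dev0" version suffix.
print(transformers.__version__)
```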


## How to Use the Model

To enable a better effectiveness/efficiency trade-off, this checkpoint is trained to support:

1. Flexible representation dimension.
2. Flexible input image size.


### Load the Model and Processor

```python
import torch
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

min_pixels = 1*28*28
max_pixels = 2560*28*28

processor = AutoProcessor.from_pretrained("MrLight/dse-qwen2-2b-mrl-v1", min_pixels=min_pixels, max_pixels=max_pixels)
model = Qwen2VLForConditionalGeneration.from_pretrained('MrLight/dse-qwen2-2b-mrl-v1', attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16).to('cuda:0').eval()
processor.tokenizer.padding_side = "left"
model.padding_side = "left"

def get_embedding(last_hidden_state: torch.Tensor, dimension: int) -> torch.Tensor:
    # Take the hidden state of the last token as the sequence representation,
    # keep its first `dimension` entries, and L2-normalize the result.
    reps = last_hidden_state[:, -1]
    reps = torch.nn.functional.normalize(reps[:, :dimension], p=2, dim=-1)
    return reps
```
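
The `dimension` argument is how the flexible representation size is exposed: the leading `dimension` entries of the last-token hidden state are kept and re-normalized. A minimal sketch on a dummy tensor (shapes are illustrative only, not tied to real model outputs):

```python
# Dummy "last hidden state": batch of 2 sequences, 5 tokens, 1536-dim hidden size.
dummy_hidden = torch.randn(2, 5, 1536)

full_reps = get_embedding(dummy_hidden, 1536)   # full-size embeddings
small_reps = get_embedding(dummy_hidden, 512)   # truncated, cheaper to store and search

print(full_reps.shape, small_reps.shape)        # torch.Size([2, 1536]) torch.Size([2, 512])
print(small_reps.norm(dim=-1))                  # ~1.0 after re-normalization
```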

### Encode Text Query

```python
from PIL import Image
queries = ["Where can we see Llama?", "What is LLaMA model?"]
query_messages = []
for query in queries:
    message = [
        {
            'role': 'user',
            'content': [
                {'type': 'image', 'image': Image.new('RGB', (28, 28)), 'resized_height':1 , 'resized_width':1}, # a 1x1 dummy image keeps queries on the same processing path as documents
                {'type': 'text', 'text': f'Query: {query}'},
            ]
        }
    ]
    query_messages.append(message)
query_texts = [
    processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) + "<|endoftext|>"
    for msg in query_messages
]
query_image_inputs, query_video_inputs = process_vision_info(query_messages)
query_inputs = processor(text=query_texts, images=query_image_inputs, videos=query_video_inputs, padding='longest', return_tensors='pt').to('cuda:0')
query_inputs = model.prepare_inputs_for_generation(**query_inputs, use_cache=False)
with torch.no_grad():
  output = model(**query_inputs, return_dict=True, output_hidden_states=True)
query_embeddings = get_embedding(output.hidden_states[-1], 1536) # adjust dimensionality for efficiency trade-off, e.g. 512
```

### Encode Document Screenshot

```python
import requests
from io import BytesIO

# URLs of the images
url1 = "https://huggingface.co/Tevatron/dse-phi3-docmatix-v2/resolve/main/animal-llama.png"
url2 = "https://huggingface.co/Tevatron/dse-phi3-docmatix-v2/resolve/main/meta-llama.png"

# Download and open images
response1 = requests.get(url1)
response2 = requests.get(url2)

doc_image1 = Image.open(BytesIO(response1.content))
doc_image2 = Image.open(BytesIO(response2.content))

doc_images = [doc_image1, doc_image2]
doc_messages = []
for doc in doc_images:
    message = [
        {
            'role': 'user',
            'content': [
                {'type': 'image', 'image': doc},  # optionally set 'resized_height': 680, 'resized_width': 680 to trade quality for speed
                {'type': 'text', 'text': 'What is shown in this image?'}
            ]
        }
    ]
    doc_messages.append(message)
doc_texts = [
    processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) + "<|endoftext|>"
    for msg in doc_messages
]
doc_image_inputs, doc_video_inputs = process_vision_info(doc_messages)
doc_inputs = processor(text=doc_texts, images=doc_image_inputs, videos=doc_video_inputs, padding='longest', return_tensors='pt').to('cuda:0')
doc_inputs = model.prepare_inputs_for_generation(**doc_inputs, use_cache=False)
with torch.no_grad():
    output = model(**doc_inputs, return_dict=True, output_hidden_states=True)
doc_embeddings = get_embedding(output.hidden_states[-1], 1536) # adjust dimensionality for efficiency trade-off e.g. 512

```
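
The flexible input image size can be exercised by downscaling screenshots before encoding, either via `resized_height`/`resized_width` in the image message (hinted at in the comment above) or by lowering `max_pixels` on the processor. A hedged sketch of the message-level variant, where 680x680 is only an example value rather than a tuned setting:

```python
# Re-encode the same screenshots at a reduced resolution for a cheaper index.
small_doc_messages = [
    [{
        'role': 'user',
        'content': [
            {'type': 'image', 'image': doc, 'resized_height': 680, 'resized_width': 680},
            {'type': 'text', 'text': 'What is shown in this image?'},
        ],
    }]
    for doc in doc_images
]
small_doc_texts = [
    processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) + "<|endoftext|>"
    for msg in small_doc_messages
]
small_image_inputs, small_video_inputs = process_vision_info(small_doc_messages)
small_inputs = processor(text=small_doc_texts, images=small_image_inputs, videos=small_video_inputs,
                         padding='longest', return_tensors='pt').to('cuda:0')
small_inputs = model.prepare_inputs_for_generation(**small_inputs, use_cache=False)
with torch.no_grad():
    small_output = model(**small_inputs, return_dict=True, output_hidden_states=True)
small_doc_embeddings = get_embedding(small_output.hidden_states[-1], 1536)
```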

### Compute Similarity

```python
from torch.nn.functional import cosine_similarity
num_queries = query_embeddings.size(0)
num_passages = doc_embeddings.size(0)

for i in range(num_queries):
    query_embedding = query_embeddings[i].unsqueeze(0)
    similarities = cosine_similarity(query_embedding, doc_embeddings)
    print(f"Similarities for Query {i+1}: {similarities.cpu().float().numpy()}")
```
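
Since `get_embedding` L2-normalizes its output, the per-query loop above is equivalent to a single matrix product over all query/document pairs (a small sketch using the tensors already computed):

```python
# (num_queries, dim) @ (dim, num_passages) -> full cosine-similarity matrix in one call
scores = query_embeddings @ doc_embeddings.T
print(scores.cpu().float().numpy())
```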

### Encode Document Text
This DSE checkpoint is warmed up with `Tevatron/msmarco-passage-aug`, so the model can also effectively encode documents given as plain text.
```python
passage_prompts = [
  "The llama (/ˈlɑːmə/; Spanish pronunciation: [ˈʎama] or [ˈʝama]) (Lama glama) is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the pre-Columbian era.",
  "Llama (acronym for Large Language Model Meta AI, and formerly stylized as LLaMA) is a family of autoregressive large language models (LLMs) released by Meta AI starting in February 2023.[2][3] The latest version is Llama 3.1, released in July 2024.[4]"
]
doc_messages = []
for doc in passage_prompts:
    message = [
        {
            'role': 'user',
            'content': [
                {'type': 'image', 'image': Image.new('RGB', (28, 28)), 'resized_height':1 , 'resized_width':1}, # need a dummy image here for an easier process.
                {'type': 'text', 'text': f'Document: {doc}'}
            ]
        }
    ]
    doc_messages.append(message)
doc_texts = [
    processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) + "<|endoftext|>"
    for msg in doc_messages
]
doc_image_inputs, doc_video_inputs = process_vision_info(doc_messages)
doc_inputs = processor(text=doc_texts, images=doc_image_inputs, videos=doc_video_inputs, padding='longest', return_tensors='pt').to('cuda:0')
doc_inputs = model.prepare_inputs_for_generation(**doc_inputs, use_cache=False)
with torch.no_grad():
    output = model(**doc_inputs, return_dict=True, output_hidden_states=True)
doc_embeddings = get_embedding(output.hidden_states[-1], 1536) # adjust dimensionality for efficiency trade-off e.g. 512

for i in range(num_queries):
    query_embedding = query_embeddings[i].unsqueeze(0)
    similarities = cosine_similarity(query_embedding, doc_embeddings)
    print(f"Similarities for Query {i+1}: {similarities.cpu().float().numpy()}")
```
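
Whether the documents were encoded from screenshots or from text, retrieval itself is just a ranked dot product over the normalized embeddings. A minimal in-memory sketch (at larger scale an approximate-nearest-neighbor index would take the place of this dense matrix):

```python
# Rank every document for every query and keep the top-k.
scores = query_embeddings @ doc_embeddings.T                 # (num_queries, num_docs)
topk_scores, topk_indices = scores.topk(k=min(2, scores.size(1)), dim=-1)
for i, (idx, s) in enumerate(zip(topk_indices, topk_scores)):
    print(f"Query {i+1}: top docs {idx.tolist()} with scores {s.cpu().float().tolist()}")
```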

### Citation
If you find this checkpoint helpful, please consider citing Qwen2, Docmatix, ViDoRe, and our DSE work.