peiyuanl committed
Commit 54ce3a8
Parent: b9e9e96

initial upload

MM-Mind2Web-tilde_test_snapshot_20dist-00001-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d18f147d29bb9248cd12ead814188d83b30315bed9fcf68cd3dbeee350353d70
+ size 547838671
MM-Mind2Web-tilde_test_snapshot_20dist-00001-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ac1db2d1647381cde65c704f8b384aa34c23573cffe537577dfbbb4f4e12cc1
+ size 749042780
MM-Mind2Web-tilde_test_snapshot_20dist-00002-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1107472f718accf83853cdf52f1b0f505804c5db4732f922549bdfdbcc76ff86
+ size 587216472
MM-Mind2Web-tilde_test_snapshot_20dist-00003-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:14e5efcc18b8020f62a0180736b8db0e1d7ad331fd2f15dd862af6d2b36035bb
+ size 599799394
MM-Mind2Web-tilde_test_snapshot_20dist-00004-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dd38aad6464040447a6a421778e82a70abc95f4d3c479437b4339846f2978ae8
+ size 636066853
MM-Mind2Web-tilde_test_snapshot_20dist-00005-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:370f434f39a55320922d8a011e3a32773acb1d7dae21570eb507150da236299f
+ size 563885046
MM-Mind2Web-tilde_test_snapshot_20dist-00006-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cfb857974c672e107ec05de3c8c2162bf8fd191dd42cc8556cbe610e9d4e827a
+ size 556931729
MM-Mind2Web-tilde_test_snapshot_20dist-00007-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7cf877bc176140135219cf1f411a8316d0501c3bdb695dfee70f32e9f4f5aedc
+ size 517099672
MM-Mind2Web-tilde_test_snapshot_20dist-00008-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b73c4888073648bd02f0be5b9752e75459d9d85f4ae1c94ecac76f1b3fb3218f
+ size 581492126
MM-Mind2Web-tilde_test_snapshot_20dist-00009-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20f6bfa62a012ec746c8d20339c7041412a77e619d610d69a47de9306cfe2eb7
+ size 558162109
MM-Mind2Web-tilde_test_snapshot_20dist-00010-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4b7833f41d3b98a035ced4a8f7adfa7c31464c79f14622735ab8617f6810cc9c
+ size 630883761
MM-Mind2Web-tilde_test_snapshot_20dist-00011-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:931a7ca04a1e2e2b4af52572c09ac860093f39f2098be3a7b559d014d3bb0ea5
+ size 547692746
README.md ADDED
@@ -0,0 +1,114 @@
---
license: openrail
task_categories:
- text-generation
- multiple-choice
language:
- en
tags:
- web agent
- agent
pretty_name: MultiModal-Mind2Web~ (test split, snapshot with seed 42, 20 distractors)
size_categories:
- 1K<n<10K
---
## *MultiModal-Mind2Web~* (MM-Mind2Web~)

<p align="center">
<img src="https://r1-assets.transactional.pub/media/Rabbit_Icon_W_1x1.jpg" width="125">
</p>
<p align="center">
<span>rabbit inc.</span><br/>
<span>[Leaderboard & Blogpost to be released]</span><br/>
<span>Configuration: test split, snapshot with seed 42, 20 distractors</span>
</p>

[Multimodal-Mind2Web](<https://huggingface.co/datasets/osunlp/Multimodal-Mind2Web>) is a dataset proposed by [Zheng et al.](<https://arxiv.org/abs/2401.01614>). It is designed for developing and evaluating generalist web agents and contains action trajectories of humans performing tasks on real websites.

We have simplified the raw dumps from both Multimodal-Mind2Web and Mind2Web into sequences of observation-action pairs, and adapted the prompting and DOM-encoding techniques from [SeeAct](<https://arxiv.org/abs/2401.01614>). This lets us reformulate action generation, localization (terminology used in the large action model, or LAM) / element grounding, and action reasoning (also LAM terminology) / action grounding as straightforward text-generation and multiple-choice problems, which makes the dataset viable as a generic evaluation for a vision-language model (VLM). The dataset includes prompts (`prompt_0`, `prompt_1`) in a chat format, which makes it easy to evaluate a VLM directly and lowers the implementation barrier common in evaluation frameworks for computer-using agents.

We are currently evaluating state-of-the-art models on the dataset and are gradually providing access to a more comprehensive Gym-compatible evaluation environment. This environment will allow both offline and online evaluation of agents, offering more structural and fundamental improvements over existing benchmarks like MultiModal-Mind2Web. We will share our findings and release the full leaderboard in a blog post on <https://engineering.rabbit.tech/> soon.

### Dataset Structure

* `task_id` (str): unique ID for each task, equivalent to `annotation_id` in MultiModal-Mind2Web.
* `split` (str): dataset split, one of `test_website`, `test_task`, or `test_domain`; equivalent to the split in MultiModal-Mind2Web.
* `step` (int64): zero-based index of this action within the trajectory in which it was recorded. Equivalent to `target_action_index` in MultiModal-Mind2Web.
* `task_description` (str): description of the task representing the user intent, equivalent to `confirmed_task` in MultiModal-Mind2Web.
* `prompt_0` (str): prompt used to generate the action description. Contains the image input.
* `prompt_1` (str): prompt used to perform action and element grounding, in conjunction with `prompt_0` and the output of a previous VLM invocation.
* `raw_html` (str): raw HTML of the page before the action is performed, consistent with the raw Mind2Web dump.
* `cleaned_html` (str): sanitized HTML of the page before the action is performed, similar to `cleaned_html` in MultiModal-Mind2Web.
* `candidates` (sequence[str]): sanitized HTML representations of the sampled candidate (salient) DOM elements in this snapshot. One element belongs to `pos_candidates` and the rest belong to `neg_candidates` in MultiModal-Mind2Web.
* `target_elements` (sequence[str]): sanitized HTML representations of the viable DOM elements on which the action is performed. All elements can be found in `pos_candidates` in MultiModal-Mind2Web.
* `target_op` (str): the operation to perform, one of `CLICK`, `TYPE`, or `SELECT`. Equivalent to `operation.op` in MultiModal-Mind2Web.
* `target_op_value` (str): the argument supplied to the operation. May be empty; equivalent to `operation.value` in MultiModal-Mind2Web.
* `website` (str): website name, equivalent to `website` in MultiModal-Mind2Web.
* `domain` (str): website domain, equivalent to `domain` in MultiModal-Mind2Web.
* `subdomain` (str): website subdomain, equivalent to `subdomain` in MultiModal-Mind2Web.
* `is_valid` (str): whether this row is valid for evaluation. Rows with `is_valid = False` must be excluded when calculating average step-wise performance as well as task- and trajectory-level performance (see the loading sketch after this list). An invalid row either has an empty screenshot or has no positive element in the sanitized HTML.
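
As a concrete starting point, here is a minimal loading sketch using pandas; the file name is one shard of this configuration, and the string comparison on `is_valid` is only a defensive assumption about how the flag is stored.

```python
import pandas as pd

# Load one shard of this snapshot (any of the parquet files in the repo works).
df = pd.read_parquet("MM-Mind2Web-tilde_test_snapshot_20dist-00001-of-00011.parquet")

# Rows flagged as invalid (empty screenshot or no positive element in the
# sanitized HTML) must be excluded from step-, task- and trajectory-level metrics.
# `is_valid` is compared as a string in case it is stored as text rather than bool.
valid = df[df["is_valid"].astype(str) == "True"]

# Regroup steps into per-task trajectories, ordered by step index.
trajectories = {
    task_id: group.sort_values("step")
    for task_id, group in valid.groupby("task_id")
}

print(f"{len(valid)} valid steps across {len(trajectories)} tasks")
```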

### Improvements from MultiModal-Mind2Web

1. For all test splits, `raw_html` is not available in the original Multimodal-Mind2Web dataset uploaded to HuggingFace: as shown in [1](<https://huggingface.co/datasets/osunlp/Multimodal-Mind2Web/viewer/default/test_website>), [2](<https://huggingface.co/datasets/osunlp/Multimodal-Mind2Web/viewer/default/test_task>), and [3](<https://huggingface.co/datasets/osunlp/Multimodal-Mind2Web/viewer/default/test_domain>), the values in that column are identical to `cleaned_html`. We re-associated each action with the raw HTML from the original Mind2Web dump to overcome this.
2. For all test splits, 11 rows have no screenshot in the original Multimodal-Mind2Web dataset uploaded to HuggingFace. Any agent that uses screenshots in its action-generation routine will fail on these rows, which affects both step-level and task-level metrics. We have labeled these rows with `is_valid = False` to signal this to model evaluators while keeping the action trajectories complete.
3. For all test splits, 761 rows have no ground-truth element in `cleaned_html` in the original Multimodal-Mind2Web dataset uploaded to HuggingFace. Any agent will fail element grounding on these rows, which affects both step-level and task-level metrics. We have labeled these rows with `is_valid = False` to signal this to model evaluators while keeping the action trajectories complete; a sketch of metric aggregation that respects this flag is given after this list.
4. We have also simplified the sanitized representation of DOM elements, such as shortening `backend_node_id` to `bnid` and preserving more structure in the candidate tree representation. We will explain our implementation in more detail in the blog post and provide a detailed example comparing MultiModal-Mind2Web's representation with ours.
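
Because invalid rows remain in the files to keep trajectories complete, evaluators need to filter them out during aggregation. Below is a minimal sketch of step- and task-level aggregation that respects `is_valid`; the `step_correct` column is a hypothetical per-step correctness flag produced by your own evaluator, and the all-steps-correct notion of task success is an assumption rather than an official scoring rule.

```python
import pandas as pd

def aggregate_metrics(df: pd.DataFrame) -> dict:
    """df holds one row per step with the dataset columns plus a hypothetical
    boolean `step_correct` flag produced by your evaluator."""
    # Drop rows marked invalid (missing screenshot or no positive element).
    valid = df[df["is_valid"].astype(str) == "True"]

    # Average step-wise performance over valid rows only.
    step_accuracy = valid["step_correct"].mean()

    # Task-level success here means every valid step of the trajectory is
    # correct; adjust if you use a different task-level criterion.
    task_success_rate = valid.groupby("task_id")["step_correct"].all().mean()

    return {
        "step_accuracy": float(step_accuracy),
        "task_success_rate": float(task_success_rate),
    }
```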

### Assumptions and Problem Definition

A common subroutine of web agents ([MindAct](https://arxiv.org/abs/2306.06070), SeeAct, LAM) is a retriever that identifies salient DOM elements relevant to the action. This localization/element grounding can be reframed as a multiple-choice/re-ranking problem in which the VLM must choose an applicable candidate for the action. Since this subroutine is not a universal component of a computer-using agent and is beyond the scope of evaluating a generic VLM's agent-related capabilities, *MultiModal-Mind2Web~* assumes the existence of a strong ranker.

Given a distractor parameter k (in this case, 20), we sample k candidates from the negative pool (provided by the heuristic in MultiModal-Mind2Web) and randomly select a ground-truth element from the positive pool to construct the scrambled list of candidates available to the VLM. This simulates the existence of a ranker with nonzero precision at k+1 (P@k+1 > 0). Randomness is controlled through seeding so that the same elements are always selected and appear in the same positions in the scrambled list; all snapshot datasets we release are seeded with 42. The construction is sketched below.
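
For illustration only, the construction described above can be sketched roughly as follows; this is a re-statement of the procedure under the stated seed, not the exact code used to generate the snapshot, and `pos_candidates` / `neg_candidates` refer to the pools inherited from MultiModal-Mind2Web.

```python
import random

def build_candidate_list(pos_candidates, neg_candidates, k=20, seed=42):
    """Sample k distractors plus one ground-truth element, then shuffle
    deterministically so the same elements land in the same positions."""
    rng = random.Random(seed)
    distractors = rng.sample(neg_candidates, k)   # k negatives from the pool
    ground_truth = rng.choice(pos_candidates)     # one positive element
    candidates = distractors + [ground_truth]
    rng.shuffle(candidates)                       # scrambled order seen by the VLM
    return candidates
```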

> A snapshot with 10 distractors makes a stronger assumption (a more powerful retriever with nonzero P@11) than a snapshot with 30 distractors (nonzero P@31). This treatment keeps *MultiModal-Mind2Web~* a very accessible and generic benchmark for VLMs that requires no complex, stateful setup. The distractor count also directly affects the context length required of the VLM and the difficulty of the benchmark in terms of assessing a VLM's in-context learning capabilities.

Agent evaluations, whether offline or online, are always dynamic. We have built an internal, generic environment that supports candidate sampling as well as simulation of various online environments for evaluating agents. This dataset is taken from one particular episode, hence the name "snapshot".

### Usage as a generic VLM eval

*MultiModal-Mind2Web~* can be used as a generic eval of a VLM to assess various aspects of grounded UI understanding and planning, and can be run alongside existing generalized benchmarks like [MMMU](https://mmmu-benchmark.github.io/). Below is an example implementation of a baseline `gpt-4o` agent that uses the dataset over two rounds of action generation and grounding:

```python
from openai import OpenAI

client = OpenAI()

def deduce_action(prompt_0, prompt_1):
    # prompt_0 / prompt_1 are the chat-format message lists taken from the
    # dataset columns; if they are stored as JSON strings, deserialize them
    # (e.g. json.loads) before calling this function.
    action_prompt = prompt_0
    grounding_prompt = prompt_1

    # Round 1: generate a textual description of the next action.
    resp1 = client.chat.completions.create(
        model="gpt-4o",
        messages=action_prompt,
        max_tokens=500,
        temperature=0,
    )
    response = resp1.choices[0].message.content

    # Round 2: append the model's own action description, then ask it to
    # ground the action to one of the candidate elements.
    grounding_prompt = (
        action_prompt
        + [
            {
                "role": "assistant",
                "content": [{"type": "text", "text": f"\n\n{response}"}],
            },
        ]
        + grounding_prompt
    )

    resp2 = client.chat.completions.create(
        model="gpt-4o",
        messages=grounding_prompt,
        max_tokens=500,
        temperature=0,
    )

    final_response = resp2.choices[0].message.content
    return final_response
```

Here, `prompt_0` and `prompt_1` correspond to the column values in the files, and `final_response` can either be parsed or evaluated against the target values `target_elements`, `target_op`, and `target_op_value` via a VQA model.
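
How `final_response` is parsed depends entirely on the output format that `prompt_1` requests, so the sketch below is only a rough illustration: it assumes a SeeAct-style reply with labeled `ELEMENT`, `ACTION`, and `VALUE` lines (an assumption, not something guaranteed by the dataset) and scores a step by exact match against the target columns; a VQA-model-based comparison would replace `score_step`.

```python
import re

def parse_final_response(final_response: str) -> dict:
    """Best-effort parse of a reply assumed to look like:
    ELEMENT: B
    ACTION: TYPE
    VALUE: new york
    """
    parsed = {}
    for key in ("ELEMENT", "ACTION", "VALUE"):
        match = re.search(rf"{key}\s*:\s*(.+)", final_response, flags=re.IGNORECASE)
        parsed[key.lower()] = match.group(1).strip() if match else ""
    return parsed

def score_step(parsed: dict, target_choice: str, target_op: str, target_op_value: str) -> bool:
    """Exact-match scoring for one step. `target_choice` is the letter of the
    ground-truth element in the scrambled candidate list, derived separately
    from `candidates` and `target_elements`."""
    element_ok = parsed["element"].upper().startswith(target_choice.upper())
    op_ok = parsed["action"].upper() == target_op.upper()
    value_ok = (not target_op_value) or parsed["value"].lower() == target_op_value.lower()
    return element_ok and op_ok and value_ok
```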