
Researchers Rank AI Models Based on How Well They Can Navigate Your Android Screen


Too Long; Didn't Read

Researchers at Microsoft and the University of California San Diego have developed an AI agent capable of navigating your smartphone screen.

Authors:

(1) An Yan, UC San Diego, [email protected];

(2) Zhengyuan Yang, Microsoft Corporation, [email protected] (equal contribution);

(3) Wanrong Zhu, UC Santa Barbara, [email protected];

(4) Kevin Lin, Microsoft Corporation, [email protected];

(5) Linjie Li, Microsoft Corporation, [email protected];

(6) Jianfeng Wang, Microsoft Corporation, [email protected];

(7) Jianwei Yang, Microsoft Corporation, [email protected];

(8) Yiwu Zhong, University of Wisconsin-Madison, [email protected];

(9) Julian McAuley, UC San Diego, [email protected];

(10) Jianfeng Gao, Microsoft Corporation, [email protected];

(11) Zicheng Liu, Microsoft Corporation, [email protected];

(12) Lijuan Wang, Microsoft Corporation, [email protected].

Editor’s note: This is part 8 of 13 of a paper evaluating the use of generative AI to navigate smartphones. You can read the rest of the paper via the table of links below.


5 Android Screen Navigation Experiment

5.1 Experimental Setup

Dataset. We use the AITW dataset (Rawles et al., 2023) for our evaluation of Android screen navigation. AITW is a large-scale benchmark for UI control that contains natural language instructions, screenshots from different Android systems at different resolutions, and user-annotated actions. It covers diverse multi-step tasks, such as web and application operations, app installation, and tasks with Google apps, with 715K episodes and 30K unique instructions in total. Table 2 shows the basic statistics of the dataset. We follow the split from previous work (Zhan and Zhang, 2023). Following the previous experimental setting (Rawles et al., 2023), which evaluates PaLM 2 on 288 randomly sampled episodes, we sample 300 episodes from the test split as our test set.
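The snippet below is a minimal sketch of how such a 300-episode test subset could be drawn. The file name, record schema, and fixed seed are illustrative assumptions, not the official AITW release format or the paper's exact sampling procedure.

```python
import json
import random

# Assumed layout: one JSON object per line, each describing a test episode.
random.seed(0)  # fixed seed so the sampled subset is reproducible

with open("aitw_test_split.jsonl") as f:          # hypothetical file name
    episodes = [json.loads(line) for line in f]

test_set = random.sample(episodes, k=300)          # 300-episode evaluation set
print(f"Sampled {len(test_set)} of {len(episodes)} test episodes")
```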


Metrics. Following previous work (Rawles et al., 2023; Zhan and Zhang, 2023), we compute the screen-wise partial action matching score as the main evaluation metric, defined as the number of correct actions divided by the episode length; this score is then averaged over all tested episodes. A predicted action from GPT-4V is considered correct if both the action type and the gesture match the gold ones, i.e., the user actions. A click action is considered correct if the selected element falls within a 14% screen distance of the gold gesture or lands in the same detected bounding box as the user gesture. A scroll action is considered correct if its direction (up, down, left, or right) matches the user gesture. The partial matching score has been shown to correlate with the task-complete score estimated by human evaluation (Rawles et al., 2023), and thus serves as a measure of the action success rate for this task.
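The following is a minimal sketch of this metric, assuming normalized (x, y) screen coordinates in [0, 1] and a simple dictionary format for actions; it mirrors the rules stated above rather than the official AITW evaluation code, and the optional bounding-box lookup is a hypothetical helper.

```python
import math

def action_matches(pred, gold, bbox_of=None):
    """Return True if a predicted action counts as correct for one screen."""
    if pred["type"] != gold["type"]:
        return False
    if gold["type"] == "click":
        # Within 14% of the screen distance from the gold gesture...
        if math.dist(pred["point"], gold["point"]) <= 0.14:
            return True
        # ...or inside the same detected bounding box (optional lookup).
        if bbox_of is not None:
            return bbox_of(pred["point"]) == bbox_of(gold["point"])
        return False
    if gold["type"] == "scroll":
        return pred["direction"] == gold["direction"]  # up/down/left/right
    return True  # other action types (e.g., press home/back) match by type alone

def partial_matching_score(episodes):
    """Average over episodes of (correct actions / episode length)."""
    scores = []
    for ep in episodes:
        correct = sum(
            action_matches(p, g) for p, g in zip(ep["predicted"], ep["gold"])
        )
        scores.append(correct / len(ep["gold"]))
    return sum(scores) / len(scores)
```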


Baselines. We compare with the following baselines (Rawles et al., 2023; Zhan and Zhang, 2023):


• PaLM-2 ZS (Rawles et al., 2023): Zero-shot performance with PaLM-2 (Anil et al., 2023), obtained by feeding the model a textual description of the screen and asking it to predict an action among those supported in AITW. We adopt a previously proposed LLM-based design for device control (Wang et al., 2023), in which the input screen description is converted to HTML syntax (see the prompt-construction sketch after this list).


• PaLM-2 5-shot (Rawles et al., 2023): Five navigation examples are provided as chain-of-thought prompts. The history of prior actions taken by the agent is also fed into the model input.


• ChatGPT 5-shot (Zhan and Zhang, 2023): The input prompts follow the same format as PaLM-2 5-shot. Experiments are conducted via the ChatGPT API.


• Fine-tuned Llama-2 (Zhan and Zhang, 2023): The Llama-2 model (Touvron et al., 2023) is fine-tuned with LoRA (Hu et al., 2021), taking as input the user instruction and screen descriptions in HTML syntax (the same format used for the in-context-learning LLMs) and predicting user actions. The model is fine-tuned on 1% of randomly sampled training data to help it adapt to this task (a LoRA setup sketch also follows this list).
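To make the text-only baselines concrete, here is a hedged sketch of how a screen can be serialized into HTML-like syntax and wrapped into a zero-shot prompt. The element fields, action vocabulary wording, and prompt phrasing are illustrative assumptions, not the exact prompts used by Rawles et al. (2023) or Wang et al. (2023).

```python
def screen_to_html(elements):
    """Render detected UI elements as one HTML-like tag per line."""
    return "\n".join(
        f'<{el["ui_type"]} id={el["id"]}>{el.get("text", "")}</{el["ui_type"]}>'
        for el in elements
    )

def build_zero_shot_prompt(instruction, elements, history=()):
    """Assemble goal, action history, and screen description into one prompt."""
    past = "\n".join(f"- {a}" for a in history) or "- none"
    return (
        f"Goal: {instruction}\n"
        f"Previous actions:\n{past}\n"
        f"Current screen:\n{screen_to_html(elements)}\n"
        "Choose exactly one action: click(id), scroll(up/down/left/right), "
        "type('text'), press_home, press_back, press_enter, or status_complete.\n"
        "Action:"
    )

# Usage example with a toy screen of two buttons.
print(build_zero_shot_prompt(
    "Open the settings app",
    [{"ui_type": "button", "id": 0, "text": "Settings"},
     {"ui_type": "button", "id": 1, "text": "Chrome"}],
))
```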
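And here is a minimal LoRA setup sketch for the fine-tuned Llama-2 baseline, assuming the Hugging Face transformers and peft libraries; the rank, target modules, and checkpoint name are illustrative choices, not the configuration reported by Zhan and Zhang (2023).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                                   # low-rank adapter dimension
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the LoRA adapters are trainable

# Training would then run on (instruction + HTML screen description) -> action
# pairs drawn from 1% of the AITW training episodes, using a standard
# supervised fine-tuning loop; that part is omitted here for brevity.
```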


This paper is available on arxiv under CC BY 4.0 DEED license.