
Researchers Develop Groundbreaking Method to Teach AI to 'Understand' Images Like Never Before


Too Long; Didn't Read

Researchers at Mohamed bin Zayed University of AI developed an AI model that can create text-based conversations tied to specific objects or regions in an image.

Authors:

(1) Hanoona Rasheed, Mohamed bin Zayed University of AI (equally contributing first author);

(2) Muhammad Maaz, Mohamed bin Zayed University of AI (equally contributing first author);

(3) Sahal Shaji, Mohamed bin Zayed University of AI;

(4) Abdelrahman Shaker, Mohamed bin Zayed University of AI;

(5) Salman Khan, Mohamed bin Zayed University of AI and Australian National University;

(6) Hisham Cholakkal, Mohamed bin Zayed University of AI;

(7) Rao M. Anwer, Mohamed bin Zayed University of AI and Aalto University;

(8) Eric Xing, Mohamed bin Zayed University of AI and Carnegie Mellon University;

(9) Ming-Hsuan Yang, University of California - Merced and Google Research;

(10) Fahad S. Khan, Mohamed bin Zayed University of AI and Linköping University.

Editor's Note: This is Part 4 of 10 of a study detailing the development of an AI model that is designed to describe images to users. Read the rest below.


Supplementary Material (Part 1)


Supplementary Material (Part 2)

4. Data Annotation Pipeline

We introduce our automated annotation pipeline used to create the Grounding-anything Dataset (GranD). GranD is a comprehensive, multi-purpose image-text dataset offering a range of contextual information, from fine-grained to high-level details. It aims to overcome challenges in image understanding and dense pixel-level grounding, thereby expanding the capabilities of visual instruction tuning in LMMs.


The pipeline contains four distinct levels (see Fig. 4). i) Level-1 focuses on object localization and provides semantic labels, segmentation masks, attributes, and depth information. ii) Level-2 defines relationships between detected objects. iii) Level-3 organizes information from the first two levels into a hierarchical scene graph, which is used to generate dense captions with an LLM given in-context examples. iv) Level-4 offers enriched contextual information for a deeper understanding of the scene, going beyond what is observed (e.g., historical information about a landmark). Please refer to Appendix A.4 for pipeline implementation details.
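To make the four levels concrete, the sketch below shows one way a per-image annotation record could be organized before it is serialized for the LLM. All class and field names here are hypothetical illustrations; the paper does not prescribe a schema.

```python
# A minimal sketch of a per-image GranD-style record across the four levels.
# All names and fields are hypothetical, not the paper's actual schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ObjectAnnotation:            # Level-1: localization and attributes
    label: str
    box: tuple[float, float, float, float]   # (x1, y1, x2, y2)
    mask_rle: str                             # run-length-encoded segmentation mask
    attributes: list[str] = field(default_factory=list)
    mean_depth: float = 0.0                   # relative depth, used for spatial layering

@dataclass
class Relationship:                # Level-2: grounded relations between objects
    subject_id: int
    object_id: int
    phrase: str                               # e.g. "man riding a horse"

@dataclass
class ImageRecord:
    image_id: str
    objects: list[ObjectAnnotation]
    relationships: list[Relationship]
    landmark: Optional[tuple[str, str]] = None   # (category, sub-category)
    short_captions: list[str] = field(default_factory=list)
    dense_caption: Optional[str] = None          # Level-3 output
    extra_context: Optional[str] = None          # Level-4 output
```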

4.1. Object Localization and Attributes (Level-1)

In level-1, the focus is on detailed object identification within images. First, object bounding boxes are identified using multiple SoTA object detection models. Class-agnostic NMS is applied to each model's predictions to filter out false positives. After this step, bounding boxes from different models are compared using IoU, and a bounding box is retained as an object only if it is detected by at least two other detection models. We also generate attributes for each filtered object using region-based vision-language models and incorporate depth information to contextualize each object's relative position within the scene.
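As a rough illustration of this filtering step, the sketch below applies class-agnostic NMS to each detector's output and then keeps a box only when enough other detectors agree with it by IoU. The thresholds and agreement count are placeholders, not the values used in the paper.

```python
# Illustrative Level-1 filtering: class-agnostic NMS per detector, then a
# cross-model agreement vote. Thresholds here are arbitrary examples.
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def class_agnostic_nms(boxes, scores, iou_thr=0.7):
    """Suppress overlapping boxes regardless of their class label."""
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return [boxes[i] for i in keep]

def cross_model_filter(per_model_boxes, iou_thr=0.5, min_agree=2):
    """Keep a box only if at least `min_agree` other models also detect it."""
    kept = []
    for m, boxes in enumerate(per_model_boxes):
        for b in boxes:
            votes = sum(
                any(iou(b, other) >= iou_thr for other in per_model_boxes[n])
                for n in range(len(per_model_boxes)) if n != m
            )
            if votes >= min_agree:
                kept.append(b)
    return kept
```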

4.2. Relationships and Landmarks (Level-2)

In level-2, multiple short textual descriptions of the overall scene are generated. Phrases extracted from these descriptions are grounded to specific objects from level-1 to form relationships. These relationships articulate connections between multiple objects or define an object's role within the scene. Further, each scene is assigned a landmark category that includes a primary category and a more specific sub-category (see Tab. 7 in the Appendix).
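A simplified view of this grounding step is sketched below: candidate phrases from a short caption are matched to Level-1 object labels (reusing the hypothetical `ObjectAnnotation` sketch above) and stored as relationships. The naive substring matching is purely illustrative; the actual pipeline grounds phrases with dedicated vision-language models.

```python
# Illustrative Level-2 grounding: match caption phrases to Level-1 objects.
# Substring matching is a stand-in for a real phrase-grounding model.
def ground_phrase(phrase, objects):
    """Return indices of Level-1 objects whose label appears in the phrase."""
    phrase_lc = phrase.lower()
    return [i for i, obj in enumerate(objects) if obj.label.lower() in phrase_lc]

def relationships_from_caption(caption, objects):
    """Split a short caption into candidate relation phrases and ground them."""
    relations = []
    for chunk in caption.split(","):
        grounded = ground_phrase(chunk, objects)
        if len(grounded) >= 2:                     # a relation needs two objects
            relations.append({"phrase": chunk.strip(),
                              "subject_id": grounded[0],
                              "object_id": grounded[1]})
    return relations
```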


Figure 4. Automatic Annotation Pipeline of the Grounding-anything Dataset (GranD). Comprising four levels, this pipeline plays a pivotal role in generating GranD's 7.5M unique concepts grounded in 810M regions. Level-1 details objects and attributes; Level-2 adds short captions and relational markers; Level-3 builds a scene graph, hierarchically organizing information from the earlier levels so an LLM can produce grounded dense captions; Level-4 provides additional historical and societal context for a richer visual understanding.

4.3. Scene Graph and Dense Captioning (Level-3)

In level-3, object attributes and labels from level-1 are combined with the relationships and phrases obtained from level-2 to form a hierarchical scene graph. This structured data serves as a query for the LLM to generate dense image captions. To provide additional context, depth values and bounding box coordinates are used to assign each object to a specific spatial layer within the scene, such as immediate foreground, foreground, midground, or background. Additionally, short scene-level captions are incorporated into the scene graph to enhance the LLM's contextual understanding.
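The sketch below illustrates how depth values might be bucketed into spatial layers and how the hierarchical scene graph could be serialized into a textual LLM query, reusing the hypothetical record structure sketched earlier. The layer boundaries are illustrative; the paper does not publish exact thresholds.

```python
# Illustrative Level-3 helpers: depth-based spatial layering and a textual
# serialization of the scene graph for the LLM. Thresholds are examples only.
def spatial_layer(mean_depth):
    """Bucket a normalized depth value (0 = nearest) into a coarse scene layer."""
    if mean_depth < 0.25:
        return "immediate foreground"
    if mean_depth < 0.5:
        return "foreground"
    if mean_depth < 0.75:
        return "midground"
    return "background"

def scene_graph_query(record):
    """Serialize the hierarchical scene graph into a textual query for the LLM."""
    lines = [f"Short captions: {'; '.join(record.short_captions)}"]
    for i, obj in enumerate(record.objects):
        lines.append(
            f"[{i}] {obj.label} ({', '.join(obj.attributes)}) "
            f"at {obj.box} in the {spatial_layer(obj.mean_depth)}"
        )
    for rel in record.relationships:
        lines.append(f"relation: {rel.phrase} "
                     f"({rel.subject_id} -> {rel.object_id})")
    return "\n".join(lines)
```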


Dense Captioning Verification: To enhance the fidelity of the LLM-generated dense captions, we implement an automatic verification pipeline using chain-of-thought prompting. This pipeline produces a checklist of objects, derived from the generated dense caption, that are expected to be present in the image. The associated caption is flagged as inaccurate if any object specified in the checklist is absent from the scene graph. Such captions are then regenerated, incorporating feedback from the initial assessment.
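A minimal sketch of this verification loop, assuming a hypothetical `ask_llm_for_checklist` helper that wraps the chain-of-thought prompted LLM call:

```python
# Illustrative verification step: check that every object the caption implies
# is actually present in the scene graph; flag the caption otherwise.
def verify_dense_caption(dense_caption, record, ask_llm_for_checklist):
    checklist = ask_llm_for_checklist(dense_caption)   # e.g. ["horse", "fence", ...]
    scene_labels = {obj.label.lower() for obj in record.objects}
    missing = [item for item in checklist if item.lower() not in scene_labels]
    if missing:
        # Caption is inaccurate; it would be regenerated with this feedback.
        return False, missing
    return True, []
```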

4.4. Extra Contextual Insights (Level-4)

Level-4 builds on the scene graph from level-3 to obtain a more detailed visual understanding. We query the LLM to extract extended contextual insights beyond basic object identification and relationships, including details about landmarks, historical context, guidelines for interacting with the scene, and even predictive elements about future events. To facilitate this, we prompt the LLM with in-context examples.
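The prompt below is a hedged illustration of how such a Level-4 query with an in-context example might look; the wording and the example are placeholders, not the paper's actual prompts.

```python
# Illustrative Level-4 prompt template with one in-context example.
# The text is a placeholder, not the prompt used in the paper.
LEVEL4_PROMPT = """You are given a scene graph and a dense caption of an image.
Provide extra contextual insights: landmark details, historical context,
guidelines for interacting with the scene, and likely future events.

Example:
Scene graph: [0] clock tower (tall, ornate) in the background ...
Dense caption: A large ornate clock tower rises above a busy square ...
Insights: The tower resembles a 19th-century landmark; visitors typically ...

Scene graph: {scene_graph}
Dense caption: {dense_caption}
Insights:"""

def build_level4_query(record):
    # Reuses the scene_graph_query() sketch from the Level-3 example above.
    return LEVEL4_PROMPT.format(scene_graph=scene_graph_query(record),
                                dense_caption=record.dense_caption)
```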


Table 2. GranD versus existing datasets. GranD uniquely provides three† grounded captions per image, with segmentation masks for every region. AS-1B is shaded to denote its concurrent, non-public status at the time of this publication.


Utilizing our automated annotation pipeline, we annotate a corpus of 11M SAM images [18], which are inherently diverse, high-resolution, and privacy-compliant. The resulting dataset comprises 810M regions, each associated with a segmentation mask, and includes 7.5M unique concepts. Further, the dataset features 84M referring expressions, 22M grounded short captions, and 11M densely grounded captions. To our knowledge, this is the first dataset of this scale generated entirely through an automated annotation pipeline (see Tab. 2 for details and Fig. 15 for dataset sample visualizations).



Table 3. GLaMM Performance on the GCG Task: Metrics include METEOR (M), CIDEr (C), AP50, mIoU, and Mask Recall. LISA* indicates LISA adapted for GCG. GLaMM† denotes training excluding the 1000 human-annotated images. GLaMM performs favorably against these baselines.


Table 4. GLaMM Performance on Referring Expression Segmentation: Accuracy of segmentation masks generated from text-based referring expressions on refCOCO, refCOCO+, and refCOCOg surpasses that of closely related work, including LISA, which is specifically designed for this task.


We extend open-source datasets, namely Flickr30K [37], RefCOCOg [16], and PSG [49], by generating compatible GCG annotations. For RefCOCOg, we use the dataset's referring expressions and their connected masks. These expressions offer concise descriptions of distinct objects in the image. With the aid of GPT-4, we blend these referring expressions with contextual information from COCO captions, crafting detailed yet accurate grounded captions while preserving the original referring expressions. This ensures zero error in matching phrases with their corresponding segmentation masks, and yields approximately 24K GCG samples. For PSG, we leverage the dataset's triplet structures, which describe relations between two objects in a scene. These triplets are integrated with COCO captions using GPT-4, resulting in densely annotated captions that can be mapped to segmentation masks and giving us around 31K additional GCG samples. For Flickr30K, we use the 158K Flickr captions and their referring expressions alongside the associated bounding boxes, which are then accurately segmented using HQ-SAM [17].
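As a sketch of the RefCOCOg-to-GCG conversion under these constraints, the snippet below keeps each referring expression verbatim (so its mask mapping stays exact) while a hypothetical `call_gpt4` wrapper weaves the expressions into one grounded caption using the COCO caption for context.

```python
# Illustrative RefCOCOg-to-GCG conversion. `call_gpt4` is a hypothetical
# wrapper around the LLM API; the prompt wording is a placeholder.
def build_gcg_sample(coco_caption, referring_expressions, masks, call_gpt4):
    prompt = (
        "Combine the following referring expressions into one detailed caption "
        "of the image, keeping each expression verbatim.\n"
        f"Image context: {coco_caption}\n"
        f"Referring expressions: {referring_expressions}"
    )
    grounded_caption = call_gpt4(prompt)
    # Each expression must appear verbatim so its mask can be attached exactly.
    assert all(exp in grounded_caption for exp in referring_expressions)
    phrase_to_mask = dict(zip(referring_expressions, masks))
    return {"caption": grounded_caption, "groundings": phrase_to_mask}
```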


In addition, we contribute a small, high-quality manually annotated set to benchmark the GCG task. Using GranD's automatic annotations as a base, annotators refine referring expressions to match the SAM ground-truth masks, yielding around 1000 focused samples for training and 1000 for evaluation (refer to Appendix D and Fig. 14 for the designed prompts and dataset visualizations).


This paper is available on arXiv under the CC BY 4.0 DEED license.