Quantitative Evaluation of O3D-SIM: Success Rate on Matterport3D VLN Tasks

Written by instancing | Published 2025/12/16
Tech Story Tags: deep-learning | habitat-simulator | open-set-3d-mapping | semantic-instance-maps | success-rate-metric | matterport3d-dataset | vision-language-navigation | quantitative-analysis

TL;DR: Quantitatively evaluates O3D-SIM using the Matterport3D dataset and the Success Rate metric in the Habitat simulator.

Abstract and 1 Introduction

  2. Related Works

    2.1. Vision-and-Language Navigation

    2.2. Semantic Scene Understanding and Instance Segmentation

    2.3. 3D Scene Reconstruction

  3. Methodology

    3.1. Data Collection

    3.2. Open-set Semantic Information from Images

    3.3. Creating the Open-set 3D Representation

    3.4. Language-Guided Navigation

  4. Experiments

    4.1. Quantitative Evaluation

    4.2. Qualitative Results

  5. Conclusion and Future Work, Disclosure Statement, and References

4.1. Quantitative Evaluation

To construct O3D-SIM and evaluate it quantitatively, we employ the Matterport3D dataset [36] within the Habitat simulator [37]. Matterport3D is a comprehensive RGB-D dataset encompassing 10,800 panoramic views derived from 194,400 RGB-D images across 90 large-scale buildings. It offers surface reconstructions, camera poses, and 2D and 3D semantic segmentations, which are critical for creating accurate ground-truth models. Both Matterport3D and Habitat are widely used to assess the navigational abilities of VLN agents in indoor settings, enabling robots to execute navigation tasks dictated by natural-language commands in a seamless environment while performance is systematically recorded. To evaluate O3D-SIM, we compiled 5,267 RGB-D frames and their respective pose data from five distinct scenes and applied this dataset across all mapping pipelines included in our assessment. Additionally, we gathered real-world environment data for evaluation, expanding our analysis to six unique scenes.
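
For context, the snippet below sketches how RGB-D frames and camera poses of the kind described above can be pulled from a Matterport3D scene loaded in the Habitat simulator. It is a minimal sketch, assuming a recent habitat-sim release and a locally downloaded Matterport3D scene; the scene path, sensor resolution, action choice, and frame count are placeholders rather than the paper's actual collection setup.

```python
import habitat_sim

# Placeholder path to a locally downloaded Matterport3D scene (.glb); adjust to your setup.
SCENE = "data/scene_datasets/mp3d/17DRP5sb8fy/17DRP5sb8fy.glb"

def make_sim(scene_path):
    backend = habitat_sim.SimulatorConfiguration()
    backend.scene_id = scene_path

    rgb = habitat_sim.CameraSensorSpec()
    rgb.uuid, rgb.sensor_type = "color", habitat_sim.SensorType.COLOR
    rgb.resolution = [480, 640]

    depth = habitat_sim.CameraSensorSpec()
    depth.uuid, depth.sensor_type = "depth", habitat_sim.SensorType.DEPTH
    depth.resolution = [480, 640]

    agent = habitat_sim.agent.AgentConfiguration(sensor_specifications=[rgb, depth])
    return habitat_sim.Simulator(habitat_sim.Configuration(backend, [agent]))

sim = make_sim(SCENE)
frames = []
for _ in range(100):  # the paper collects 5,267 frames over five scenes; 100 is illustrative
    obs = sim.step("move_forward")        # default action from the agent configuration
    state = sim.get_agent(0).get_state()  # world-frame agent pose
    frames.append({
        "rgb": obs["color"],
        "depth": obs["depth"],
        "position": state.position,
        "rotation": state.rotation,       # quaternion
    })
sim.close()
```

Each recorded frame pairs an RGB-D observation with the pose at which it was captured, which is the input a mapping pipeline like O3D-SIM consumes.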

Baseline: We evaluate O3D-SIM against the logical baseline used in our previous work, VLMaps with Connected Components, and also against our earlier approach, SI Maps [1]. These methods are chosen for comparison because they aim to achieve goals similar to ours.

Evaluation Metrics: Like prior approaches [2, 38, 39] in the VLN literature, we use the gold-standard Success Rate metric, also known as Task Completion, to measure the success ratio for the navigation task. We choose Success Rate on navigation tasks because it directly quantifies the overall approach and indirectly quantifies how well O3D-SIM detects instances: if the instances along the way are not properly detected, the queries are bound to fail. We compute the Success Rate through both human and automatic evaluation. For automatic evaluation, we count success if the agent reaches within a threshold distance of the ground-truth goal. Here, the agent's orientation with respect to the goal(s) doesn't matter, so the metric may report success even when the agent fails. For example, if there are multiple paintings and the agent is asked to point to a particular painting at the end of a query, the agent may come within a short distance of the desired painting but end up looking at something undesired. Hence, we also use human evaluation to verify that the agent ends up in a position that satisfies the query. Human verification takes votes from three people and decides, based on these votes, whether a task succeeded. A sketch of this computation is given below.
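
The following is a minimal sketch of how the two Success Rate variants described above could be computed. The per-episode record layout, field names, and the 1.0 m distance threshold are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Hypothetical per-episode records: the agent's final position, the ground-truth
# goal position(s), and three human votes (True = the query was satisfied).
episodes = [
    {"agent_pos": np.array([1.2, 0.0, 3.4]),
     "goal_positions": [np.array([1.5, 0.0, 3.1])],
     "human_votes": [True, True, False]},
    # ... one entry per language-guided navigation query
]

DIST_THRESHOLD = 1.0  # metres; assumed success radius for automatic evaluation


def automatic_success(agent_pos, goal_positions, threshold=DIST_THRESHOLD):
    """Success if the agent stops within `threshold` of any ground-truth goal.
    Orientation is ignored, so this can over-count (the painting example above)."""
    return any(np.linalg.norm(agent_pos - g) <= threshold for g in goal_positions)


def human_success(votes):
    """Majority decision over the three human verifiers' votes."""
    return sum(votes) >= 2


auto_sr = np.mean([automatic_success(e["agent_pos"], e["goal_positions"]) for e in episodes])
human_sr = np.mean([human_success(e["human_votes"]) for e in episodes])
print(f"Automatic Success Rate: {auto_sr:.2%}, Human-verified Success Rate: {human_sr:.2%}")
```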

Results: We present the Success Rate results in Table 1. In our experiments, we observe a marked improvement in performance over the other approaches reported in our paper, especially against the baselines from [1]. O3D-SIM performs better than VLMaps with CC due to its ability to identify instances robustly. SI Maps and O3D-SIM outperform the baselines because they can separate instances. However, O3D-SIM has the edge over SI Maps thanks to its open-set and 3D nature, which allows it to understand the surroundings better.

Authors:

(1) Laksh Nanwani, International Institute of Information Technology, Hyderabad, India; this author contributed equally to this work;

(2) Kumaraditya Gupta, International Institute of Information Technology, Hyderabad, India;

(3) Aditya Mathur, International Institute of Information Technology, Hyderabad, India; this author contributed equally to this work;

(4) Swayam Agrawal, International Institute of Information Technology, Hyderabad, India;

(5) A.H. Abdul Hafez, Hasan Kalyoncu University, Sahinbey, Gaziantep, Turkey;

(6) K. Madhava Krishna, International Institute of Information Technology, Hyderabad, India.


This paper is available on arXiv under a CC BY-SA 4.0 (Attribution-ShareAlike 4.0 International) license.


Published by HackerNoon on 2025/12/16