Table of Links
- Method
- Experiments
- Performance Analysis
- Supplementary Material
- Details of KITTI360Pose Dataset
- More Experiments on the Instance Query Extractor
- Text-Cell Embedding Space Analysis
- More Visualization Results
- Point Cloud Robustness Analysis
5.2 Qualitative Analysis
In addition to the quantitative metrics, we also provide a qualitative analysis comparing the top-1/2/3 cells retrieved by Text2Loc [42] and IFRP-T2P, as depicted in Fig. 6. In the first column, the results indicate that both models can retrieve cells containing the described instances. However, they differ notably in how accurately they respect the given spatial relation descriptions. Specifically, for the "beige parking" instance, which is described as lying to the west of the cell, Text2Loc inaccurately places it to the east of the cell center, whereas IFRP-T2P correctly locates it to the west of the center, in line with the description.

In the second column, the text hints state that the pose is on top of a "dark-green vegetation" and north of a "dark-green parking". For Text2Loc, the parking lies to the north of the cell center in the top-1/2 retrieved cells, and the vegetation sits at the margin of the top-1/2/3 retrieved cells, both inconsistent with the text description. For IFRP-T2P, by contrast, the parking appears south of the cell center in the top-1/2 retrieved cells, and the vegetation appears at the center of the top-1/2/3 retrieved cells, matching the description. Notably, in both cases, only the third cell retrieved by IFRP-T2P exceeds the error threshold. This evidence confirms the superior capacity of IFRP-T2P to interpret and exploit relative position information compared with Text2Loc. More case studies of IFRP-T2P are provided in the supplementary material.
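The error-threshold judgment above reduces to a planar distance check between the ground-truth pose and each retrieved cell center. The sketch below illustrates this; the function name, coordinates, and the 5 m threshold are illustrative assumptions, not values from the paper.

```python
import math

def within_threshold(query_pos, cell_center, threshold=5.0):
    """Return True if the retrieved cell center lies within the
    localization error threshold (meters) of the ground-truth pose.
    The 5 m default is an assumed, illustrative value."""
    dx = query_pos[0] - cell_center[0]
    dy = query_pos[1] - cell_center[1]
    return math.hypot(dx, dy) <= threshold

# Hypothetical ground-truth pose and three retrieved cell centers
# (top-1/2/3), mirroring the case where only the third cell fails.
pose = (10.0, 20.0)
retrieved = [(11.0, 21.0), (13.0, 17.0), (40.0, 55.0)]
hits = [within_threshold(pose, c) for c in retrieved]
print(hits)  # -> [True, True, False]
```

With these made-up coordinates, the first two cells pass and the third exceeds the threshold, analogous to the qualitative result described above.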
5.3 Text Embedding Analysis
Recent years have seen the emergence of large language models (encoders) such as BERT [14], RoBERTa [24], T5 [33], and the CLIP [31] text encoder, each trained on different tasks and datasets. Text2Loc highlights that a pre-trained T5 model significantly enhances text and point cloud feature alignment. Yet the potential of other models, such as RoBERTa and the CLIP text encoder, known for their strength in visual grounding tasks, is not explored in their study. We therefore conduct a comparative analysis of T5-small, RoBERTa-base, and the CLIP text encoder within our model framework. The results in Table 6 indicate that T5-small (61M) achieves 0.24/0.46/0.57 at the top-1/3/5 recall metrics, incrementally outperforming RoBERTa-base (125M) and the CLIP text encoder (123M) while using fewer parameters.
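The top-1/3/5 recall metric used in the comparison counts a query as correct when its ground-truth cell ranks among the k highest-scoring cells under text-cell similarity. A minimal sketch, assuming a precomputed similarity matrix (the function name and toy data are hypothetical):

```python
import numpy as np

def topk_recall(sim, gt_idx, ks=(1, 3, 5)):
    """Top-k retrieval recall.
    sim:    (num_queries, num_cells) text-to-cell similarity matrix.
    gt_idx: ground-truth cell index for each query.
    Returns the fraction of queries whose ground-truth cell
    appears among the k highest-scoring cells, for each k."""
    order = np.argsort(-sim, axis=1)  # cells ranked by descending similarity
    # Rank (0-based) of the ground-truth cell in each query's ranking.
    ranks = np.argmax(order == np.asarray(gt_idx)[:, None], axis=1)
    return {k: float(np.mean(ranks < k)) for k in ks}

# Toy example with random similarities (illustrative only).
rng = np.random.default_rng(0)
sim = rng.standard_normal((100, 50))   # 100 queries, 50 candidate cells
gt = rng.integers(0, 50, size=100)
print(topk_recall(sim, gt))
```

By construction, recall is monotonically non-decreasing in k, consistent with the 0.24/0.46/0.57 pattern reported for T5-small.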
Authors:
(1) Lichao Wang, FNii, CUHKSZ ([email protected]);
(2) Zhihao Yuan, FNii and SSE, CUHKSZ ([email protected]);
(3) Jinke Ren, FNii and SSE, CUHKSZ ([email protected]);
(4) Shuguang Cui, SSE and FNii, CUHKSZ ([email protected]);
(5) Zhen Li (corresponding author), SSE and FNii, CUHKSZ ([email protected]).