
Uni-OVSeg: Weakly-Supervised Open-Vocabulary Segmentation with Cutting-Edge Performance

by Segmentation, November 12th, 2024

Too Long; Didn't Read

Uni-OVSeg offers a powerful solution for open-vocabulary segmentation, outperforming both weakly-supervised and fully-supervised methods. By reducing reliance on complex image-mask-text triplets and refining text descriptions, it achieves remarkable improvements in segmentation quality, especially on the challenging PASCAL Context-459 dataset.

Authors:

(1) Zhaoqing Wang, The University of Sydney and AI2Robotics;

(2) Xiaobo Xia, The University of Sydney;

(3) Ziye Chen, The University of Melbourne;

(4) Xiao He, AI2Robotics;

(5) Yandong Guo, AI2Robotics;

(6) Mingming Gong, The University of Melbourne and Mohamed bin Zayed University of Artificial Intelligence;

(7) Tongliang Liu, The University of Sydney.

Abstract and 1. Introduction

2. Related works

3. Method and 3.1. Problem definition

3.2. Baseline and 3.3. Uni-OVSeg framework

4. Experiments

4.1. Implementation details

4.2. Main results

4.3. Ablation study

5. Conclusion

6. Broader impacts and References


A. Framework details

B. Promptable segmentation

C. Visualisation

5. Conclusion

In conclusion, this paper proposes Uni-OVSeg, an innovative framework for weakly-supervised open-vocabulary segmentation. By using independent image-text and image-mask pairs, Uni-OVSeg effectively reduces the dependency on labour-intensive image-mask-text triplets while achieving impressive segmentation performance in open-vocabulary settings. By employing an LVLM to refine text descriptions and a multi-scale ensemble to enhance the quality of region embeddings, we alleviate the noise in mask-text correspondences and achieve substantial performance improvements. Notably, Uni-OVSeg significantly outperforms previous state-of-the-art weakly-supervised methods and even surpasses the cutting-edge fully-supervised method on the challenging PASCAL Context-459 dataset. This impressive advancement demonstrates the superiority of our proposed framework and paves the way for further research.
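To make the multi-scale ensemble mentioned above more concrete, here is a minimal sketch that ensembles CLIP-style embeddings of a masked region over several crop scales. The `encoder` callable, the scale factors, and the plain averaging are illustrative assumptions rather than the paper's exact recipe.

```python
# A minimal sketch of multi-scale region-embedding ensembling. The encoder,
# crop scales, and averaging scheme are illustrative assumptions, not the
# paper's exact implementation.
import torch
import torch.nn.functional as F


def multiscale_region_embedding(image, mask, encoder,
                                scales=(1.0, 1.5, 2.0), crop_size=224):
    """Ensemble embeddings of one masked region over several crop scales.

    image:   (3, H, W) float tensor
    mask:    (H, W) binary tensor marking the region
    encoder: callable mapping (1, 3, crop_size, crop_size) -> (1, D)
    """
    H, W = mask.shape
    ys, xs = torch.nonzero(mask, as_tuple=True)
    y0, y1 = ys.min().item(), ys.max().item() + 1
    x0, x1 = xs.min().item(), xs.max().item() + 1
    cy, cx = (y0 + y1) / 2, (x0 + x1) / 2
    h, w = y1 - y0, x1 - x0

    embeddings = []
    for s in scales:
        # Expand the region's bounding box by the scale factor, clipped
        # to the image borders.
        top = max(int(cy - s * h / 2), 0)
        bot = min(int(cy + s * h / 2), H)
        left = max(int(cx - s * w / 2), 0)
        right = min(int(cx + s * w / 2), W)
        crop = image[:, top:bot, left:right].unsqueeze(0)
        crop = F.interpolate(crop, size=(crop_size, crop_size),
                             mode="bilinear", align_corners=False)
        embeddings.append(F.normalize(encoder(crop), dim=-1))

    # Average the per-scale embeddings and re-normalise so the result can
    # be compared to text embeddings by cosine similarity.
    return F.normalize(torch.stack(embeddings).mean(dim=0), dim=-1)
```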


Table 4. Mask classification performance. We first pool the region features based on the provided ground-truth masks. These pooled features are then projected into the CLIP embedding space, where they are classified using text embeddings. We report the Top-1 accuracy (%) and inference time (sec./sample).
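As a rough illustration of the evaluation protocol described in the caption, the sketch below pools dense pixel features with ground-truth masks, projects the pooled vectors into the CLIP embedding space, and scores them against class text embeddings. The `proj` layer and all tensor shapes are hypothetical stand-ins for the paper's actual modules.

```python
# A sketch of the Table 4 protocol: mask-average pooling of dense features,
# projection into the CLIP space, and nearest-text classification. `proj`
# (e.g. torch.nn.Linear(C, D)) and the shapes are assumed for illustration.
import torch
import torch.nn.functional as F


@torch.no_grad()
def classify_masks(pixel_features, gt_masks, proj, text_embeddings):
    """
    pixel_features:  (C, H, W) dense features from the vision backbone
    gt_masks:        (N, H, W) binary ground-truth masks, one per region
    proj:            module mapping C-dim pooled features to the CLIP dim D
    text_embeddings: (K, D) L2-normalised CLIP text embeddings, one per class
    Returns an (N,) tensor of predicted class indices.
    """
    C, H, W = pixel_features.shape
    feats = pixel_features.reshape(C, H * W)                  # (C, HW)
    masks = gt_masks.reshape(gt_masks.shape[0], -1).float()   # (N, HW)

    # Mask-average pooling: mean of the pixel features inside each region.
    pooled = (masks @ feats.t()) / masks.sum(dim=1, keepdim=True).clamp(min=1)

    # Project into the CLIP embedding space and score each region against
    # the class text embeddings by cosine similarity.
    region = F.normalize(proj(pooled), dim=-1)                # (N, D)
    logits = region @ text_embeddings.t()                     # (N, K)
    return logits.argmax(dim=-1)


def top1_accuracy(pred, target):
    # Top-1 accuracy (%) as reported in Table 4.
    return 100.0 * (pred == target).float().mean().item()
```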




This paper is available on arXiv under a CC BY 4.0 DEED license.