Diving into Underwater: Segment Anything Model Guided Underwater Salient Instance Segmentation and A Large-scale Dataset

Shijie LIAN, Ziyi ZHANG, Hua LI, Wenjie LI, Laurence Tianruo YANG, Sam KWONG, Runmin CONG

Research output: Book Chapters | Papers in Conference Proceedings › Conference paper (refereed) › peer-review

Abstract

With the breakthrough of large models, the Segment Anything Model (SAM) and its extensions have been applied to diverse computer vision tasks. Underwater salient instance segmentation is a foundational and vital step for many underwater vision tasks, yet it often suffers from low segmentation accuracy due to complex underwater conditions and the limited adaptability of existing models. Moreover, the lack of large-scale datasets with pixel-level salient instance annotations has impeded the development of machine learning techniques in this field. To address these issues, we construct the first large-scale underwater salient instance segmentation dataset (USIS10K), which contains 10,632 underwater images with pixel-level annotations across 7 categories from various underwater scenes. We then propose an Underwater Salient Instance Segmentation architecture based on the Segment Anything Model (USIS-SAM), designed specifically for the underwater domain. We devise an Underwater Adaptive Visual Transformer (UA-ViT) encoder to incorporate underwater domain visual prompts into the segmentation network, and we further design an out-of-the-box underwater Salient Feature Prompter Generator (SFPG) that automatically generates salient prompts instead of requiring explicit foreground points or boxes as in SAM. Comprehensive experimental results show that our USIS-SAM achieves superior performance on the USIS10K dataset compared to state-of-the-art methods. The dataset and code are released on GitHub.
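The abstract describes the architecture only at a high level. As a rough illustration of the pattern it names, the PyTorch sketch below shows a lightweight adapter inserted into a ViT block (in the spirit of the UA-ViT encoder) and a small module that predicts prompt embeddings directly from image features instead of user-supplied points or boxes (in the spirit of the SFPG). All class names, dimensions, and wiring here are hypothetical assumptions for illustration, not the authors' released implementation.

# Illustrative sketch only: adapter-augmented ViT block plus an automatic
# prompt generator, mirroring the pattern the abstract describes. Module
# names and dimensions are hypothetical, not the authors' released code.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Lightweight bottleneck adapter inserted into a (frozen) ViT block."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual bottleneck: only the adapter is trained, letting the
        # encoder absorb underwater-domain cues without full fine-tuning.
        return x + self.up(self.act(self.down(x)))


class AdapterViTBlock(nn.Module):
    """A standard pre-norm ViT block with an adapter after each sublayer."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.adapter1 = Adapter(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.adapter2 = Adapter(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = self.adapter1(x)
        x = x + self.mlp(self.norm2(x))
        return self.adapter2(x)


class SalientPromptGenerator(nn.Module):
    """Predicts sparse prompt embeddings from image features, replacing
    the foreground points / boxes a user would normally hand to SAM."""

    def __init__(self, dim: int = 256, num_prompts: int = 4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_prompts, dim))
        self.cross_attn = nn.MultiheadAttention(dim, 8, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, dim) patch tokens; learned queries attend to the
        # salient regions and become prompt embeddings for a mask decoder.
        q = self.queries.unsqueeze(0).expand(feats.size(0), -1, -1)
        prompts, _ = self.cross_attn(q, feats, feats)
        return prompts  # (B, num_prompts, dim)


if __name__ == "__main__":
    tokens = torch.randn(2, 196, 256)          # 14x14 patch tokens, dim 256
    feats = AdapterViTBlock()(tokens)           # adapter-augmented encoding
    prompts = SalientPromptGenerator()(feats)   # automatic salient prompts
    print(feats.shape, prompts.shape)

Training only the adapters and the prompt generator while keeping the pretrained SAM backbone frozen is a common way to adapt a foundation model to a new domain at low cost; whether USIS-SAM freezes its backbone in exactly this way is not stated in the abstract.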

Original language: English
Title of host publication: Proceedings of Machine Learning Research
Editors: Ruslan SALAKHUTDINOV, Zico KOLTER, Katherine HELLER, Adrian WELLER, Nuria OLIVER, Jonathan SCARLETT, Felix BERKENKAMP
Pages: 29545-29559
Number of pages: 15
Volume: 235
Publication status: Published - Jul 2024
Event: 41st International Conference on Machine Learning, ICML 2024 - Vienna, Austria
Duration: 21 Jul 2024 - 27 Jul 2024

Publication series

Name: Proceedings of Machine Learning Research
Volume: 235
ISSN (Print): 2640-3498

Conference

Conference: 41st International Conference on Machine Learning, ICML 2024
Country/Territory: Austria
City: Vienna
Period: 21/07/24 - 27/07/24

Bibliographical note

Publisher Copyright:
Copyright 2024 by the author(s)

Funding

This work was supported in part by the National Key R&D Program of China under Grant 2022ZD0118300; in part by the National Natural Science Foundation of China under Grant 62201179; in part by the Innovation Platform for "New Star of South China Sea" of Hainan Province under Grant NHXXRCXM202306; in part by the specific research fund of the Innovation Platform for Academicians of Hainan Province under Grant YSPTZX202410; in part by the Research Start-up Fund of Hainan University under Grant KYQD(ZR)-22015; in part by the Taishan Scholar Project of Shandong Province under Grant tsqn202306079; in part by the Xiaomi Young Talents Program; and in part by the Hong Kong GRF-RGC General Research Fund under Grants 11209819 and 11203820.
