Semantic ranking structure preserving for cross-modal retrieval

Hui LIU, Yong FENG*, Mingliang ZHOU, Baohua QIANG

*Corresponding author for this work

Research output: Journal Publications › Journal Article (refereed) › peer-review

6 Citations (Scopus)

Abstract

Cross-modal retrieval must not only eliminate the heterogeneity between modalities, but also constrain the return order of retrieval results. Accordingly, in this paper we propose a novel common representation space learning method, called Semantic Ranking Structure Preserving (SRSP), for cross-modal retrieval. First, the dependency relationship between labels is used to minimize the discriminative loss of multi-modal data and to mine latent relationships between samples, yielding richer semantic information in the common space. Second, we constrain the correlation ranking of representations in the common space, so as to bridge the modality gap and promote multi-modal correlation learning. Comprehensive experimental comparisons show that our algorithm substantially enhances performance and consistently outperforms very recent algorithms on widely used cross-modal benchmark datasets.
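The abstract's second step constrains the correlation ranking of representations in the learned common space. As a rough illustration only (this is a standard bidirectional hinge ranking loss, not necessarily the exact formulation used in SRSP; function names and the margin value are assumptions), such a constraint can be sketched as requiring each matched image-text pair to be more similar than any mismatched pair by a margin:

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between two sets of embeddings
    # already mapped into the common representation space.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def ranking_loss(img_emb, txt_emb, margin=0.2):
    """Bidirectional hinge ranking loss: for each matched pair i,
    the similarity S[i, i] must exceed every mismatched similarity
    S[i, j] and S[j, i] by at least `margin`."""
    S = cosine_sim(img_emb, txt_emb)
    pos = np.diag(S)                                   # matched-pair similarities
    cost_i2t = np.maximum(0.0, margin + S - pos[:, None])  # image -> text
    cost_t2i = np.maximum(0.0, margin + S - pos[None, :])  # text -> image
    mask = 1.0 - np.eye(S.shape[0])                    # ignore the matched pairs
    return float(((cost_i2t + cost_t2i) * mask).sum() / S.shape[0])
```

A perfectly aligned batch (each image embedding closest to its own caption embedding) incurs zero loss, while misaligned pairs are penalized, which is what drives the ranking structure of the common space.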

Original language: English
Pages (from-to): 1802-1812
Number of pages: 11
Journal: Applied Intelligence
Volume: 51
Issue number: 3
Early online date: 15 Oct 2020
DOIs
Publication status: Published - Mar 2021
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2020, Springer Science+Business Media, LLC, part of Springer Nature.

Keywords

  • Common space learning
  • Cross-modal retrieval
  • Graph convolutional
  • Semantic structure preserving
