PLPF-VSLAM: An indoor visual SLAM with adaptive fusion of point-line-plane features

Jinjin YAN, Youbing ZHENG, Jinquan YANG, Lyudmila MIHAYLOVA, Weijie YUAN, Fuqiang GU*

*Corresponding author for this work

Research output: Journal Publications › Journal Article (refereed) › peer-review

4 Citations (Scopus)

Abstract

Simultaneous localization and mapping (SLAM) is required in many areas, and visual SLAM (VSLAM) in particular, owing to its low cost and strong scene-recognition capabilities. Conventional VSLAM relies primarily on scene features, such as point features, which can make mapping challenging in scenarios with sparse texture. For instance, in environments with limited (low or even no) texture, such as certain indoor scenes, conventional VSLAM may fail for lack of sufficient features. To address this issue, this paper proposes a VSLAM system that adaptively fuses point-line-plane features (PLPF-VSLAM). As the name implies, it can adaptively employ different fusion strategies on the point-line-plane features for tracking and mapping: in rich-textured scenes it utilizes point features alone, while in non-/low-textured scenes it automatically selects a fusion of point, line, and/or plane features. PLPF-VSLAM is evaluated on two RGB-D benchmarks, namely the TUM data sets and the ICL_NUIM data sets. The results demonstrate the superiority of PLPF-VSLAM over other commonly used VSLAM systems: compared to ORB-SLAM2, PLPF-VSLAM achieves an improvement in accuracy of approximately 11.29%, and its processing speed outperforms PL(P)-VSLAM by approximately 21.57%.
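The adaptive selection described in the abstract — points alone when texture is rich, with lines and/or planes fused in when point features are scarce — could be sketched as follows. This is a minimal illustration of the idea only; the function name, thresholds, and selection logic are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of adaptive point-line-plane feature selection.
# All names and thresholds below are illustrative assumptions, not the
# authors' actual criteria.

def select_features(num_points, num_lines, num_planes,
                    point_thresh=100, line_thresh=20):
    """Choose which feature types to fuse for tracking in one frame.

    Rich-textured frames (many detected point features) use points only;
    low-/non-textured frames fuse in line and/or plane features as well.
    """
    if num_points >= point_thresh:
        return {"points"}            # rich texture: point features suffice
    selected = {"points"}            # keep whatever point features exist
    if num_lines >= line_thresh:
        selected.add("lines")        # structural edges help in sparse texture
    if num_planes > 0:
        selected.add("planes")       # planar regions (e.g., walls, floors)
    return selected
```

Under this sketch, a textured frame tracks with points alone, while a blank indoor wall with a few detected lines and one plane would fall back to fusing all three feature types.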
Original language: English
Pages (from-to): 50-67
Number of pages: 18
Journal: Journal of Field Robotics
Volume: 41
Issue number: 1
Early online date: 28 Aug 2023
DOIs
Publication status: Published - Jan 2024
Externally published: Yes

Funding

This work was supported by the Shandong Province Natural Science Foundation Youth Branch (No. ZR2022QD121), Fundamental Research Funds for the Central Universities (No. 3072022FSC0401, 2023CDJXY‐038, 2023CDJXY‐039), National Natural Science Foundation of China (No. 42174050), Venture & Innovation Support Program for Chongqing Overseas Returnees (No. cx2021047), Chongqing Startup Project for Doctorate Scholars (No. CSTB2022BSXM‐JSX005), and Open Research Projects of Zhijiang Lab (No. K2022NB0AB07).

Keywords

  • mapping
  • non-/low-textured scenarios
  • tracking
  • visual SLAM
