Abstract
This paper introduces a sensing-assisted communication method that relies on the extraction of multi-modal features. Multi-modal data, e.g., vision, radar, lidar, and position, serve as the input to the proposed beamforming method, improving both recognition and beamforming accuracy. First, a 3D-Conv model extracts features from the encoded multi-modal data. Then, a generative pre-trained transformer (GPT) captures correlations across the different modalities and fuses their latent features. The fused features drive beam prediction, approximating the optimal beam index. Experimental results on real-world data validate the effectiveness of the approach, achieving an accuracy of 85% and surpassing traditional single-modal schemes by over 25%.
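The abstract describes a three-stage pipeline: per-modality 3D-Conv feature extraction, transformer-based cross-modal fusion, and classification over a discrete beam codebook. Below is a minimal PyTorch sketch of that pipeline, not the paper's implementation: all layer sizes, module names, the 64-beam codebook, and the input shapes are illustrative assumptions, and a standard transformer encoder stands in for the paper's GPT fusion module.

```python
# Illustrative sketch only: per-modality 3D-Conv encoders, a transformer
# standing in for the paper's GPT fusion stage, and a beam-index head.
# Every size and name here is an assumption, not taken from the paper.
import torch
import torch.nn as nn


class Conv3DEncoder(nn.Module):
    """Encode a clip of shape (B, C, T, H, W) into one latent token."""
    def __init__(self, in_channels: int, embed_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> (B, 64, 1, 1, 1)
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.net(x).flatten(1))  # (B, embed_dim)


class MultiModalBeamPredictor(nn.Module):
    """Fuse per-modality tokens and predict one of `num_beams` indices."""
    def __init__(self, embed_dim: int = 128, num_beams: int = 64):
        super().__init__()
        self.vision_enc = Conv3DEncoder(3, embed_dim)  # RGB clips
        self.radar_enc = Conv3DEncoder(1, embed_dim)   # radar cubes
        self.lidar_enc = Conv3DEncoder(1, embed_dim)   # voxelized lidar
        self.pos_enc = nn.Linear(2, embed_dim)         # (x, y) position
        # One learnable embedding per modality so the fusion stage can
        # tell the stacked tokens apart.
        self.modality_emb = nn.Parameter(torch.zeros(4, embed_dim))
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(embed_dim, num_beams)

    def forward(self, vision, radar, lidar, pos):
        tokens = torch.stack(
            [self.vision_enc(vision), self.radar_enc(radar),
             self.lidar_enc(lidar), self.pos_enc(pos)], dim=1)
        fused = self.fusion(tokens + self.modality_emb)  # (B, 4, D)
        return self.head(fused.mean(dim=1))  # logits over beam indices


# Usage: random tensors standing in for one synchronized sample batch.
model = MultiModalBeamPredictor()
logits = model(
    vision=torch.randn(2, 3, 8, 64, 64),
    radar=torch.randn(2, 1, 8, 32, 32),
    lidar=torch.randn(2, 1, 8, 32, 32),
    pos=torch.randn(2, 2),
)
beam = logits.argmax(dim=-1)  # predicted beam index per sample
```

Collapsing each modality to a single token before fusion keeps the transformer input tiny; richer variants could pass per-frame token sequences instead, at higher cost.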
Original language | English |
---|---|
Title of host publication | ISACom 2023: Proceedings of the 2023 3rd ACM MobiCom Workshop on Integrated Sensing and Communication Systems |
Publisher | Association for Computing Machinery, Inc |
Pages | 19-24 |
Number of pages | 6 |
ISBN (Electronic) | 9798400703645 |
DOIs | |
Publication status | Published - Oct 2023 |
Externally published | Yes |
Event | 3rd ACM MobiCom Workshop on Integrated Sensing and Communication Systems, ISACom 2023 - Madrid, Spain |
Duration | 6 Oct 2023 → 6 Oct 2023 |
Conference
Conference | 3rd ACM MobiCom Workshop on Integrated Sensing and Communication Systems, ISACom 2023 |
---|---|
Country/Territory | Spain |
City | Madrid |
Period | 6/10/23 → 6/10/23 |
Bibliographical note
Publisher Copyright: © 2023 ACM.
Keywords
- Beamforming
- Deep Learning
- IoV
- Multi-Modal