
AgentsCoMerge: Large Language Model Empowered Collaborative Decision Making for Ramp Merging

Research output: Journal Publications › Journal Article (refereed) › peer-review

Abstract

Ramp merging is one of the bottlenecks in traffic systems, commonly causing traffic congestion, accidents, and excess carbon emissions. To address this issue and enhance the safety and efficiency of connected and autonomous vehicles (CAVs) at multi-lane merging zones, we propose AgentsCoMerge, a novel collaborative decision-making framework that leverages large language models (LLMs). Specifically, we first design a scene observation and understanding module that allows an agent to capture the traffic environment. We then propose a hierarchical planning module that enables the agent to make decisions and plan trajectories based on its observations and its own state. In addition, to facilitate collaboration among multiple agents, we introduce a communication module that enables surrounding agents to exchange necessary information and coordinate their actions. Finally, we develop a reinforcement-reflection-guided training paradigm to further enhance the framework's decision-making capability. Extensive experiments demonstrate the superior efficiency and effectiveness of our proposed method for multi-agent collaborative decision-making under various ramp merging scenarios.
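The abstract describes a modular agent pipeline: observe the scene, plan hierarchically, and communicate with surrounding agents. The following is a minimal illustrative sketch of that structure only; all class names, method names, and the gap-based heuristic are hypothetical stand-ins, not the authors' actual LLM-based implementation.

```python
# Hypothetical sketch of an AgentsCoMerge-style modular agent.
# Names and the merge heuristic are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class VehicleState:
    vehicle_id: str
    lane: str          # "main" or "ramp"
    position: float    # longitudinal position (m)
    speed: float       # m/s


@dataclass
class MergeAgent:
    state: VehicleState
    inbox: list = field(default_factory=list)  # messages from other agents

    def observe(self, others):
        """Scene observation module: summarize nearby traffic
        (in the paper, this would feed an LLM prompt)."""
        return {o.vehicle_id: (o.lane, o.position, o.speed) for o in others}

    def communicate(self, others, message):
        """Communication module: broadcast intent to surrounding agents."""
        for other in others:
            other.inbox.append((self.state.vehicle_id, message))

    def plan(self, observation):
        """Hierarchical planning module, reduced to a high-level decision.
        Stand-in heuristic: a ramp vehicle merges only if the nearest
        main-lane vehicle is more than 10 m away."""
        if self.state.lane != "ramp":
            return "keep_lane"
        gaps = [abs(pos - self.state.position)
                for (lane, pos, _speed) in observation.values()
                if lane == "main"]
        return "merge" if (not gaps or min(gaps) > 10.0) else "yield"
```

For example, a ramp agent at position 0 m would return "merge" when the nearest main-lane vehicle is at 50 m, and "yield" when it is at 5 m. The paper's reinforcement-reflection training stage, which refines these decisions, is omitted here.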
Original language: English
Pages (from-to): 9791-9805
Number of pages: 15
Journal: IEEE Transactions on Mobile Computing
Volume: 24
Issue number: 10
Early online date: 24 Apr 2025
DOIs
Publication status: Published - Oct 2025

Bibliographical note

Publisher Copyright:
© 2002-2012 IEEE.

Funding

This work was supported in part by the JC STEM Lab of Smart City funded by the Hong Kong Jockey Club Charities Trust under Grant 2023-0108, and in part by the Hong Kong SAR Government under the Global STEM Professorship and Research Talent Hub. The work of Senkang Hu was supported in part by the Hong Kong Innovation and Technology Commission under InnoHK Project CIMDA. The work of Yiqin Deng was supported in part by the National Natural Science Foundation of China under Grant 62301300. The work of Xianhao Chen was supported by the Research Grants Council of Hong Kong under Grant 27213824 and Grant CRS HKU702/24. Received 17 August 2024; revised 16 April 2025; accepted 18 April 2025. Date of publication 24 April 2025; date of current version 3 September 2025. Recommended for acceptance by S. Wang. (Corresponding author: Yiqin Deng.) Senkang Hu, Zhengru Fang, Zihan Fang, Yiqin Deng, and Yuguang Fang are with the Hong Kong JC STEM Lab of Smart City and the Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]).

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 9 - Industry, Innovation, and Infrastructure

Keywords

  • Collaborative Decision Making
  • Connected and Autonomous Vehicle (CAV)
  • Large Language Model (LLM)
  • Multi-Lane Merging
