This was a hybrid event with in-person attendance in Levine 307 and virtual attendance…
With the rapid evolution of sensing, communication, and computation, integrating learning and control presents significant opportunities for Embodied AI. However, current decision-making frameworks lack a comprehensive understanding of the tridirectional relationship among communication, learning, and control, which poses challenges for multi-agent systems in complex environments.

In the first part of the talk, we focus on learning and control with communication capabilities. We design an uncertainty quantification method for collaborative perception in connected autonomous vehicles (CAVs); our findings demonstrate that communication among multiple agents can enhance object detection accuracy and reduce uncertainty. Building on this, we develop a safe and scalable deep multi-agent reinforcement learning (MARL) framework that leverages shared information among agents to improve system safety and efficiency, and we validate the benefits of communication in MARL, particularly for CAVs in challenging mixed-traffic scenarios. We incentivize agents to communicate and coordinate through a novel reward reallocation scheme based on the Shapley value. Additionally, we present a theoretical analysis of robust MARL methods under state uncertainties, such as uncertainty quantification in the perception modules or worst-case adversarial state perturbations.

In the second part of the talk, we briefly outline our research contributions on robust MARL and data-driven robust optimization for sustainable mobility, and we highlight our results concerning CPS security. Through these findings, we aim to advance Embodied AI and CPS toward safety, efficiency, and resiliency in dynamic environments.
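To give a flavor of the Shapley-value idea behind the reward reallocation scheme: the Shapley value splits a team reward among agents according to their average marginal contribution over all join orders. The sketch below is a minimal, generic illustration, not the speaker's actual method; the agents and the `team_reward` characteristic function (where agent "a" unlocks a coordination bonus) are hypothetical.

```python
from itertools import permutations

def shapley_values(agents, value):
    """Exact Shapley values for a small set of agents.

    Averages each agent's marginal contribution value(S + {agent}) - value(S)
    over every permutation (join order) of the agents.
    """
    shap = {a: 0.0 for a in agents}
    perms = list(permutations(agents))
    for order in perms:
        coalition = frozenset()
        for a in order:
            with_a = coalition | {a}
            shap[a] += value(with_a) - value(coalition)
            coalition = with_a
    return {a: s / len(perms) for a, s in shap.items()}

# Hypothetical team reward: each agent contributes 1, and agent "a"
# adds a bonus of 3 once it coordinates with at least one other agent.
def team_reward(coalition):
    bonus = 3.0 if "a" in coalition and len(coalition) >= 2 else 0.0
    return float(len(coalition)) + bonus

credits = shapley_values(("a", "b", "c"), team_reward)
# The credits sum to the full-team reward (efficiency), and "a" receives
# a larger share for enabling the coordination bonus.
print(credits)  # {'a': 3.0, 'b': 1.5, 'c': 1.5}
```

Reallocating the global reward this way gives each agent a credit signal tied to its own contribution, which is what incentivizes communication and coordination; exact computation is exponential in the number of agents, so practical MARL schemes typically approximate it.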