This was a hybrid event with in-person attendance in Wu and Chen and virtual attendance…
With recent advances in sensing, actuation, computation, and communication, deploying large numbers of robots has become a promising avenue for enabling or speeding up complex tasks in areas such as manufacturing, last-mile delivery, search-and-rescue, and autonomous inspection. My group strives to push the boundaries of multi-agent scalability by understanding and eliciting emergent coordination/cooperation in multi-robot systems as well as in articulated robots (where the agents are individual joints). Our work mainly relies on distributed (multi-agent) reinforcement learning, where we focus on endowing agents with novel information and mechanisms that help them align their decentralized policies toward team-level cooperation. In this talk, I will first summarize my early work on independent learning, before discussing my group’s recent advances in convention-, communication-, and context-based learning. I will present these techniques across a wide variety of robotic applications, such as multi-agent path finding, autonomous exploration/search, task allocation, and legged locomotion. Finally, I will touch on our recent foray into the next frontier for multi-robot systems: cooperation learning for heterogeneous multi-robot teams. Throughout this journey, I will highlight the key challenges surrounding learning representations, policy-space exploration, and scalability of the learned policies, and outline some of the open avenues for research in this exciting area of robotics.