*This was a HYBRID event with in-person attendance in Levine 512 and virtual attendance…
While machine learning algorithms have led to tremendous improvements in many multi-agent domains, scalability remains one of the major challenges for multi-agent decision-making. This talk focuses on two aspects of the scalability challenge: (i) the number of agents, and (ii) large state spaces, and proposes approaches to remedy both.

In the first part, we introduce the mean-field approximation, which simplifies the interactions among a large population of agents by modeling a representative agent's interaction with the population's aggregate distribution rather than with every individual agent. We will present theoretical analysis and convergence results for a class of entropy-regularized mean-field games, together with optimality bounds.

In the second part, we address the large-state-space issue using two ideas: first, hierarchical decomposition, which breaks the original game into a number of smaller games; and second, approximation of expensive operators (e.g., minimax) to reduce computation time in multi-agent reinforcement learning. Convergence analysis and applications to pursuit-evasion games will also be discussed.
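To give a flavor of the operator-approximation idea, here is a minimal, illustrative sketch (not the speaker's actual method): the exact minimax operator over a payoff matrix is replaced by a smooth, entropy-regularized log-sum-exp surrogate, which is cheap to compute and differentiable. The function names (`smooth_max`, `soft_minimax`) and the temperature parameter `tau` are assumptions made for this example.

```python
import math

def smooth_max(values, tau):
    # Entropy-regularized surrogate for max: tau * log(sum(exp(v / tau))).
    # Approaches max(values) as tau -> 0; shifted by max(values) for stability.
    m = max(values)
    return m + tau * math.log(sum(math.exp((v - m) / tau) for v in values))

def smooth_min(values, tau):
    # Soft-min via the soft-max of negated values.
    return -smooth_max([-v for v in values], tau)

def soft_minimax(payoff, tau=0.1):
    """Smoothed pure-strategy upper value of a zero-sum matrix game:
    soft-min over the column player's actions of the soft-max over the
    row player's actions. payoff[i][j] is the row player's reward."""
    n_rows, n_cols = len(payoff), len(payoff[0])
    col_values = [smooth_max([payoff[i][j] for i in range(n_rows)], tau)
                  for j in range(n_cols)]
    return smooth_min(col_values, tau)

# Matching-pennies payoff matrix for the row player (illustrative):
A = [[1.0, -1.0],
     [-1.0, 1.0]]
# With a small temperature the smoothed value is close to the
# exact pure-strategy upper value min_j max_i A[i][j] = 1.
print(soft_minimax(A, tau=0.01))
```

As `tau` shrinks, the surrogate tightens toward the exact minimax value; larger `tau` trades accuracy for a smoother, better-conditioned operator, which is the basic tension behind entropy-regularized formulations.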