This will be a hybrid event with in-person attendance in Levine 307 and virtual attendance on Zoom.
From autonomous vehicles navigating busy intersections to quadrupeds deployed in household environments, robots must operate safely and efficiently around people in uncertain and unstructured situations. However, today’s robots still struggle to robustly handle low-probability events without becoming overly conservative.

In this talk, I will discuss how planning in the joint space of physical and information states (e.g., beliefs) enables robots to make safe, adaptive decisions in human-centered scenarios. I will begin by introducing a unified safety filter framework that combines robust safety analysis with probabilistic reasoning to enable trustworthy human–robot interaction. I will then discuss how robots can reduce conservatism without compromising safety by closing the interaction–learning loop. Next, I will show how game-theoretic reinforcement learning tractably synthesizes a safety filter for high-dimensional systems, guarantees training convergence, and reduces the learned policy’s exploitability. Finally, I will present an algorithmic approach that scales game-theoretic planning to strategic interactions among many agents, resolving conflicts and optimizing social welfare. I will conclude with a vision for next-generation human-centered robotic systems that actively align with their human peers and enjoy verifiable safety assurances.
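For those unfamiliar with the safety filter concept mentioned in the abstract, the sketch below illustrates the general idea in its simplest, least-restrictive form: a task-driven nominal policy runs unmodified until a safety monitor predicts that the next state would leave a certified safe set, at which point a fallback action overrides it. This is a generic toy illustration, not the speaker’s framework; the single-integrator dynamics, the distance-based safety value, and all policies here are hypothetical placeholders (in practice, the safety value might come from Hamilton–Jacobi reachability analysis or a learned critic).

```python
import numpy as np

OBSTACLE = np.array([5.0, 5.0])  # hypothetical unsafe region center
GOAL = np.array([10.0, 10.0])    # hypothetical task goal
SAFE_RADIUS = 1.0                # states closer than this are unsafe

def safety_value(state: np.ndarray) -> float:
    """Toy safety value: positive means the state is certified safe.
    Stand-in for a reachability- or learning-based safety certificate."""
    return float(np.linalg.norm(state - OBSTACLE) - SAFE_RADIUS)

def nominal_policy(state: np.ndarray) -> np.ndarray:
    """Task-driven action: head straight toward the goal (placeholder)."""
    direction = GOAL - state
    return direction / (np.linalg.norm(direction) + 1e-8)

def safe_policy(state: np.ndarray) -> np.ndarray:
    """Fallback action: steer directly away from the unsafe set (placeholder)."""
    away = state - OBSTACLE
    return away / (np.linalg.norm(away) + 1e-8)

def filtered_action(state: np.ndarray, dt: float = 0.1) -> np.ndarray:
    """Least-restrictive safety filter: pass the nominal action through
    if the one-step-ahead state stays safe; otherwise override it."""
    u_nom = nominal_policy(state)
    next_state = state + dt * u_nom  # toy single-integrator prediction
    if safety_value(next_state) > 0.0:
        return u_nom               # nominal action is safe: do not intervene
    return safe_policy(state)      # intervene only when safety is at risk

if __name__ == "__main__":
    x = np.array([4.0, 4.0])
    for _ in range(5):
        x = x + 0.1 * filtered_action(x)
        print(f"state={x.round(3)}, safety margin={safety_value(x):.3f}")
```

Running the demo, the robot first moves toward the goal, then the filter overrides it as the predicted state approaches the unsafe set, which is exactly the switching behavior a filter of this kind exhibits. The conservatism the abstract refers to arises when the certified safe set is needlessly small, e.g., because it assumes worst-case human behavior; refining it with probabilistic reasoning about other agents is one way to intervene less often without losing the guarantee.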