This was a hybrid event with in-person attendance in Levine 307 and virtual attendance…
ABSTRACT
As AI-enabled systems become integral to critical domains, their robustness is increasingly tested by dynamic environments, continual learning, and inferential uncertainty. Whether an AI proxy informs high-stakes medical decisions or an embodied agent relies on a foundation model to reason across modalities, today’s training and deployment methodologies remain inherently fragile. This fragility often stems from a reliance on stationarity assumptions, overly symmetric training paradigms, and a failure to account for other adapting agents—leading to systems that generalize poorly, misestimate uncertainty, and break down in interactive settings.
This talk presents recent theoretical contributions and algorithmic design principles for robust inference and influence when reasoning with algorithmic agents. In particular, it explores how tools from control and game theory—when integrated into machine learning, and vice versa—enable uncertainty adaptation and the synthesis of decision-making strategies for influencing algorithmic agents. Through motivating examples, the talk will illustrate how bridging these disciplines leads to more robust AI systems that can reason, adapt, and interact effectively in complex, non-stationary environments. The first part will focus on algorithms with non-asymptotic convergence guarantees in time-varying settings with a hierarchical game structure. The second part will address uncertainty quantification and adaptation in safety-critical, multi-agent, embodied AI systems. The talk will conclude with a discussion of open questions and future directions.
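To give a concrete flavor of the "hierarchical game structure in time-varying settings" mentioned above, here is a minimal, hypothetical sketch of leader-follower (Stackelberg) gradient play on a toy quadratic game whose parameters drift over time. The costs, step sizes, and drift model are illustrative assumptions only and are not the speaker's algorithm or its guarantees.

```python
# Illustrative sketch: a toy leader-follower (Stackelberg) game with slowly
# drifting problem data, loosely in the spirit of hierarchical games in
# time-varying settings. All modeling choices here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
d = 3
A = rng.standard_normal((d, d)) * 0.3 + np.eye(d)  # follower's response map
lam = 0.1                                          # leader regularization
eta = 0.2                                          # leader step size

def follower_best_response(x, b_t):
    # Follower minimizes 0.5 * ||y - A x - b_t||^2, so its best response is closed form.
    return A @ x + b_t

def leader_gradient(x, y, target_t):
    # Leader cost: 0.5 * ||y*(x) - target_t||^2 + 0.5 * lam * ||x||^2,
    # differentiated through the follower's best response y*(x) = A x + b_t.
    return A.T @ (y - target_t) + lam * x

x = np.zeros(d)
for t in range(200):
    # Time-varying problem data: parameters drift slowly each round.
    b_t = 0.5 * np.array([np.sin(0.05 * t), np.cos(0.05 * t), 0.0])
    target_t = np.array([1.0, -1.0, 0.5]) + 0.1 * np.sin(0.02 * t)

    y = follower_best_response(x, b_t)             # lower level reacts first
    x = x - eta * leader_gradient(x, y, target_t)  # upper level then updates

    if t % 50 == 0:
        leader_cost = 0.5 * np.sum((y - target_t) ** 2) + 0.5 * lam * np.sum(x ** 2)
        print(f"round {t:3d}  leader cost {leader_cost:.4f}")
```

The hierarchy shows up in the update order: the follower responds to the leader's current decision, and the leader differentiates through that response while the underlying objectives keep shifting, which is the kind of interactive, non-stationary setting the abstract describes.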