This will be a hybrid event with in-person attendance in Levine 307 and virtual attendance on Zoom.
Partial observability is ubiquitous in Reinforcement Learning (RL) applications, where agents must make sequential decisions despite lacking complete information about the latent states of the controlled system. Partially observable RL is notoriously challenging in theory—well-known information-theoretic results show that learning partially observable Markov decision processes (POMDPs) requires an exponential number of samples in the worst case. However, this does not rule out the existence of interesting subclasses of POMDPs that encompass a diverse set of practical applications while remaining tractable.
In this talk, we identify a rich family of tractable POMDPs, which we call weakly revealing POMDPs. This family excludes pathological cases where observations provide so little information that learning becomes infeasible. We prove that for weakly revealing POMDPs, a simple algorithm combining optimism and Maximum Likelihood Estimation (MLE) is sufficient to guarantee polynomial sample complexity. Finally, we discuss the practical implications of this theory, including strategies for collecting samples in partially observable tasks and the limitations of purely model-free algorithms.
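For a more concrete picture of the two ingredients mentioned above, the display below sketches them in LaTeX. The abstract does not spell these objects out, so this is only a sketch based on the standard formulations of the weakly revealing condition and of optimistic MLE; the symbols (emission matrix \(\mathbb{O}_h\), singular-value threshold \(\alpha\), confidence width \(\beta\), model parameters \(\theta\)) are our notation for illustration, not necessarily the definitions used in the talk.

% A rough sketch, assuming the standard formulations (not quoted from the talk).
% (1) Weakly revealing (single-step version): every emission matrix retains
%     enough information about the latent state.
\[
  \sigma_{S}\bigl(\mathbb{O}_h\bigr) \;\ge\; \alpha \;>\; 0
  \qquad \text{for all steps } h,
\]
% where O_h stacks the observation probabilities given each latent state
% (an O-by-S matrix) and sigma_S denotes its S-th largest singular value.
%
% (2) Optimism + MLE: keep every candidate model whose log-likelihood on the
%     trajectories tau_1, ..., tau_{k-1} collected so far is within beta of the
%     maximum-likelihood fit, then act with the policy that is optimal for the
%     most optimistic model in that set.
\[
  \mathcal{M}_k \;=\; \Bigl\{\, \hat\theta \;:\;
  \sum_{i<k} \log \mathbb{P}_{\hat\theta}(\tau_i)
  \;\ge\; \max_{\theta'} \sum_{i<k} \log \mathbb{P}_{\theta'}(\tau_i) \;-\; \beta \,\Bigr\},
  \qquad
  (\theta_k, \pi_k) \;\in\; \operatorname*{arg\,max}_{\hat\theta \in \mathcal{M}_k,\; \pi} V_{\hat\theta}(\pi),
\]
% after which the agent plays pi_k to collect the next trajectory tau_k.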