ABSTRACT
Imagine an unmanned aerial vehicle that successfully navigates a thousand different obstacle environments, or a robotic manipulator that successfully grasps each of a million objects in our dataset. How likely are these systems to succeed on a novel (i.e., previously unseen) environment or object? How can we learn control policies for robotic systems that provably generalize well to environments that our robot has not previously encountered? Unfortunately, current state-of-the-art approaches either do not provide such guarantees at all or do so only under very restrictive assumptions. This is a particularly pressing challenge for robotic systems with rich sensory inputs (e.g., vision) that employ neural network-based control policies.
In this talk, I will present approaches for learning control policies for robotic systems that provably generalize well, with high probability, to novel environments. The key technical idea behind our approach is to leverage tools from generalization theory (e.g., PAC-Bayes theory) and the theory of information bottlenecks. We apply our techniques to examples including navigation and grasping in order to demonstrate the potential to provide strong generalization guarantees for robotic systems with complicated (e.g., nonlinear) dynamics, rich sensory inputs (e.g., RGB-D), and neural network-based control policies.
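To give a sense of the flavor of guarantee PAC-Bayes theory provides (the notation below is illustrative, not drawn from the talk itself), a standard McAllester-style bound reads as follows: if P is a prior distribution over policies fixed before training, Q is the learned posterior, and \(\widehat{C}_S(Q)\) is the average cost (scaled to [0, 1]) over N training environments drawn i.i.d. from an unknown distribution \(\mathcal{D}\), then with probability at least \(1 - \delta\),

\[
C_{\mathcal{D}}(Q) \;\le\; \widehat{C}_S(Q) + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\left(2\sqrt{N}/\delta\right)}{2N}},
\]

where \(C_{\mathcal{D}}(Q)\) is the expected cost on a novel environment drawn from \(\mathcal{D}\). Because the bound holds simultaneously for all posteriors Q, it can itself be optimized as a training objective, trading off empirical performance against the KL divergence from the prior.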