Abstract: Many problems in robotics have unknown, stochastic, high-dimensional, and highly non-linear dynamics, and pose significant challenges to classical control methods. Key difficulties in these problems are that (i) it is often hard to write down, in closed form, a formal specification of the control task (for example, what is the objective function for “flying well”?); (ii) it is difficult to build a good dynamics model because of both data collection and data modeling challenges (similar to the “exploration problem” in reinforcement learning); and (iii) it is expensive to find closed-loop controllers for high-dimensional, stochastic domains. In this talk, I will present learning algorithms with formal performance guarantees which show that these problems can be efficiently addressed in the apprenticeship learning setting, i.e., the setting in which expert demonstrations of the task are available. I will also describe how our apprenticeship learning techniques have enabled us to solve real-world control problems that could not be solved before: they have enabled a quadruped robot to traverse challenging terrain, and a helicopter to fly by far the most challenging aerobatic maneuvers performed by any autonomous helicopter to date, including maneuvers such as chaos and tic-tocs, which only exceptional expert human pilots can fly.