ABSTRACT
Advances in machine learning have made it possible to build algorithms that can make complex and accurate inferences for open-world perception problems, such as recognizing objects in images or words in human speech. These advances have been enabled by improvements in models and algorithms, such as deep neural networks, by increases in available computation, and, crucially, by the availability of large amounts of manually labeled data. However, when we consider how we might build intelligent machines that can act, rather than merely perceive, the requirement for massive human-labeled datasets becomes onerous and, in many cases, prohibitive. In this talk, I will discuss research in my group that aims to make learning fully autonomous by enabling robots to improve continuously from experience they collect on their own, either by attempting tasks in the real world or simply by watching humans act in their natural environments.