Abstract
We would like to augment the basic abilities of a robot by learning to use new sensorimotor primitives (skills) to enable the solution of complex long-horizon problems. However, solving long-horizon problems in complex domains requires flexible generative planning that can compose primitive abilities in novel ways to solve problems as they arise in the world. In order to plan with primitive actions, we must have models of their preconditions and effects: under what circumstances will executing a given primitive achieve a particular effect in the world?
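As a rough illustration only (not the representation used in the talk), one can think of such a model as a primitive skill paired with a learned precondition test and a learned effect predictor; the names `OperatorModel`, `precondition`, `effect`, and the "pick" example below are hypothetical, chosen just to make the idea concrete.

```python
# Minimal sketch of a precondition/effect model for a primitive skill.
# Illustrative assumptions: a dictionary-valued state and a hand-written
# "pick" operator standing in for a learned model.

from dataclasses import dataclass
from typing import Any, Callable, Dict

State = Dict[str, Any]  # hypothetical state description


@dataclass
class OperatorModel:
    name: str
    # Learned precondition: in which states will executing the skill succeed?
    precondition: Callable[[State], bool]
    # Learned effect model: what state results from executing the skill?
    effect: Callable[[State], State]


# Hypothetical "pick" skill: succeeds when the gripper is empty and the object
# is reachable; afterwards the robot is holding the object.
pick = OperatorModel(
    name="pick(obj)",
    precondition=lambda s: s["gripper_empty"] and s["obj_reachable"],
    effect=lambda s: {**s, "gripper_empty": False, "holding": "obj"},
)

state = {"gripper_empty": True, "obj_reachable": True, "holding": None}
if pick.precondition(state):
    state = pick.effect(state)  # a planner chains such models to search for long-horizon plans
```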
This talk will describe methods for learning the conditions of operator effectiveness from small numbers of expensive training examples collected by experimentation on a robot. I’ll demonstrate these methods in an integrated system that combines the newly learned models with an efficient continuous-space robot task and motion planner to learn to solve long-horizon problems.