Abstract
Affordable cognitive robots require many cognitive functions. Among them, I will introduce my research experience with two: skill learning and visual navigation.

It is hard to preprogram all the behaviors or skills a robot needs to perform daily-life tasks, because it is almost impossible to predict everything that will happen during such tasks. Robots therefore need the capacity to be programmed by learning. Many methods have been proposed for skill learning by imitation, but little research has explicitly analyzed what must be attended to, and where, for satisfactory task execution. In this talk, I will introduce motion complexity, a measure for identifying which portions of a motion trajectory require attention. Primitive motions are then extracted from demonstrations by complexity-based motion analysis. These primitives are represented by Bayesian networks together with pre- and post-conditions, where the conditions are likewise identified and associated with their corresponding motion primitives by the same complexity-based analysis. To validate the proposed approach, I will present experimental results in which a robot arm learns and executes several daily-life tasks.

Visual localization and navigation is another key function for indoor, GPS-free cognitive robots. Many visual SLAM techniques exist, but it seems to me that their computational complexity and expensive sensor hardware make them unaffordable for this purpose. In this talk, I will introduce a scene-based localization and navigation technique that employs a cheap Kinect sensor and/or one or two web cameras, and I will show that the robot can reach any destination in very challenging environments.
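To make the complexity-based segmentation idea concrete, the sketch below scores a demonstrated trajectory with a simple turning-angle proxy and splits it at high-complexity regions. The talk's actual motion-complexity measure is not specified in this abstract, so the proxy, the `window` and `threshold` parameters, and the segmentation rule are all illustrative assumptions rather than the presented method.

```python
import numpy as np

def complexity_profile(traj, window=5):
    """Hypothetical per-sample complexity proxy for a demonstrated trajectory.

    traj: (T, D) array of end-effector positions sampled at a fixed rate.
    Each sample is scored by the local turning angle between consecutive
    displacement vectors, a common stand-in for trajectory complexity.
    """
    v = np.diff(traj, axis=0)                                  # per-step displacement
    v = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-9)  # unit directions
    # turning angle between consecutive unit displacement vectors
    ang = np.arccos(np.clip(np.sum(v[:-1] * v[1:], axis=1), -1.0, 1.0))
    # moving-average smoothing so isolated noise spikes are damped
    kernel = np.ones(window) / window
    return np.convolve(ang, kernel, mode="same")

def segment_by_complexity(traj, threshold=0.3):
    """Split the trajectory where the complexity profile crosses the
    threshold, yielding candidate primitive-motion segments (index groups)."""
    c = complexity_profile(traj)
    high = c > threshold
    boundaries = np.flatnonzero(np.diff(high.astype(int))) + 1
    return np.split(np.arange(len(c)), boundaries)
```

Each resulting segment would be a candidate motion primitive; high-complexity segments mark the portions of the demonstration that deserve attention during learning.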
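Likewise, here is a minimal sketch of the scene-based localization idea, assuming an appearance-based formulation: the current camera frame is matched against stored keyframe scenes. The abstract does not give the actual scene representation or matching scheme, so `ScenePlaceRecognizer`, the ORB/BFMatcher choice, and the distance and match-count cutoffs are hypothetical.

```python
import cv2

class ScenePlaceRecognizer:
    """Minimal appearance-based place recognition sketch.

    Recognizes a place by matching the current (grayscale) camera frame
    against stored keyframe scenes using ORB features; ORB is an assumed
    choice, not necessarily the technique presented in the talk.
    """
    def __init__(self):
        self.orb = cv2.ORB_create(nfeatures=500)
        self.matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        self.scenes = []  # list of (place_name, descriptors)

    def add_scene(self, name, image):
        # store ORB descriptors of a keyframe taken at a known place
        _, desc = self.orb.detectAndCompute(image, None)
        if desc is not None:
            self.scenes.append((name, desc))

    def localize(self, image, min_matches=25):
        """Return the stored place whose scene best matches the frame,
        or None if no scene matches well enough."""
        _, desc = self.orb.detectAndCompute(image, None)
        if desc is None:
            return None
        best_name, best_count = None, 0
        for name, ref in self.scenes:
            matches = self.matcher.match(desc, ref)
            good = [m for m in matches if m.distance < 40]  # Hamming cutoff
            if len(good) > best_count:
                best_name, best_count = name, len(good)
        return best_name if best_count >= min_matches else None
```

A navigation layer could then chain recognized places toward the destination, which is consistent with the scene-based approach described above but is not detailed in this abstract.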