Abstract: In this talk, we will present an extension of our previous semantic
mapping methods [1] with means for acquiring models that extend the
information content of semantic maps, enabling the maps to answer the
following categories of queries: “What do parts of the kitchen look
like?”, “How can a container be opened and closed?”, “Where do objects
of daily use belong?”, “What is inside of cupboards/drawers?”, etc.
This is the kind of information required for fetch-and-delivery
applications in household and factory domains. Besides extending the
information content of the environment models, the research presented
in this talk also substantially advances the mechanisms for acquiring
such semantic maps. Instead of acquiring the maps with a more accurate
but slower tilting laser scanner, we use the inexpensive but more
limited Kinect RGB-D sensor, which enables much faster acquisition of
environment models as well as the acquisition of visual environment
representations.
We have also generalized the perception methods, including handle
detection and recognition, so that they are not specific to particular
environments.
Technically, the talk will cover an end-to-end indoor environment mapping system and will feature work that
resulted from a collaboration between researchers from TUM, Bosch RTC Palo Alto, Willow Garage, and ALU Freiburg.
The video of the functional system is available online: http://youtu.be/KHju7IH8nck.
Finally, we will briefly discuss the relation of this work to other
projects, such as the detection and recognition of objects of daily
use, and knowledge representation and reasoning for personal robots.