Abstract: Despite considerable progress in all aspects of machine perception,
using machine vision in autonomous systems remains a formidable
challenge. This is especially true in applications such as robotics, in
which even a small error rate in the perception system can have
catastrophic consequences for the overall system. This talk will review a few ideas that could be used to start
formalizing the issues revolving around integrating vision systems.
They include a systematic approach to the problem of self-assessment of
vision algorithms and of predicting quality metrics on the inputs to the
vision algorithms, ideas on how to manage multiple hypotheses generated
from a vision algorithm rather than relying on a single “hard” decision,
and methods for using external (non-visual) domain- and task-dependent
information. These ideas will be illustrated with examples from recent
vision work on scene understanding, depth estimation, and object recognition.