Abstract: The Bayes criterion is generally
regarded as the holy grail in classification because, for known
distributions, it leads to the smallest possible classification error.
Unfortunately, the Bayes classification
boundary is generally nonlinear and its associated error can only be
calculated under unrealistic assumptions. In this talk, we will show how
these obstacles can be readily and efficiently circumvented, yielding Bayes
optimal algorithms in machine learning, statistics, computer vision, and
other areas. We will first derive Bayes optimal solutions in Discriminant
Analysis.
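For reference, the criterion alluded to above can be stated as follows (a standard sketch; the notation $p(\mathbf{x}\mid\omega_i)$ for the class-conditional densities, $P(\omega_i)$ for the priors, and $\varepsilon^{*}$ for the Bayes error is ours, not the talk's). The Bayes classifier assigns a sample to the class with the largest posterior, and its error is the smallest achievable by any classifier:
\[
\hat{\omega}(\mathbf{x}) = \arg\max_i \, p(\mathbf{x}\mid\omega_i)\, P(\omega_i),
\qquad
\varepsilon^{*} = 1 - \int \max_i \bigl[\, p(\mathbf{x}\mid\omega_i)\, P(\omega_i) \,\bigr] \, d\mathbf{x}.
\]
The resulting decision boundary, where the largest two posteriors are equal, is linear only in special cases (for example, Gaussian class-conditional densities sharing a covariance matrix); in general it is nonlinear, which is the difficulty noted above.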
We will then extend the notion of homoscedasticity (distributions sharing
the same covariance) to spherical-homoscedasticity (distributions sharing
the same covariance up to a rotation) and show how this allows us to
generalize the Bayes criterion beyond the domains in which it has
previously been defined.
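Read this way, the two assumptions can be sketched in terms of the class covariance matrices (the notation $\Sigma_i$ and $\mathbf{R}$ is ours, and the formal definition used in the talk may be stated for distributions on a sphere rather than directly through covariances):
\[
\text{homoscedastic: } \Sigma_1 = \Sigma_2,
\qquad
\text{spherical-homoscedastic: } \Sigma_1 = \mathbf{R}\, \Sigma_2\, \mathbf{R}^{\mathsf{T}}
\]
for some rotation matrix $\mathbf{R}$ (i.e., $\mathbf{R}^{\mathsf{T}}\mathbf{R} = \mathbf{I}$, $\det\mathbf{R} = 1$).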
This will lead to a new concept of kernel mappings with applications in
classification (machine learning), shape analysis (statistics), and
structure from motion (computer vision). We will conclude with an outline
of ongoing research on nonparametric kernel learning.