*This was a HYBRID Event, with in-person attendance for Dr. Fermüller’s talk in Berger Auditorium and virtual attendance via Zoom Webinar…
Visual motion is a powerful cue that animals use, but computational vision has not fully taken advantage of it. Classically, Computer Vision and Robotics seek to reconstruct models of the world by first computing, from consecutive video frames, the displacement of image points or the optical flow, and then computing 3D motion and scene geometry from these measurements. Departing from this approach, I have explored the cue of visual motion along three directions. First, using neuromorphic event-based sensors, which do not record image frames but temporal information about scene changes, we obtain data in the form of point clouds in x-y-t space that approximate continuous motion. By taking advantage of the density of the data at motion boundaries, we developed motion segmentation algorithms for the most challenging scenarios. Second, by changing the classical sequence of computations and estimating 3D motion from robust image motion along gradients before introducing regularization constraints for 3D scene reconstruction, we have developed classical optimization and machine learning algorithms that are more robust and generalize better to new scenarios. Third, I show experiments on visual illusions that give an indication of the motion computations in early visual processing in nature and point to directions for improving current motion computations.
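To make the x-y-t point-cloud representation of event data concrete, here is a minimal Python sketch, not the speaker's actual pipeline: the event array, window length, and density heuristic are hypothetical, illustrating only how asynchronous events form a point cloud whose local density can be probed near motion boundaries.

```python
import numpy as np

# Hypothetical event stream: one row per event, columns (x, y, t, polarity).
# Event cameras emit such tuples asynchronously instead of full frames.
events = np.array([
    [12.0, 40.0, 0.0010, +1],
    [13.0, 40.0, 0.0012, +1],
    [55.0, 80.0, 0.0015, -1],
    # ... millions of events per second in practice
])

def to_point_cloud(events, t_window):
    """Slice events in a time window into an x-y-t point cloud."""
    t0 = events[:, 2].min()
    mask = events[:, 2] - t0 < t_window
    cloud = events[mask, :3]          # keep (x, y, t)
    cloud[:, 2] -= t0                 # re-zero time within the window
    return cloud

def local_density(cloud, radius_xy=2.0, radius_t=0.5e-3):
    """Count neighbors per event (O(n^2), fine for a sketch).
    Density tends to peak where differently moving regions meet."""
    # Scale coordinates so one temporal radius matches one spatial radius.
    scaled = cloud / np.array([radius_xy, radius_xy, radius_t])
    d = np.linalg.norm(scaled[:, None, :] - scaled[None, :, :], axis=-1)
    return (d < 1.0).sum(axis=1) - 1  # exclude the event itself

cloud = to_point_cloud(events, t_window=1e-3)
print(local_density(cloud))
```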
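The "image motion along gradients" in the second direction is commonly called normal flow: the component of optical flow along the image gradient, which follows directly from the brightness-constancy constraint I_x u + I_y v + I_t = 0, since only the projection of the flow onto the gradient is locally observable. A minimal sketch, with toy frame inputs of my own construction:

```python
import numpy as np

def normal_flow(frame0, frame1, eps=1e-6):
    """Per-pixel normal flow from two consecutive frames.

    From brightness constancy Ix*u + Iy*v + It = 0, the observable
    flow component along the gradient has magnitude -It / |grad I|.
    """
    Iy, Ix = np.gradient(frame0.astype(np.float64))   # spatial derivatives
    It = frame1.astype(np.float64) - frame0           # temporal derivative
    grad_mag = np.sqrt(Ix**2 + Iy**2)
    magnitude = -It / (grad_mag + eps)                # normal-flow magnitude
    # Normal-flow vector = magnitude times the unit gradient direction.
    nx = magnitude * Ix / (grad_mag + eps)
    ny = magnitude * Iy / (grad_mag + eps)
    return nx, ny

# Toy example: a bright square shifted one pixel to the right.
f0 = np.zeros((8, 8)); f0[2:5, 2:5] = 1.0
f1 = np.zeros((8, 8)); f1[2:5, 3:6] = 1.0
nx, ny = normal_flow(f0, f1)
```

Because normal flow avoids solving the full aperture problem pixel by pixel, it is a robust input from which 3D motion can be estimated before any regularization for scene reconstruction is introduced, which is the reordering of the classical pipeline described above.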