This was a hybrid event with in-person attendance in Levine 307 and virtual attendance…
Event cameras are bio-inspired sensors that perceive the environment in an entirely different way. Instead of measuring synchronous frames of absolute intensity at fixed intervals, they measure only changes in intensity and do this independently for each pixel, resulting in an asynchronous stream of events. Events thus carry only a compressed visual signal, but with microsecond-level latency and temporal resolution, negligible motion blur, and high dynamic range, while consuming low power and using low bandwidth. However, due to their working principle, event cameras output sparse and asynchronous data, which are not directly compatible with standard computer vision algorithms designed for dense frames. Therefore, the development of new algorithms to process events and leverage the advantages of these cameras is at the forefront of active research in event-based vision. In this talk, we will discuss ways to leverage the advantages of event cameras for high-speed robotics and low-data computational photography. Finally, we will touch upon ways to enhance the efficiency of deep learning-based algorithms with novel asynchronous neural networks that take advantage of the spatiotemporal sparsity in event data.
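To make the data model concrete, here is a minimal sketch (not from the talk; all names, shapes, and values are illustrative) of the event stream the abstract describes, along with one common way to bridge it to frame-based vision: accumulating events over a time window into a dense polarity histogram.

```python
import numpy as np

# Each pixel independently emits an event (t, x, y, p) when its intensity
# changes by more than a contrast threshold; p is the polarity
# (+1 brighter, -1 darker). Timestamps are in microseconds.
events = np.array([
    # (t_us,  x,  y,  p)
    (105,    12,  7, +1),
    (118,    12,  8, -1),
    (230,    40, 22, +1),
    (231,    41, 22, +1),
], dtype=np.int64)

def events_to_frame(events, height, width, t_start, t_end):
    """Accumulate events in [t_start, t_end) into a dense 2D histogram.

    One simple way to make the sparse, asynchronous stream compatible
    with standard frame-based algorithms: each output pixel holds the
    signed sum of event polarities observed in the window.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    t, x, y, p = events.T
    mask = (t >= t_start) & (t < t_end)
    np.add.at(frame, (y[mask], x[mask]), p[mask])
    return frame

frame = events_to_frame(events, height=32, width=64, t_start=0, t_end=250)
print(frame.sum())  # net polarity over the window: +2
```

Representations like this trade away some temporal resolution for compatibility with dense networks; the asynchronous neural networks mentioned in the talk instead operate on the event stream directly to preserve its spatiotemporal sparsity.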