This was a hybrid event, with in-person attendance in Wu and Chen and virtual attendance. This seminar was NOT recorded.
Modeling 3D objects and scenes poses many challenges, on both the analysis and the synthesis sides. The focus of this talk is the compositional structure of objects (into parts) and of scenes (into objects), and how such structure informs the modeling of 3D geometry, affordances, and functionality. We discuss a number of neural representations for 3D objects and scenes that are, or can be made to be, structure-aware, enabling more efficient reconstruction as well as the creation of variations, both discrete and continuous. The talk traces recent developments in modeling ideas through works that address how to leverage 2D and 3D sensor data (static or dynamic), direct human annotations on 2D images or 3D models, free-form language utterances, and finally physical simulation, all towards learning and exploiting this compositional structure for applications ranging from 3D content creation to robotic manipulation.