This will be a hybrid event with in-person attendance in Wu and Chen and virtual attendance on Zoom.
I will talk about two ways we can design agents with the help of powerful vision/graphics models. In the first project, LucidSim, we augment a traditional robotics simulation engine (MuJoCo) with visual detail from an image generative model. The generator adds diversity and realism to the barebones MuJoCo content, yielding an RGB-only policy, trained entirely in simulation, that generalizes zero-shot to the real world. In the second project, ASAL, we use a visual recognition model to search for artificial lifeforms that display distinct and interesting behaviors. This process can discover cellular automata that are open-ended like Conway’s Game of Life, particle swarms that flock like Boids, and more.