This was a hybrid event with in-person attendance in Levine 307 and virtual attendance…
The rise of recent Foundation models (and applications, e.g., ChatGPT) offers an exciting glimpse into the capabilities of large deep networks trained on Internet-scale data. They hint at a possible blueprint for building generalist robot brains that can do anything, anywhere, for anyone. Nevertheless, robot data is expensive, and until we can bring robots out into the world, already doing useful things in unstructured places, it will be challenging to match the volume and diversity of the data used to train, e.g., today's large language models. In this talk, I will briefly discuss some of the lessons we've learned while scaling real robot data collection, how we've been thinking about Foundation models, and how we might bootstrap off of them (and modularity) to make our robots useful sooner.