In the last few years, the ability of robots to understand and operate in the world around them has advanced considerably. Examples include the growing number of self-driving car systems, the considerable work in robot mapping, and the growing interest in home and service robots. However, one limitation is that robots most often reason and plan using purely geometric models of the world, such as point features, dense occupancy grids, and action cost maps. To plan and reason over long length scales and timescales, and to carry out more complex missions, robots need to reason simultaneously about the abstract representations needed to support task planning and the detailed geometry needed for reliable motion. I will talk about recent work in joint reasoning about semantic representations and physical representations, and what these joint representations mean for planning and decision making.
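To make the idea of a joint representation concrete, here is a minimal sketch (not the speaker's actual system; all class and function names are illustrative assumptions) of a two-layer map that pairs a dense occupancy grid, used for geometric motion queries, with region-level semantic labels, used for abstract task-planning queries, plus a grounding step that maps a symbolic goal back to free geometry:

```python
import numpy as np

class JointMap:
    """Toy two-layer map: dense geometry plus region-level semantics.

    This is an illustrative sketch of the general idea, not an existing API.
    """

    def __init__(self, occupancy, region_ids, region_labels):
        self.occupancy = occupancy          # HxW float grid: P(cell occupied)
        self.region_ids = region_ids        # HxW int grid: region id per cell
        self.region_labels = region_labels  # region id -> semantic label

    def is_traversable(self, cell, threshold=0.5):
        """Geometric query, as a motion planner might use."""
        return self.occupancy[cell] < threshold

    def regions_with_label(self, label):
        """Symbolic query, as a task planner might use (e.g., 'find a kitchen')."""
        return [r for r, lbl in self.region_labels.items() if lbl == label]

    def free_cells_in_region(self, region_id, threshold=0.5):
        """Grounding step: map an abstract goal region back to free geometry."""
        mask = (self.region_ids == region_id) & (self.occupancy < threshold)
        return list(zip(*np.nonzero(mask)))

# Toy 4x4 world: left half is a 'hallway', right half a 'kitchen'.
occ = np.zeros((4, 4))
occ[1, 2] = 0.9                                   # one occupied cell
ids = np.zeros((4, 4), dtype=int)
ids[:, 2:] = 1
m = JointMap(occ, ids, {0: "hallway", 1: "kitchen"})

goal_region = m.regions_with_label("kitchen")[0]  # task-level decision
print(m.free_cells_in_region(goal_region))        # geometric grounding of the goal
print(m.is_traversable((1, 2)))                   # False: that cell is blocked
```

The point of the sketch is the round trip: the task planner reasons over a handful of labeled regions, while the motion planner still has the full grid, and the grounding step ties the two layers together.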