ABSTRACT
To enable the next generation of smart robots and devices that can truly interact with their environments, Simultaneous Localisation and Mapping (SLAM) will progressively develop into a vision-driven geometric and semantic 'Spatial AI' perception capability, giving devices a real-time dynamic model in which to reason intuitively and intelligently about their actions. A fundamental issue is the algorithmic architecture that will allow estimation and machine learning components to come together for efficient, incremental updating of scene representations, and I believe that graph structures, where storage and computation come together, will be the key. New computing and sensing hardware is now becoming available which makes research in this direction a reality.