This was a hybrid event with in-person attendance in Wu and Chen and virtual attendance…
Effective operation of autonomous systems fundamentally depends on appropriate sensing and processing of measurements to enable desired system actions. Model-based methods provide a clear framework for careful proof of system capabilities but suffer from mathematical complexity and scale poorly as probabilistic structure is incorporated. Conversely, learning methods provide viable results in probabilistic and stochastic settings, but they are not generally amenable to rigorous proof of performance. A key point about learning systems is that their results derive from a set of training data and effectively lie within the convex hull of that data. This presentation will focus on the use of model-based nonlinear empirical observability criteria to assess, improve, and bound the performance of learning the pose (position and orientation) of rigid bodies from computer vision. A particular question to be addressed is what sensing data should be captured to best improve the existing training data. The tools leveraged here center on empirical observability Gramian techniques being developed for nonlinear systems in which sensing and actuation are coupled such that the separation principle of linear methods does not hold. These ideas will be discussed relative to engineering applications including motion planning for range- and bearing-only navigation of autonomous vehicles, vortex position and strength estimation from pressure measurements on airfoils, and effective strain-sensor placement on insect wings for inertial measurements.
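For readers unfamiliar with the core tool, the sketch below illustrates the standard empirical observability Gramian construction in Python: perturb each initial-state coordinate by ±ε, simulate the output trajectories, and integrate the pairwise inner products of the output differences. This is a minimal toy illustration, not the speaker's implementation; the function names (`empirical_obs_gramian`), the forward-Euler integrator, and the range-only navigation example at the end are all assumptions chosen for brevity.

```python
import numpy as np

def empirical_obs_gramian(f, h, x0, eps=1e-3, dt=0.01, T=5.0):
    """Empirical observability Gramian of dx/dt = f(x), y = h(x),
    approximated by central differences of output trajectories
    started from perturbed initial conditions."""
    n = len(x0)
    steps = int(T / dt)

    def output_traj(x):
        # Forward-Euler integration; a higher-order solver could be
        # substituted without changing the Gramian construction.
        x = np.array(x, dtype=float)
        ys = []
        for _ in range(steps):
            ys.append(h(x))
            x = x + dt * f(x)
        return np.array(ys)  # shape (steps, p)

    # Output difference trajectories for +/- eps perturbations of each state.
    dY = []
    for i in range(n):
        e = np.zeros(n)
        e[i] = eps
        dY.append(output_traj(x0 + e) - output_traj(x0 - e))

    # W[i, j] = 1/(4 eps^2) * integral over [0, T] of dy_i(t)^T dy_j(t) dt
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            W[i, j] = np.sum(dY[i] * dY[j]) * dt / (4.0 * eps**2)
    return W

# Toy version of the range-only navigation setting mentioned above:
# a planar vehicle drifting at constant velocity, measuring only its
# range to a beacon at the origin (hypothetical parameters).
f = lambda x: np.array([1.0, 0.5])               # constant-velocity drift
h = lambda x: np.array([np.hypot(x[0], x[1])])   # range to origin
W = empirical_obs_gramian(f, h, x0=np.array([2.0, 1.0]))
print(np.linalg.eigvalsh(W))  # small eigenvalues flag weakly observable directions
```

In this framing, the smallest eigenvalue (or the condition number) of W serves as the observability criterion: candidate trajectories or sensor placements that enlarge it make state estimation better conditioned, which is the quantity a motion planner or sensor-placement scheme of the kind described in the talk would seek to maximize.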