This will be a hybrid event with in-person attendance in Wu and Chen and virtual attendance on Zoom.
Motivated by the ever-increasing success of machine learning in language and vision models, many aim to build AI-driven tools for scientific simulation and discovery. Contemporary ML techniques, however, drastically lag behind their comparatively mature counterparts in modeling and simulation, lacking the rigorous notions of convergence, physical realizability, uncertainty quantification, and verification and validation that underpin prediction in high-consequence engineering settings. One reason for this is the use of “off-the-shelf” ML architectures designed for language and vision tasks without specialization to scientific computing. In this work, we establish connections between graph neural networks and the finite element exterior calculus (FEEC). FEEC forms the backbone of modern mixed finite element methods, tying the discrete topology of geometric descriptions of space (cells, faces, edges, nodes, and their connectivity) to the algebraic structure of conservation laws (the div/grad/curl theorems of vector calculus). By building a differentiable learning architecture that mirrors the construction of Whitney forms, we obtain a de Rham complex supporting FEEC, allowing us to learn models that combine the robustness of traditional FEM with the drastic speedups and data assimilation capabilities of ML. We then introduce a novel UQ framework based on optimal recovery in reproducing kernel Hilbert spaces, allowing the model to quantify epistemic uncertainty and providing a practical notion of trust regarding where the model may be reliably employed.
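To make the FEEC connection concrete: the discrete de Rham complex the abstract refers to can be read off directly from mesh connectivity, with signed node-to-edge and edge-to-face incidence matrices playing the roles of grad and curl, and their composition vanishing identically. Below is a minimal NumPy sketch on two triangles sharing an edge; the mesh, orientations, and variable names are illustrative, not the architecture from the talk.

```python
import numpy as np

# Two triangles sharing edge (1, 2): nodes 0..3.
# Edges are ordered node pairs (oriented low index -> high index).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
# Each face is a loop of (edge index, sign); sign is +1 when the
# edge's orientation agrees with the loop's traversal direction.
faces = [
    [(0, +1), (2, +1), (1, -1)],  # triangle 0 -> 1 -> 2 -> 0
    [(2, +1), (4, +1), (3, -1)],  # triangle 1 -> 2 -> 3 -> 1
]

# Discrete gradient: signed node-to-edge incidence matrix.
grad = np.zeros((len(edges), 4))
for e, (i, j) in enumerate(edges):
    grad[e, i], grad[e, j] = -1.0, 1.0

# Discrete curl: signed edge-to-face incidence matrix.
curl = np.zeros((len(faces), len(edges)))
for f, loop in enumerate(faces):
    for e, sign in loop:
        curl[f, e] = sign

# Exactness of the complex: the curl of any gradient vanishes,
# the discrete counterpart of curl(grad f) = 0.
assert np.allclose(curl @ grad, 0.0)
print("curl @ grad == 0 on the discrete de Rham complex")
```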
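The Whitney forms mentioned in the abstract interpolate these discrete degrees of freedom back to fields on each element. In the standard lowest-order construction (textbook FEEC background, not a detail specific to this talk), the 0-forms are the barycentric hat functions \( \lambda_i \), and the 1-form attached to the oriented edge \([i, j]\) is

\[ W_{[i,j]} = \lambda_i \, \nabla \lambda_j - \lambda_j \, \nabla \lambda_i, \]

which integrates to one along its own edge and to zero along every other edge, so the exterior derivative maps each space of Whitney forms into the next and reproduces the incidence-matrix complex sketched above.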
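Finally, the UQ framework rests on optimal recovery in an RKHS: the minimum-norm interpolant of the training data is the kernel regression mean, and the associated power function bounds the worst-case pointwise error over the unit ball of the space, vanishing at the data and growing where the model extrapolates. The same quantities coincide with a Gaussian-process posterior mean and standard deviation, which is one practical reading of the epistemic-uncertainty claim. Here is a generic sketch with an illustrative squared-exponential kernel and toy 1-D data; the kernel choice, lengthscale, and data are assumptions for illustration, not the talk's specific construction.

```python
import numpy as np

def k(x, y, ell=0.3):
    """Squared-exponential kernel (illustrative choice of RKHS)."""
    return np.exp(-((x[:, None] - y[None, :]) ** 2) / (2.0 * ell**2))

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0.0, 1.0, size=8))
y_train = np.sin(2.0 * np.pi * x_train)        # toy ground truth
x_test = np.linspace(0.0, 1.0, 200)

K = k(x_train, x_train) + 1e-10 * np.eye(len(x_train))  # jitter for conditioning
k_star = k(x_test, x_train)

# Optimal recovery: the minimum-RKHS-norm interpolant of the data.
mean = k_star @ np.linalg.solve(K, y_train)

# Power function: worst-case pointwise error over the unit RKHS ball.
# It is zero at the training points and grows under extrapolation,
# giving a pointwise epistemic-uncertainty estimate.
quad = np.einsum("ij,ji->i", k_star, np.linalg.solve(K, k_star.T))
power = np.sqrt(np.maximum(1.0 - quad, 0.0))   # k(x, x) = 1 for this kernel

print(f"max power over the domain: {power.max():.3f}")
```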