GSA Connects 2021 in Portland, Oregon

Paper No. 71-8
Presentation Time: 10:15 AM

LEARNING TO REGULARIZE WITH A VARIATIONAL AUTOENCODER FOR HYDROGEOLOGIC INVERSE ANALYSIS


WU, Hao1, O'MALLEY, Daniel1, GOLDEN, John2 and VESSELINOV, Velimir3, (1)Earth and Environmental Sciences Division, Los Alamos National Laboratory, Los Alamos, NM 87544, (2)Computer, Computational and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, NM 87544, (3)Computational Earth Science, Los Alamos National Laboratory, Los Alamos, NM 87545

Inverse problems often involve matching observational data using a physical model that takes many parameters as input. These problems tend to be under-constrained and require regularization to impose additional structure on the solution in parameter space. A central difficulty in regularization is turning a complex conceptual model of this additional structure into a functional mathematical form to be used in the inverse analysis. In this work, we propose a method of regularization involving a machine learning technique known as a variational autoencoder (VAE). The VAE is trained to map a low-dimensional set of latent variables with a simple structure to the high-dimensional parameter space with a complex structure. We train a VAE on unconditioned realizations of the parameters for a hydrogeological inverse problem. These unconditioned realizations neither rely on the observational data used to perform the inverse analysis nor require any forward runs of the groundwater flow model, thus making the computational cost of generating the training data minimal. The central benefit of this approach is that regularization is then performed on the latent variables from the VAE, which can be regularized simply. The second benefit of this approach is that the VAE reduces the number of variables in the optimization problem, thus making gradient-based optimization more computationally efficient when adjoint methods are unavailable. Prior work on VAEs in the context of inverse analysis has focused on this second benefit, but this benefit goes away when adjoint methods are available. Since automatic differentiation (and hence adjoint methods) is increasingly available due to the rise of machine learning tools, regularization will serve as the long-term benefit of these methods. Our approach constitutes a novel framework for regularization and optimization, readily applicable to a wide range of inverse problems.
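To make the workflow concrete, the sketch below illustrates the general idea in JAX; it is not the authors' implementation. A trained VAE decoder maps a low-dimensional latent vector z to the high-dimensional parameter field, so the inverse analysis becomes an optimization over z, and because the latent prior is a standard Gaussian by VAE construction, regularization reduces to a simple L2 penalty on z. The decoder architecture, decoder weights, toy forward model, and all dimensions here are hypothetical placeholders standing in for the trained VAE and the groundwater flow model.

```python
import jax
import jax.numpy as jnp

def decode(z, weights):
    """Stand-in for a trained VAE decoder: maps low-dimensional latent
    variables z to a high-dimensional parameter field (e.g., log-permeability
    on a grid). A two-layer MLP is used purely for illustration."""
    h = jnp.tanh(weights["W1"] @ z + weights["b1"])
    return weights["W2"] @ h + weights["b2"]

def forward_model(params):
    """Placeholder for the groundwater flow model; returns predicted
    observations. A simple subsampling operator keeps the sketch runnable."""
    return params[::10]

def objective(z, weights, observations, reg_strength=1.0):
    params = decode(z, weights)
    misfit = jnp.sum((forward_model(params) - observations) ** 2)
    # Regularization acts on the latent variables, whose prior is a
    # standard Gaussian by VAE construction, so an L2 penalty suffices.
    return misfit + reg_strength * jnp.sum(z**2)

# Automatic differentiation supplies the gradient (adjoint-style),
# so the low latent dimension is a convenience rather than a necessity.
grad_fn = jax.grad(objective)

# Hypothetical sizes and randomly initialized decoder weights; in practice
# the weights would come from VAE training on unconditioned realizations.
latent_dim, hidden_dim, param_dim = 8, 64, 1000
k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
weights = {
    "W1": 0.1 * jax.random.normal(k1, (hidden_dim, latent_dim)),
    "b1": jnp.zeros(hidden_dim),
    "W2": 0.1 * jax.random.normal(k2, (param_dim, hidden_dim)),
    "b2": jnp.zeros(param_dim),
}
observations = jax.random.normal(k3, (100,))

# Plain gradient descent on the latent variables, for illustration only.
z = jnp.zeros(latent_dim)
for _ in range(200):
    z = z - 0.01 * grad_fn(z, weights, observations)
```

The design point of the abstract is visible in `objective`: all of the complex geological structure lives inside `decode`, while the regularizer itself is a trivial quadratic on z rather than a hand-crafted functional on the full parameter field.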