An ensemble variational filter for sequential inverse problems
C L Farmer (Mathematical Institute, University of Oxford, UK)

Given a model dynamical system, a model of the measuring apparatus relating states to observations, and a prior assessment of uncertainty, the probability density of subsequent system states, conditioned upon the history of the observations, is of considerable practical interest. When observations are made at discrete times, the evolving probability density is known to satisfy the Bayesian filtering equations. This paper describes the difficulties in approximating the evolving probability density by a Gaussian mixture (i.e. a sum of Gaussian densities). In general this leads to a sequence of optimisation problems and related high-dimensional integrals. Further difficulties arise from the need to use only a small number of densities in the mixture, to maintain sparsity of the associated matrices, and to compute first and second derivatives of the misfit between predictions and observations. Adjoint methods, Taylor expansions, Gaussian random fields and Newton's method can be combined to provide a possible solution. The approach uses ensemble filtering and '4-D Var' methods, and employs the dynamical model, rather than the ensemble, to determine the covariance matrices. Sparsity of the matrices enables the method to approximate the solutions of high-dimensional sequential inverse problems.
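For reference, the discrete-time Bayesian filtering equations mentioned above comprise a prediction (Chapman-Kolmogorov) step and an update (Bayes) step. In an assumed notation, with state x_k, observation history y_{1:k}, transition density p(x_k | x_{k-1}) and likelihood p(y_k | x_k), they can be written as

    p(x_k \mid y_{1:k-1}) = \int p(x_k \mid x_{k-1}) \, p(x_{k-1} \mid y_{1:k-1}) \, \mathrm{d}x_{k-1},
    p(x_k \mid y_{1:k}) \propto p(y_k \mid x_k) \, p(x_k \mid y_{1:k-1}).

The Gaussian-mixture approach approximates each of these filtering densities by a finite sum of Gaussian densities.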