Uncertainty Quantification in Bayesian Calibration of Computer Models
In a Bayesian framework, the objective is to capture the distribution of the true model parameters through an iterative process of model calibration. The solutions produced by this process are then ranked by quality, so that models with lower misfit values receive a higher rank. The choice of misfit function is closely tied to the underlying assumption about the likelihood of the models being calibrated.
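The link between misfit and likelihood can be made concrete with a minimal sketch: assuming independent Gaussian measurement errors, the misfit is the negative log-likelihood up to a constant, and ranking by misfit orders models by likelihood. The function and model names below are hypothetical illustrations, not part of the original work.

```python
def gaussian_misfit(observed, simulated, sigma):
    # Negative log-likelihood (up to an additive constant) under
    # independent Gaussian measurement errors with std. dev. sigma:
    #   M = sum_i (d_i - s_i)^2 / (2 * sigma^2)
    return sum((d - s) ** 2 for d, s in zip(observed, simulated)) / (2 * sigma ** 2)

# Hypothetical example: rank candidate models by misfit (lower is better,
# i.e. higher likelihood and therefore higher rank).
observed = [1.0, 2.0, 3.0]
candidates = {
    "model_a": [1.1, 1.9, 3.2],
    "model_b": [0.5, 2.5, 2.0],
}
ranked = sorted(candidates, key=lambda m: gaussian_misfit(observed, candidates[m], sigma=0.5))
print(ranked)  # the better-fitting model comes first
```

A different error assumption (e.g. correlated or non-Gaussian errors) would change the misfit expression, which is exactly why the likelihood choice matters for the ranking.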
Our research aims to examine, both qualitatively and quantitatively, a new likelihood function that accounts for all traceable sources of error, such as model inadequacy.
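One simple way to fold an additional error source such as model inadequacy into the likelihood, sketched below purely as an illustration (this is an assumption, not the likelihood proposed in this research), is to treat it as an extra independent Gaussian term, so its variance adds to the measurement-error variance.

```python
def misfit_with_inadequacy(observed, simulated, sigma_meas, sigma_model):
    # Illustrative assumption: model inadequacy acts as an extra
    # independent Gaussian error source, so variances add.
    var_total = sigma_meas ** 2 + sigma_model ** 2
    return sum((d - s) ** 2 for d, s in zip(observed, simulated)) / (2 * var_total)

# Acknowledging inadequacy inflates the error budget, which lowers the
# misfit assigned to a given residual and flattens the model ranking.
print(misfit_with_inadequacy([1.0, 2.0], [1.2, 1.8], sigma_meas=0.1, sigma_model=0.2))
```

More realistic treatments model the inadequacy as correlated in time or space, which replaces the scalar variance with a covariance matrix in the misfit expression.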
Prof. Mike Christie and Dr. Vasily Demyanov