Thursday 21 Jan 2016: Statistical Science Seminar: Posterior Belief Assessment: extracting meaningful subjective judgements from Bayesian analyses of complex statistical models
Danny Williamson - University of Exeter
In a Bayesian analysis of any reasonable complexity, many, if not all, of the prior and likelihood judgements we specify in order to make progress are not believed (or owned) by either the analyst or the subject expert. In what sense, then, should we be able to attribute meaning to a large sample from the posterior distribution? Foundationally, is the posterior distribution a probability distribution at all and, if not, what is it and what can it be used for?
In this talk I present a paper that appeared in the Lindley Prize edition of Bayesian Analysis last December, which introduces a methodology for extracting judgements about key quantities from a large Bayesian analysis. I call this Posterior Belief Assessment. It is based on the idea that there are many other Bayesian analyses you might have performed (where, for example, you used different prior or model forms for sub-components of the statistical model). We impose forms of exchangeability and co-exchangeability on key derived posterior quantities under each of these theoretical Bayesian analyses, and we use these judgements, a handful of alternative analyses, and temporal sure preference to derive posterior judgements that we show are closer to what de Finetti termed prevision than the corresponding judgements from your original analysis. We argue that posterior belief assessment is a tractable and powerful alternative to robust Bayesian analysis, and we illustrate with an example of calibrating an expensive ocean model in order to quantify uncertainty about global mean temperature in the real ocean.
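The flavour of the idea, stripped of the paper's machinery, can be sketched with a toy example: the same data are analysed under several alternative prior specifications, and the key posterior quantity is collected across those analyses. Everything here is hypothetical illustration (the conjugate Normal model, the particular priors, and the plain average used to combine the analyses are my assumptions); the paper's actual adjustment uses exchangeability, co-exchangeability, and temporal sure preference, not a simple average.

```python
# Hypothetical sketch only: one key posterior quantity, E[mu | data],
# computed under several alternative conjugate Normal priors for an
# unknown mean mu with known variance. A plain average across the
# alternative analyses stands in (crudely) for the paper's
# exchangeability-based posterior belief assessment.
import statistics

data = [2.1, 1.9, 2.4, 2.0, 2.2]   # illustrative observed sample
sigma2 = 0.25                       # known data variance (assumed)

def posterior_mean(prior_mean, prior_var):
    """Posterior mean of mu in the Normal-Normal conjugate model."""
    n = len(data)
    xbar = statistics.fmean(data)
    precision = 1.0 / prior_var + n / sigma2
    return (prior_mean / prior_var + n * xbar / sigma2) / precision

# Alternative prior specifications the analyst "might have used"
alternatives = [(0.0, 10.0), (2.0, 1.0), (1.0, 5.0)]
post_means = [posterior_mean(m, v) for m, v in alternatives]

# Crude stand-in for the adjusted posterior judgement
combined = statistics.fmean(post_means)
print(post_means, combined)
```

The point of the sketch is only that each alternative analysis yields a slightly different posterior judgement, and that a single adjusted judgement is then derived from the collection; the paper's contribution is doing that derivation in a principled, de Finetti-style way rather than by averaging.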