Universal probabilistic programming languages (PPLs) support features such as stochastic branching, which give rise to probabilistic models with stochastic support. We argue that naively applying Bayesian inference to such models can be misguided and will often yield unstable and overconfident inference results. The root cause of this problem is that the posterior of such a program is essentially a Bayesian model average (BMA) over the program’s constituent straight-line programs (SLPs), where each SLP can be viewed as a separate model. We present initial work on an alternative to the “full Bayes” posterior, based on the idea of stacking from the statistics and machine learning literature.
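To make the notion of stochastic support concrete, here is a minimal sketch of a model with stochastic branching, written in Pyro; the model, variable names, and distributions are illustrative assumptions of ours, not an example from the paper:

```python
import torch
import pyro
import pyro.distributions as dist

def model(y):
    # Stochastic branching: the sampled value of `m` decides which
    # downstream sample statements execute, so the set of latent
    # variables (the support) varies across executions.
    m = pyro.sample("m", dist.Bernoulli(0.5))
    if m == 1.0:
        # SLP 1: data explained by a tightly concentrated mean.
        theta = pyro.sample("theta", dist.Normal(0.0, 1.0))
    else:
        # SLP 2: a different, more diffuse sub-model.
        theta = pyro.sample("theta", dist.Normal(0.0, 10.0))
    # Each branch, taken on its own, is a straight-line program (SLP)
    # with fixed support; the full program mixes over them.
    pyro.sample("obs", dist.Normal(theta, 1.0), obs=y)
```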
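The BMA reading of the posterior can be sketched as follows, in our notation rather than necessarily the paper’s. Writing $A_1, \dots, A_K$ for the SLPs and $\mathcal{D}$ for the data, the full-Bayes posterior marginalizes over which SLP was executed:

$$
p(\theta \mid \mathcal{D}) \;=\; \sum_{k=1}^{K} p(A_k \mid \mathcal{D})\, p(\theta \mid \mathcal{D}, A_k),
\qquad
p(A_k \mid \mathcal{D}) \;=\; \frac{Z_k}{\sum_{j=1}^{K} Z_j},
$$

where $Z_k$ is the local normalizing constant (marginal likelihood) of SLP $A_k$; the BMA weights are thus driven entirely by marginal likelihoods. Stacking, in the common leave-one-out formulation (e.g., Yao et al., 2018), instead chooses simplex weights by held-out predictive performance:

$$
\max_{w \in \Delta^{K}} \;\sum_{i=1}^{n} \log \sum_{k=1}^{K} w_k \, p(y_i \mid y_{-i}, A_k).
$$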
Pitfalls of Full Bayesian Inference in Universal Probabilistic Programming