Br J Math Stat Psychol. 2020 Feb;73(1):23-43.
doi: 10.1111/bmsp.12159. Epub 2019 Feb 22.

Towards end-to-end likelihood-free inference with convolutional neural networks


Stefan T Radev et al. Br J Math Stat Psychol. 2020 Feb.

Abstract

Complex simulator-based models with non-standard sampling distributions require sophisticated design choices for reliable approximate parameter inference. We introduce a fast, end-to-end approach to approximate Bayesian computation (ABC) based on fully convolutional neural networks. The method enables users of ABC to derive simultaneously the posterior mean and variance of multidimensional posterior distributions directly from raw simulated data. Once trained on simulated data, the convolutional neural network is able to map real data samples of variable size to the first two posterior moments of the relevant parameters' distributions. Thus, in contrast to other machine learning approaches to ABC, our approach allows us to generate reusable models that can be applied by different researchers employing the same model. We verify the utility of our method on two common statistical models (i.e., a multivariate normal distribution and a multiple regression scenario), for which the posterior parameter distributions can be derived analytically. We then apply our method to recover the parameters of the leaky competing accumulator (LCA) model and compare our results with those of the current state-of-the-art technique, probability density approximation (PDA). Results show that our method exhibits a lower approximation error than other machine learning approaches to ABC. It also performs comparably to PDA in recovering the parameters of the LCA model.
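The amortized workflow the abstract describes — draw parameters from a prior, simulate data, train a network to map raw data to posterior moments, then reuse the trained model on new observations — can be sketched in miniature. The sketch below is not the authors' architecture: it replaces the convolutional network with a least-squares linear readout on hand-crafted summary statistics, and it uses a conjugate normal toy model so the analytic posterior mean is available for comparison. All variable names and the toy model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (assumed for illustration): prior mu ~ N(0, 1),
# observations x_i ~ N(mu, 1), n_obs draws per data set.
n_train, n_obs = 5000, 50

# Step 1: simulate (parameter, data) pairs from the prior and the simulator.
mu = rng.normal(0.0, 1.0, size=n_train)
x = rng.normal(mu[:, None], 1.0, size=(n_train, n_obs))

# Step 2: "train" a model mapping data to the posterior mean of mu.
# A CNN would learn features from the raw samples; here we use fixed
# summaries (mean, std, bias term) and a linear least-squares readout.
feats = np.column_stack([x.mean(axis=1), x.std(axis=1), np.ones(n_train)])
w, *_ = np.linalg.lstsq(feats, mu, rcond=None)

# Step 3: amortized inference — apply the trained model to new data
# without any further simulation.
x_obs = rng.normal(0.7, 1.0, size=n_obs)
est = np.array([x_obs.mean(), x_obs.std(), 1.0]) @ w

# Analytic posterior mean for this conjugate model: n * xbar / (n + 1).
analytic = n_obs * x_obs.mean() / (n_obs + 1)
print(float(est), float(analytic))
```

Because training happens once on simulated pairs, the fitted mapping can be reused on any new data set of the same kind — the reusability property the abstract highlights, here demonstrated with a deliberately simplified stand-in for the network.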

Keywords: approximate Bayesian computation; convolutional network; leaky competing accumulator; likelihood-free inference; machine learning.

