Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability
| Title: | Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability |
|---|---|
| Authors: | Falkiewicz, Maciej, Takeishi, Naoya, Shekhzadeh, Imahn, Wehenkel, Antoine, Delaunoy, Arnaud, Louppe, Gilles, Kalousis, Alexandros |
| Source: | Advances in Neural Information Processing Systems, New Orleans, Louisiana, United States, December 2023 |
| Publication Year: | 2023 |
| Subject Terms: | Statistics - Machine Learning, Computer Science - Learning, Engineering, computing & technology, Computer science |
| Description: | Bayesian inference allows expressing the uncertainty of posterior belief under a probabilistic model given prior information and the likelihood of the evidence. Predominantly, the likelihood function is only implicitly established by a simulator, posing the need for simulation-based inference (SBI). However, the existing algorithms can yield overconfident posteriors (Hermans *et al.*, 2022), defeating the whole purpose of credibility if the uncertainty quantification is inaccurate. We propose to include a calibration term directly into the training objective of the neural model in selected amortized SBI techniques. By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation. The proposed method is not tied to any particular neural model and brings moderate computational overhead compared to the benefits it introduces. It is directly applicable to existing computational pipelines, allowing reliable black-box posterior inference. We empirically show on six benchmark problems that the proposed method achieves competitive or better results in terms of coverage and expected posterior density than the previously existing approaches. |
| Document Type: | Conference paper (peer reviewed); COAR resource type: http://purl.org/coar/resource_type/c_5794 |
| Language: | English |
| Relation: | urn:issn:1049-5258 |
| DOI: | 10.48550/arXiv.2310.13402 |
| Access URL: | https://orbi.uliege.be/handle/2268/308498 |
| Rights: | Open access (http://purl.org/coar/access_right/c_abf2; info:eu-repo/semantics/openAccess) |
| Accession Number: | edsorb.308498 |
| Database: | ORBi |
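The abstract's central idea — replacing the non-differentiable indicator in the classical empirical-coverage estimate with a smooth surrogate, so a calibration penalty can be backpropagated through the training objective — can be illustrated with a minimal NumPy sketch. This is not the paper's actual objective: the function names, the sigmoid surrogate, the temperature `tau`, and the grid of credibility levels are all illustrative assumptions.

```python
import numpy as np

def soft_coverage(ranks, alpha, tau=0.05):
    """Differentiable surrogate for empirical coverage at level alpha.

    `ranks` are posterior credibility ranks of the true parameters
    (uniform on [0, 1] for a perfectly calibrated posterior). The
    classical estimate averages the indicator 1[rank <= alpha]; here
    the indicator is relaxed to a sigmoid with temperature `tau`.
    """
    return np.mean(1.0 / (1.0 + np.exp(-(alpha - ranks) / tau)))

def calibration_penalty(ranks, n_levels=19, tau=0.05):
    """Mean squared gap between (soft) coverage and the nominal level,
    averaged over a grid of credibility levels."""
    alphas = np.linspace(0.05, 0.95, n_levels)
    covs = np.array([soft_coverage(ranks, a, tau) for a in alphas])
    return np.mean((covs - alphas) ** 2)

rng = np.random.default_rng(0)
calibrated = rng.uniform(size=10_000)           # uniform ranks: calibrated
overconfident = rng.uniform(size=10_000) ** 0.3  # ranks pushed toward 1
print(calibration_penalty(calibrated) < calibration_penalty(overconfident))
```

Because every operation is smooth, the same penalty written in an autodiff framework could be added to a neural SBI training loss and minimized by gradient descent, which is the general shape of the calibration term the abstract describes.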