Bayesian Inference Forgetting
Authors: Shaopeng Fu, Fengxiang He, Yue Xu, Dacheng Tao
Abstract: The right to be forgotten has been legislated in many countries, but its enforcement in machine learning would incur unbearable costs: companies may need to delete whole models trained from massive resources because of a single individual's request. Existing works propose to remove the influence of the requested datums on the learned models via their influence functions, which, however, are no longer naturally well-defined in Bayesian inference. To address this problem, this paper proposes a {\it Bayesian inference forgetting} (BIF) framework to extend the applicable domain to Bayesian inference. In the BIF framework, we develop forgetting algorithms for variational inference and Markov chain Monte Carlo. We show that our algorithms can provably remove the influence of single datums on the learned models. Theoretical analysis demonstrates that our algorithms have guaranteed generalizability. Experiments with Gaussian mixture models on a synthetic dataset and Bayesian neural networks on the Fashion-MNIST dataset verify the feasibility of our methods. The source code package is available at \url{https://github.com/fshp971/BIF}.
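The abstract builds on the classic influence-function idea: instead of retraining after a deletion request, one corrects the learned parameters with a single Newton step that removes the requested datum's contribution. The sketch below illustrates only that underlying idea on a toy conjugate-Gaussian MAP problem, not the paper's BIF algorithms for variational inference or MCMC; all function names and hyperparameters (`prior_var`, `noise_var`) are illustrative assumptions.

```python
import numpy as np

def map_estimate(x, prior_var=10.0, noise_var=1.0):
    """MAP of a Gaussian mean under a N(0, prior_var) prior and
    N(mean, noise_var) likelihood; minimizer of the negative log-posterior."""
    n = len(x)
    return x.sum() / (n + noise_var / prior_var)

def forget_one(theta, x, i, prior_var=10.0, noise_var=1.0):
    """Approximately remove datum x[i] from the MAP estimate `theta`
    via a one-step Newton correction (the influence-function update).

    Negative log-posterior: L(t) = t^2/(2*prior_var) + sum_j (x_j - t)^2/(2*noise_var).
    At the full-data optimum, the leave-one-out gradient is minus the removed
    datum's gradient, so one Newton step gives theta' = theta + g / H.
    """
    n = len(x)
    g = (theta - x[i]) / noise_var              # gradient of the removed datum's loss at theta
    H = 1.0 / prior_var + (n - 1) / noise_var   # Hessian of the leave-one-out objective
    return theta + g / H
```

Because this toy objective is quadratic, the single Newton step is exact and `forget_one` matches retraining without the datum; for nonconvex models (and for the distributional objectives in Bayesian inference that the paper targets) such a step is only an approximation, which is the gap the BIF framework addresses.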