Nonparametric Bayesian Structure Adaptation for Continual Learning
Authors: Abhishek Kumar, Sunabha Chatterjee, Piyush Rai
Abstract: Continual learning is a learning paradigm in which machine learning models are trained on sequential or streaming tasks. Two notable directions among recent advances in continual learning with neural networks are (i) variational-Bayes-based regularization, where priors are learned from previous tasks, and (ii) learning the structure of deep networks to adapt to new tasks. So far, these two approaches have been orthogonal. We present a principled nonparametric Bayesian approach for learning the structure of feed-forward neural networks that addresses the shortcomings of both these approaches. In our model, the number of nodes in each hidden layer can automatically grow with the introduction of each new task, and inter-task transfer occurs through the overlap of the sparse subsets of weights learned by different tasks. On benchmark datasets, our model performs comparably to or better than state-of-the-art approaches, while also adaptively inferring the evolving network structure in the continual learning setting.
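To make the structural idea in the abstract concrete, here is a minimal, hypothetical sketch of a layer whose hidden width can grow when a new task arrives and where each task uses a sparse binary mask over shared weights, so that transfer happens through overlapping masks. The class and parameter names (`GrowingMaskedLayer`, `grow_by`, `sparsity`) are illustrative assumptions, not the paper's actual model or its nonparametric (Indian-Buffet-Process-style) variational inference.

```python
import numpy as np

rng = np.random.default_rng(0)

class GrowingMaskedLayer:
    """Shared weight matrix whose hidden width can grow per task; each task
    selects a sparse binary mask over hidden units (illustrative sketch only,
    not the paper's inference procedure)."""

    def __init__(self, in_dim, init_hidden):
        self.W = 0.1 * rng.standard_normal((in_dim, init_hidden))  # shared weights
        self.task_masks = {}                                       # task id -> binary mask

    def add_task(self, task_id, grow_by=0, sparsity=0.5):
        if grow_by > 0:  # grow the hidden layer when the new task needs extra capacity
            new_cols = 0.1 * rng.standard_normal((self.W.shape[0], grow_by))
            self.W = np.hstack([self.W, new_cols])
            for t, m in self.task_masks.items():   # earlier tasks do not use the new units
                self.task_masks[t] = np.concatenate([m, np.zeros(grow_by)])
        # sparse subset of hidden units used by this task; overlap with other
        # tasks' masks is what enables inter-task transfer
        mask = (rng.random(self.W.shape[1]) < sparsity).astype(float)
        self.task_masks[task_id] = mask
        return mask

    def forward(self, x, task_id):
        return np.maximum(x @ (self.W * self.task_masks[task_id]), 0.0)  # masked ReLU layer

layer = GrowingMaskedLayer(in_dim=784, init_hidden=100)
layer.add_task("task-0")
layer.add_task("task-1", grow_by=20)          # the layer grows for the new task
h = layer.forward(rng.standard_normal((4, 784)), "task-1")
print(h.shape)  # (4, 120)
```

In the paper itself, the mask sparsity and the layer growth are inferred from data under a nonparametric Bayesian prior rather than set by hand as in this sketch.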