Continual learning with a predictive coding based classifier
| Published in: | Applied Soft Computing, Vol. 186, p. 114265 |
|---|---|
| Main Authors: | , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier B.V., 01.01.2026 |
| Subjects: | |
| ISSN: | 1568-4946 |
| Summary: | Continual Learning (CL) is the problem of learning multiple tasks sequentially. Several effective CL algorithms using Deep Neural Networks (DNNs) have been developed. However, the problem of reducing the computational requirements of CL algorithms has not received enough attention. Computationally efficient training methods are important for CL because these models potentially undergo training throughout their lifetime, and there is a need for efficient CL methods that can run on a broad range of devices. Predictive Coding (PC) is a hypothesis about information processing in the brain. Its underlying principle is that a PC model predicts the activity of adjacent layers and updates its parameters in parallel using local errors between predicted and actual neuron activities, potentially improving the efficiency of CL. This paper proposes a new Continual Learning method using a Predictive Coding based Classifier (CLPC2). CLPC2 trains a PC-based classifier with replay samples generated using a Variational Autoencoder (VAE) or Diffusion (Dif) model. The performance of CLPC2 is evaluated in three CL scenarios: Class Incremental Learning (Class-IL), Domain Incremental Learning (Domain-IL), and Task Incremental Learning (Task-IL), using the split MNIST, CIFAR-10, and CIFAR-100 datasets. Compared with existing CL methods, CLPC2 achieves higher average classification accuracy in the Class-IL and Domain-IL scenarios on the MNIST and CIFAR-10 datasets, while obtaining comparable performance on the more challenging CIFAR-100 dataset. The key advantage of the proposed method is its ability to train the classifier using locally computed errors. |
|---|---|

Highlights:

- A novel, computationally efficient Continual Learning (CL) method, termed Continual Learning with a Predictive Coding based Classifier (CLPC2), which enables incremental learning of new tasks using a generative replay strategy while supporting parallel learning.
- The application of the proposed algorithm to two different architectures, namely Fully Connected Networks (CLPC2-FCN) and Convolutional Neural Networks (CLPC2-CNN).
- A comprehensive evaluation and comparison of CLPC2's performance with existing CL methods on the MNIST, CIFAR-10, and CIFAR-100 datasets in three different CL scenarios. The results show that CLPC2 achieves higher average classification accuracy in the challenging Class-IL and Domain-IL scenarios on MNIST and CIFAR-10, while also offering benefits such as support for parallel learning.

| DOI: | 10.1016/j.asoc.2025.114265 |
|---|---|
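The summary's central mechanism, a layer updated purely from the local error between its top-down prediction and the actual activity below, can be sketched in a few lines. This is a minimal, hypothetical illustration of a generic predictive-coding weight update, not the paper's CLPC2 algorithm; the function names, learning rate, and toy dimensions are all assumptions for the example.

```python
# Illustrative sketch of a predictive-coding (PC) local update: a layer's
# weights W map the activity above (x_above) to a prediction of the
# activity below (x_below), and W is corrected using only the local
# prediction error -- no global backpropagated gradient is needed, which
# is what allows layers to update in parallel.
import math

def matvec(W, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def pc_local_step(x_below, x_above, W, lr=0.1):
    """One local PC update for a single pair of adjacent layers."""
    pred = matvec(W, x_above)                      # top-down prediction
    err = [p - a for p, a in zip(pred, x_below)]   # local prediction error
    # Hebbian-style correction: dW[i][j] = -lr * err[i] * x_above[j]
    W_new = [[w - lr * e_i * x_j for w, x_j in zip(row, x_above)]
             for row, e_i in zip(W, err)]
    return W_new, err

# Toy usage: the prediction error shrinks as the layer learns to predict.
x_below = [1.0, 0.0]        # actual activity of the lower layer
x_above = [0.5, 0.5]        # activity of the upper layer
W = [[0.0, 0.0], [0.0, 0.0]]
errs = []
for _ in range(50):
    W, err = pc_local_step(x_below, x_above, W)
    errs.append(math.sqrt(sum(e * e for e in err)))
print(errs[0], errs[-1])    # error norm decreases across updates
```

Because each `pc_local_step` reads and writes only one layer pair, every pair in a deep stack could run this update concurrently, which is the efficiency argument the summary makes for PC-based training.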