Task-free continual generative modelling via dynamic teacher-student framework



Detailed bibliography
Published in: Expert Systems with Applications, Volume 298, p. 129873
Main authors: Ye, Fei; Bors, Adrian G.
Medium: Journal Article
Language: English
Published: Elsevier Ltd, 01.03.2026
ISSN:0957-4174
Description
Summary:
• A novel teacher-student framework for lifelong generative modelling under Task-Free Continual Learning.
• The Knowledge Incremental Assimilation Mechanism (KIAM), representing a novel Teacher expansion approach.
• A new data-free Knowledge Distillation (KD) learning approach is introduced to transfer the generative knowledge from the Teacher to a Student module in an online manner.
• We introduce a novel expert pruning approach, aiming to compress the Teacher mixture model.

Continually learning and acquiring new concepts from a dynamically changing environment is an important requirement for an artificial intelligence system. However, most existing deep learning methods fail to achieve this goal and suffer from significant performance degradation under continual learning. We propose a new unsupervised continual learning framework combining Long- and Short-Term Memory management for training deep generative models. The former memory system employs a dynamic expansion model (Teacher), while the latter uses a fixed-capacity memory buffer to store the most recent information. A novel Teacher model expansion approach, called the Knowledge Incremental Assimilation Mechanism (KIAM), is proposed. KIAM evaluates the probabilistic distance between the already accumulated information and that from the Short-Term Memory (STM). The proposed KIAM adaptively expands the Teacher's capacity and promotes knowledge diversity among the Teacher's experts. As Teacher experts, we consider deep generative models such as the Variational Autoencoder (VAE), the Generative Adversarial Network (GAN), or the Denoising Diffusion Probabilistic Model (DDPM). We also extend the KIAM-based model to a Teacher-Student framework in which we use a data-free Knowledge Distillation (KD) process to train a VAE-based Student without using any task information. The results on Task-Free Continual Learning (TFCL) benchmarks show that the proposed approach outperforms other models.
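The abstract describes two mechanisms concretely enough to sketch: the KIAM expansion rule, which compares an incoming Short-Term Memory batch against every accumulated Teacher expert and expands the mixture only when the batch is probabilistically distant from all of them, and the data-free KD step, which trains the Student only on samples drawn from the Teacher. The sketch below is an illustration under strong simplifying assumptions, not the paper's implementation: experts are reduced to diagonal Gaussians standing in for VAE/GAN/DDPM experts, the probabilistic distance is taken to be a squared 2-Wasserstein distance between those Gaussians, and the names Expert, kiam_step, datafree_kd, and the threshold parameter are all hypothetical.

```python
import numpy as np

class Expert:
    """Diagonal-Gaussian stand-in for a generative expert (VAE/GAN/DDPM).
    Tracks sufficient statistics so it can be cheaply 'fine-tuned'."""
    def __init__(self, batch):
        self.n = len(batch)
        self.s = batch.sum(axis=0)
        self.s2 = (batch ** 2).sum(axis=0)

    @property
    def mu(self):
        return self.s / self.n

    @property
    def var(self):
        return self.s2 / self.n - self.mu ** 2 + 1e-6

    def update(self, batch):
        # Stand-in for fine-tuning the selected expert on the STM batch.
        self.n += len(batch)
        self.s += batch.sum(axis=0)
        self.s2 += (batch ** 2).sum(axis=0)

def gaussian_w2sq(mu1, var1, mu2, var2):
    """Squared 2-Wasserstein distance between diagonal Gaussians,
    used here as the 'probabilistic distance' (an assumption)."""
    return float(((mu1 - mu2) ** 2).sum()
                 + ((np.sqrt(var1) - np.sqrt(var2)) ** 2).sum())

def kiam_step(experts, stm_batch, threshold):
    """KIAM-style rule: expand the Teacher only when the STM batch is
    far from every accumulated expert; otherwise assimilate it."""
    if not experts:
        return experts + [Expert(stm_batch)]
    mu_b = stm_batch.mean(axis=0)
    var_b = stm_batch.var(axis=0) + 1e-6
    dists = [gaussian_w2sq(e.mu, e.var, mu_b, var_b) for e in experts]
    k = int(np.argmin(dists))
    if dists[k] > threshold:
        experts.append(Expert(stm_batch))   # novel knowledge: expand
    else:
        experts[k].update(stm_batch)        # familiar knowledge: assimilate
    return experts

def datafree_kd(experts, n_per_expert, rng):
    """Data-free KD: the Student never sees real data, only pseudo-samples
    drawn from the Teacher mixture (the Student is again a Gaussian proxy
    for the paper's VAE-based Student)."""
    pseudo = np.concatenate([
        rng.normal(e.mu, np.sqrt(e.var), size=(n_per_expert, e.mu.size))
        for e in experts
    ])
    return Expert(pseudo)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    experts = []
    # Task-free stream: the source distribution shifts with no task labels.
    for t in range(6):
        centre = 0.0 if t < 3 else 5.0
        stm_batch = rng.normal(centre, 1.0, size=(64, 8))
        experts = kiam_step(experts, stm_batch, threshold=4.0)
    student = datafree_kd(experts, n_per_expert=256, rng=rng)
    print(f"experts: {len(experts)}, student mean ~ {student.mu.mean():.2f}")
```

In this toy stream the distribution shifts mid-way, so the expansion rule ends with two experts without ever receiving a task label; the thresholded distance check is what replaces task boundaries in the task-free setting.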
DOI:10.1016/j.eswa.2025.129873