Continual Learning of AI

This chapter provides a comprehensive review of continual learning -- the ability of neural networks to learn from non-stationary data streams while retaining previously acquired knowledge. We cover the problem formulation (including the task-incremental, class-incremental, and domain-incremental settings), a five-family taxonomy of approaches, and deep dives into regularization-based methods (EWC, SI, LwF, MAS, and bias correction), replay-based methods (experience replay, gradient-constrained replay, distillation-enhanced replay, and generative replay), architecture-based methods (parameter isolation, dynamic expansion, and modular networks), meta-continual learning (OML, ANML, La-MAML), and the emerging paradigm of prompt-based continual learning (L2P, DualPrompt, CODA-Prompt). We also examine the rapidly evolving intersection of continual learning with large language models -- including continual pre-training, knowledge editing, parameter-efficient methods, and model merging. We discuss standard benchmarks, evaluation protocols, and their pitfalls, and conclude with open problems and connections to other chapters in this survey.