Homeostasis-Inspired Continual Learning
Learning continually without forgetting is one of the ultimate goals of artificial intelligence (AI). However, without sufficient resources, forgetting previously acquired knowledge is inevitable. This raises a fundamental question: how can we control what knowledge, and how much of it, to forget in order to improve overall accuracy? To answer this question, we propose a novel trainable network termed the homeostatic meta-model. The proposed neuromorphic framework is a natural extension of the conventional concept of Synaptic Plasticity (SP), further optimizing the accuracy of continual learning. Prior work on SP and its variants identifies important network parameters for structural regularization but pays little attention to the intensity of regularization (IoR). In contrast, this work reveals that carefully selecting the IoR during continual training can remarkably improve task accuracy. The proposed method balances the IoR between newly learned knowledge and previously acquired knowledge, rather than biasing it toward a specific task or distributing it evenly. To obtain effective and optimal IoRs under real-time continual-learning conditions, we propose a homeostasis-inspired meta-learning architecture. The proposed meta-model automatically controls the IoRs by capturing important parameters from previous tasks and the current learning direction. We provide experimental results on various types of continual-learning tasks showing that the proposed method notably outperforms conventional methods in terms of learning accuracy and knowledge forgetting. We also show that the proposed method is more stable and robust than existing SP-based methods. Furthermore, the IoR generated by our model is, interestingly, proactively kept within a specific range, resembling the negative-feedback mechanism of homeostasis in synapses.
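To make the role of the IoR concrete, the following is a minimal, purely illustrative sketch (not the paper's actual meta-model) of an SP-style quadratic penalty, in the spirit of importance-weighted regularization, whose strength is set adaptively rather than fixed. All names (`sp_penalty`, `homeostatic_ior`, `fisher`, the clipping range) are hypothetical and chosen only for illustration.

```python
import numpy as np

def sp_penalty(theta, theta_star, fisher, ior):
    """SP-style quadratic structural-regularization term.

    theta      : current network parameters
    theta_star : parameters learned on the previous task
    fisher     : per-parameter importance estimates
    ior        : intensity of regularization (scalar weight)
    """
    return ior * np.sum(fisher * (theta - theta_star) ** 2)

def homeostatic_ior(grad_new, grad_penalty, lo=0.1, hi=10.0):
    """Toy stand-in for the meta-model: choose an IoR that balances the
    magnitudes of the new-task gradient and the penalty gradient, then
    clip it to a fixed range, mimicking the negative-feedback behaviour
    the abstract describes."""
    g_new = np.linalg.norm(grad_new)
    g_pen = np.linalg.norm(grad_penalty) + 1e-12  # avoid division by zero
    return float(np.clip(g_new / g_pen, lo, hi))

# Tiny two-parameter example.
theta = np.array([1.0, -0.5])
theta_star = np.array([0.8, -0.2])
fisher = np.array([2.0, 0.5])

# Gradient of the (unweighted) penalty: 2 * fisher * (theta - theta_star).
grad_penalty = 2 * fisher * (theta - theta_star)
grad_new = np.array([0.3, 0.1])  # stand-in gradient from the current task

ior = homeostatic_ior(grad_new, grad_penalty)
loss_reg = sp_penalty(theta, theta_star, fisher, ior)
```

A fixed `ior` biases training toward either retaining old knowledge (large values) or fitting the new task (small values); the sketch instead ties it to the relative gradient magnitudes and bounds it, which is the balancing behaviour the abstract attributes to the homeostatic meta-model.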