Abstract
Deep neural networks suffer from the major limitation of catastrophic forgetting of old tasks when learning new ones. In this paper we focus on class-incremental continual learning in semantic segmentation, where new categories are made available over time while previous training data is not retained. The proposed continual learning scheme shapes the latent space to reduce forgetting while improving the recognition of novel classes. Our framework is driven by three novel components, which can also be combined effortlessly on top of existing techniques. First, prototype matching enforces latent space consistency on old classes, constraining the encoder to produce similar latent representations for previously seen classes in the subsequent steps. Second, feature sparsification makes room in the latent space to accommodate novel classes. Finally, contrastive learning is employed to cluster features according to their semantics while pushing apart those of different classes. Extensive evaluation on the Pascal VOC2012 and ADE20K datasets demonstrates the effectiveness of our approach, which significantly outperforms state-of-the-art methods.
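As a loose illustration of the three components mentioned above, the snippet below sketches one possible PyTorch formulation of prototype matching, feature sparsification, and contrastive clustering losses. It is a minimal sketch under our own assumptions, not the authors' implementation: the function names and the simple pairwise contrastive form are hypothetical, and the actual formulations are detailed in the paper [1].

    # Minimal sketch of the three regularization terms (hypothetical names,
    # not the authors' code). Features are latent vectors from the encoder.
    import torch
    import torch.nn.functional as F

    def prototype_matching_loss(features, labels, old_prototypes):
        # Pull current features of old classes towards the prototypes
        # (per-class mean features) stored at the previous incremental step.
        # features: (N, D), labels: (N,), old_prototypes: {class_id: (D,)}
        loss, count = features.new_zeros(()), 0
        for cls, proto in old_prototypes.items():
            mask = labels == cls
            if mask.any():
                loss = loss + F.mse_loss(features[mask].mean(dim=0), proto)
                count += 1
        return loss / max(count, 1)

    def sparsity_loss(features):
        # Encourage sparse activations so that unused latent dimensions
        # remain available for classes added in future steps (L1 penalty).
        return features.abs().mean()

    def contrastive_clustering_loss(features, labels, margin=1.0):
        # Attract features of the same class and repel features of
        # different classes (a simple pairwise contrastive formulation).
        f = F.normalize(features, dim=1)
        dist = torch.cdist(f, f)                          # (N, N) pairwise distances
        same = labels.unsqueeze(0) == labels.unsqueeze(1)
        attract = dist[same].pow(2).mean()
        diff = dist[~same]
        repel = F.relu(margin - diff).pow(2).mean() if diff.numel() else f.new_zeros(())
        return attract + repel

In a class-incremental step, such terms would typically be added, with suitable weights, to the standard segmentation loss on the new classes.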
The full paper can be downloaded from here. The Supplementary Material can be downloaded from here.
Code
The code for training and evaluating the proposed method will be available on GitHub here.
Method
The overall architecture of the proposed approach is illustrated below.
Results
The main quantitative and qualitative results are reported below.
Contacts
For further information, please contact
lttm@dei.unipd.it
References
[1] U. Michieli and P. Zanuttigh, "Continual Semantic Segmentation via Repulsion-Attraction of Sparse and Disentangled Latent Representations", IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021