Abstract
Deep learning frameworks have enabled remarkable advances in semantic segmentation, but the data-hungry nature of convolutional networks has rapidly raised the demand for adaptation techniques capable of transferring knowledge learned on label-abundant domains to unlabeled ones. In this paper we propose an effective Unsupervised Domain Adaptation (UDA) strategy, based on a feature clustering method that captures the different semantic modes of the feature distribution and groups features of the same class into tight and well-separated clusters. Furthermore, we introduce two novel learning objectives to enhance the discriminative clustering performance: an orthogonality loss spaces out individual representations by forcing them to be orthogonal, while a sparsity loss reduces the number of active feature channels used by each class. The joint effect of these modules is to regularize the structure of the feature space. Extensive evaluations in the synthetic-to-real scenario show that our approach achieves state-of-the-art performance.
Code
The code can be found here.
Method
The method is illustrated in Figure 1.
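While the full formulation is given in the paper [1], the snippet below is a minimal, illustrative PyTorch sketch of the three feature-space objectives described in the abstract (clustering, orthogonality and sparsity). All function names, tensor shapes and the exact loss forms are assumptions made for illustration and do not reproduce the paper's equations.

```python
import torch
import torch.nn.functional as F


def clustering_loss(features, labels, centroids):
    """Pull each per-pixel embedding toward the centroid of its class.

    features: (N, C) embeddings, labels: (N,) class indices,
    centroids: (K, C) class centroids. Names and shapes are illustrative.
    """
    return F.mse_loss(features, centroids[labels])


def orthogonality_loss(features, labels):
    """Penalize non-zero cosine similarity between features of different
    classes, pushing class representations toward orthogonal directions."""
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t()                                  # (N, N) cosine similarities
    diff_class = labels.unsqueeze(0) != labels.unsqueeze(1)  # True for cross-class pairs
    return (sim[diff_class] ** 2).mean()


def sparsity_loss(features, labels, num_classes):
    """Encourage each class to rely on few active feature channels: the
    L1/L2 ratio of the class-wise mean activation profile is smallest
    when the profile is sparse, so minimizing it promotes sparsity."""
    loss = features.new_zeros(())
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            profile = features[mask].abs().mean(dim=0)  # (C,) mean activation per channel
            loss = loss + profile.norm(p=1) / (profile.norm(p=2) + 1e-8)
    return loss / num_classes
```

In a training loop, these regularization terms would typically be weighted and added to the supervised cross-entropy loss computed on the labeled source domain.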
Results
The main quantitative and qualitative results are reported below.
Contacts
For further information on the method, please contact
lttm@dei.unipd.it
References
[1] M. Toldo, U. Michieli and P. Zanuttigh, "Unsupervised Domain Adaptation in Semantic Segmentation via Orthogonal and Clustered Embeddings", in IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021.