Publication Type

Journal Article

Version

Submitted Version

Publication Date

4-2023

Abstract

Open-Set Domain Adaptation (OSDA) aims to adapt a model trained on a source domain to recognition tasks in a target domain while shielding it from distractions caused by open-set classes, i.e., the classes “unknown” to the source model. Compared to standard DA, the key to OSDA lies in the separation between known and unknown classes. Existing OSDA methods often fail at this separation because they overlook the confounders (i.e., the domain gaps), so their recognition of “unknown” classes is driven not by class semantics but by domain differences (e.g., styles and contexts). We address this issue by explicitly deconfounding domain gaps (DDP) during class separation and domain adaptation in OSDA. The mechanism of DDP is to transfer domain-related styles and contexts from the target domain to the source domain. It enables the model to recognize a class as known (or unknown) based on class semantics rather than on confusion caused by spurious styles or contexts. In addition, we propose a module of ensembling multiple transformations (EMT) to produce calibrated recognition scores, i.e., reliable normality scores, for the samples in the target domain. Extensive experiments on two standard benchmarks verify that our proposed method outperforms a wide range of OSDA methods, owing to its superior ability to correctly recognize unknown classes.
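
The sketch below is a minimal illustration (not the authors' implementation) of the ensembling-multiple-transformations (EMT) idea described in the abstract: apply several transformations to a target sample, average the classifier's confidence over known classes, and treat the average as a normality score. All names here (the transformation list, the classifier interface, the 0.5 threshold) are illustrative assumptions.

```python
# Hypothetical sketch of EMT-style normality scoring; details are assumptions,
# not the method from the paper itself.
import torch
import torch.nn.functional as F
import torchvision.transforms as T

# Assumed ensemble of image transformations (illustrative choices only).
TRANSFORMS = [
    T.RandomHorizontalFlip(p=1.0),
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),
    T.ColorJitter(brightness=0.4, contrast=0.4),
]

@torch.no_grad()
def normality_score(model: torch.nn.Module, image: torch.Tensor) -> float:
    """Average max-softmax confidence over an ensemble of transformed views.

    image: a single tensor of shape (C, H, W); a higher score suggests the
    sample belongs to a known class.
    """
    scores = []
    for tf in TRANSFORMS:
        view = tf(image).unsqueeze(0)            # (1, C, H, W)
        probs = F.softmax(model(view), dim=1)    # probabilities over known classes
        scores.append(probs.max(dim=1).values.item())
    return sum(scores) / len(scores)

# Usage idea: target samples whose averaged score falls below a chosen
# threshold (e.g., 0.5) would be treated as "unknown" during adaptation.
```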

Keywords

Open-set domain adaptation, Image classification

Discipline

Artificial Intelligence and Robotics | Databases and Information Systems | Graphics and Human Computer Interfaces

Research Areas

Data Science and Engineering

Publication

Applied Intelligence

Volume

53

First Page

7862

Last Page

7875

ISSN

0924-669X

Identifier

10.1007/s10489-022-03805-9

Publisher

Springer

Copyright Owner and License

Authors

Additional URL

https://doi.org/10.1007/s10489-022-03805-9
