
2017 | Original Paper | Book Chapter

10. Domain-Adversarial Training of Neural Networks

Authors: Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, Victor Lempitsky

Published in: Domain Adaptation in Computer Vision Applications

Publisher: Springer International Publishing


Abstract

We introduce a representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behavior can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new Gradient Reversal Layer. The resulting augmented architecture can be trained using standard backpropagation, and can thus be implemented with little effort using any deep learning package. We demonstrate the success of our approach for image classification, where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach on a descriptor learning task in the context of a person re-identification application.
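The Gradient Reversal Layer mentioned in the abstract acts as the identity during the forward pass and multiplies the incoming gradient by a negative factor during the backward pass, so that the feature extractor is pushed to *confuse* the domain classifier. The following is a minimal numpy sketch of that behavior; the function names and the `lam` constant are illustrative, not taken from the chapter:

```python
import numpy as np

LAM = 1.0  # adaptation strength lambda (illustrative value)

def grl_forward(x):
    """Gradient Reversal Layer: identity on the forward pass."""
    return x

def grl_backward(grad_output, lam=LAM):
    """Backward pass: flip the sign of the upstream gradient and
    scale it by lambda, so the feature extractor ascends (rather
    than descends) the domain-classification loss."""
    return -lam * grad_output

# Toy check: features pass through unchanged, gradients change sign.
features = np.array([0.5, -1.2, 3.0])
upstream_grad = np.array([0.1, 0.2, -0.3])

out = grl_forward(features)
grad = grl_backward(upstream_grad)
```

In an autodiff framework one would register this as a custom operation; the numpy version above only illustrates the forward/backward contract.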


Footnotes
1
As mentioned in [31], the same analysis holds for the multi-class setting. However, to obtain the same results when \(|Y|>2\), one should assume that \({\mathcal {H}}\) is a symmetrical hypothesis class. That is, for all \(h\in {\mathcal {H}}\) and any permutation of labels \(c:Y\rightarrow Y\), we have \(c(h)\in {\mathcal {H}}\). Note that this is the case for most commonly used neural network architectures.
 
2
For brevity of notation, we will sometimes drop the dependence of \(G_f\) on its parameters \(({\mathbf W},{\mathbf b})\) and shorten \(G_f(\mathbf{x }; {\mathbf W}, {\mathbf b})\) to \(G_f(\mathbf{x })\).
 
3
To create the source sample S, we generate a lower moon and an upper moon, labeled 0 and 1 respectively, each containing 150 examples. The target sample T is obtained by (1) generating a sample \(S'\) the same way S was generated; (2) rotating each example by \(35^\circ \); and (3) removing all the labels. Thus, T contains 300 unlabeled examples.
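The construction in this footnote can be sketched in a few lines of numpy. The moon parameterization and the noise level below are assumptions (the footnote specifies only the moon shapes, sizes, and the \(35^\circ\) rotation):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 150  # examples per moon, as in the footnote

def make_moons(n, rng, noise=0.05):
    """Source sample S: an upper moon (label 1) and a lower moon
    (label 0), each with n examples. Noise model is an assumption."""
    t = rng.uniform(0.0, np.pi, n)
    upper = np.stack([np.cos(t), np.sin(t)], axis=1)
    lower = np.stack([1.0 - np.cos(t), 0.5 - np.sin(t)], axis=1)
    X = np.concatenate([lower, upper]) + rng.normal(0.0, noise, (2 * n, 2))
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def rotate(X, degrees=35.0):
    """Rotate every example by the given angle about the origin."""
    a = np.radians(degrees)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return X @ R.T

X_src, y_src = make_moons(N, rng)   # labeled source sample S (300 points)
X_tgt, _ = make_moons(N, rng)       # a fresh sample generated like S
X_tgt = rotate(X_tgt)               # target sample T: 300 unlabeled points
```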
 
4
A 2-layer domain classifier (\(x{\rightarrow }1024{\rightarrow }1024{\rightarrow }2\)) is attached to the 256-dimensional bottleneck of fc7.
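The footnote specifies only the layer sizes of this domain classifier (\(256{\rightarrow }1024{\rightarrow }1024{\rightarrow }2\)). A numpy sketch of the forward pass, with ReLU activations and random weights assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for the 2-layer domain classifier attached
# to the 256-dimensional bottleneck: 256 -> 1024 -> 1024 -> 2.
W1 = rng.normal(0.0, 0.01, (256, 1024))
W2 = rng.normal(0.0, 0.01, (1024, 1024))
W3 = rng.normal(0.0, 0.01, (1024, 2))

def domain_classifier(x):
    """Forward pass producing 2 domain logits (source vs. target).
    ReLU activations are an assumption; the chapter gives only sizes."""
    h1 = np.maximum(0.0, x @ W1)
    h2 = np.maximum(0.0, h1 @ W2)
    return h2 @ W3

logits = domain_classifier(rng.normal(size=(8, 256)))  # batch of 8
```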
 
5
Equivalently, one can use the same \(\lambda _p\) for both feature extractor and domain classification components, but use a learning rate of \(\mu /\lambda _p\) for the latter.
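The equivalence claimed in this footnote follows from the SGD update rule: scaling a loss by \(\lambda _p\) while dividing the learning rate by \(\lambda _p\) leaves the weight update unchanged. A quick numeric check (the values are arbitrary examples):

```python
import numpy as np

mu, lam_p = 0.01, 0.1            # learning rate and adaptation factor
w = np.array([0.3, -0.7])        # some parameter vector
grad_L = np.array([1.5, 2.0])    # gradient of the unscaled domain loss

# Variant A: unscaled loss, learning rate mu.
w_a = w - mu * grad_L

# Variant B: loss scaled by lam_p, learning rate mu / lam_p.
w_b = w - (mu / lam_p) * (lam_p * grad_L)
```

Both variants produce identical updates, so the choice is purely a matter of implementation convenience.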
 
Metadata
Title
Domain-Adversarial Training of Neural Networks
Authors
Yaroslav Ganin
Evgeniya Ustinova
Hana Ajakan
Pascal Germain
Hugo Larochelle
François Laviolette
Mario Marchand
Victor Lempitsky
Copyright year
2017
DOI
https://doi.org/10.1007/978-3-319-58347-1_10
