Domain adaptation

Figure: Distinction between the usual machine learning setting and transfer learning, and the positioning of domain adaptation.

Domain adaptation is a field associated with machine learning and transfer learning. It addresses the challenge of training a model on one data distribution (the source domain) and applying it to a related but different data distribution (the target domain).

A common example is spam filtering, where a model trained on emails from one user (source domain) is adapted to handle emails for another user with significantly different patterns (target domain).

Domain adaptation techniques can also leverage unrelated data sources to improve learning. When multiple source distributions are involved, the problem extends to multi-source domain adaptation.

Domain adaptation is a specialized area within transfer learning. In domain adaptation, the source and target domains share the same feature space but differ in their data distributions. In contrast, transfer learning encompasses broader scenarios, including cases where the target domain’s feature space differs from that of the source domain(s).

Classification of domain adaptation problems

Domain adaptation setups are classified in two ways: according to the distribution shift between the domains, and according to the data available from the target domain.

Distribution shifts

Common distribution shifts are classified as follows; a formal summary is given after the list:

  • Covariate Shift occurs when the input distributions of the source and target domains differ, but the relationship between inputs and labels remains unchanged. The spam-filtering example above typically falls into this category: the distributions (patterns) of emails may differ between the two users, but an email labeled as spam for one user should be labeled as spam for the other.
  • Prior Shift (Label Shift) occurs when the label distribution differs between the source and target datasets, while the conditional distribution of features given labels remains the same. An example is a classifier of hair color in images from Italy (source domain) and Norway (target domain): the proportions of hair colors (labels) differ, but the appearance of images within each class (e.g., blond or black hair) is consistent across domains. A classifier for the Norwegian population can exploit this prior knowledge of class proportions to improve its estimates.
  • Concept Shift (Conditional Shift) refers to changes in the relationship between features and labels, even if the input distribution remains the same. For instance, in medical diagnosis, the same symptoms (inputs) may indicate entirely different diseases (labels) in different populations (domains).
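
Writing P_S and P_T for the source and target distributions over inputs x and labels y, the three cases can be summarized as follows (a compact restatement of the definitions above):

    Covariate shift:              P_S(x) ≠ P_T(x)          while  P_S(y | x) = P_T(y | x)
    Prior (label) shift:          P_S(y) ≠ P_T(y)          while  P_S(x | y) = P_T(x | y)
    Concept (conditional) shift:  P_S(y | x) ≠ P_T(y | x)  while  P_S(x) = P_T(x)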

Data available during training

A second classification is according to the available data during training:

  • Unsupervised: the learning sample contains a set of labeled source examples, a set of unlabeled source examples and a set of unlabeled target examples.
  • Semi-supervised: in this situation, we also consider a "small" set of labeled target examples.
  • Supervised: all the examples considered are assumed to be labeled.

Formalization

Let X be the input space (or description space) and let Y be the output space (or label space). The objective of a machine learning algorithm is to learn a mathematical model (a hypothesis) h : X → Y able to attach a label from Y to an example from X. This model is learned from a learning sample S = {(x_i, y_i) ∈ X × Y}_{i=1}^{m}.

Usually in supervised learning (without domain adaptation), we suppose that the examples (x_i, y_i) ∈ S are drawn i.i.d. from a distribution D_S on X × Y (unknown and fixed). The objective is then to learn h (from S) such that it commits the least possible error when labelling new examples coming from the distribution D_S.

The main difference between supervised learning and domain adaptation is that in the latter situation we study two different (but related) distributions D_S and D_T on X × Y. The domain adaptation task then consists of transferring knowledge from the source domain D_S to the target domain D_T. The goal is to learn h (from labeled or unlabeled samples coming from the two domains) such that it commits as little error as possible on the target domain D_T.
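
For instance, with the zero-one loss this objective can be written as minimizing the target risk (the analogous quantity with D_S in place of D_T is what standard supervised learning minimizes):

    R_{D_T}(h) = Pr_{(x, y) ~ D_T} [ h(x) ≠ y ]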

The major issue is the following: if a model is learned from a source domain, how well can it correctly label data coming from the target domain?

Four algorithmic principles

Reweighting algorithms

The objective is to reweight the source labeled sample such that it "looks like" the target sample (in terms of the error measure considered).
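
A common instantiation of this idea, in the spirit of the covariate-shift corrections of Shimodaira and of Huang et al., is importance weighting: each source example is weighted by an estimate of the density ratio p_T(x)/p_S(x). The sketch below estimates this ratio with a logistic-regression domain classifier; the function name and the scikit-learn-based setup are illustrative assumptions, not a reference implementation from the cited works.

    # Importance-weighting sketch (hedged): estimate w(x) ≈ p_T(x) / p_S(x) with a
    # domain classifier, then pass the weights to any learner that accepts them.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def importance_weights(X_source, X_target):
        """Train a classifier to separate source (label 0) from target (label 1)
        examples and turn its probabilities into density-ratio estimates."""
        X = np.vstack([X_source, X_target])
        domain = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
        clf = LogisticRegression(max_iter=1000).fit(X, domain)
        p_target = clf.predict_proba(X_source)[:, 1]   # P(domain = target | x)
        ratio = p_target / np.clip(1.0 - p_target, 1e-6, None)
        # Correct for the source/target sample-size imbalance.
        return ratio * (len(X_source) / len(X_target))

    # Usage: weights = importance_weights(X_src, X_tgt)
    #        LogisticRegression().fit(X_src, y_src, sample_weight=weights)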

Iterative algorithms

One adaptation method consists of iteratively "auto-labeling" the target examples. The principle is simple:

  1. a model h {\displaystyle h} is learned from the labeled examples;
  2. h {\displaystyle h} automatically labels some target examples;
  3. a new model is learned from the new labeled examples.

Note that other iterative approaches exist, but they usually require labeled target examples. A minimal sketch of the auto-labeling loop is given below.
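
The following sketch illustrates the three steps above with a confidence threshold for deciding which target examples to auto-label; the classifier choice, threshold, and number of rounds are illustrative assumptions rather than prescribed by the method.

    # Self-training ("auto-labeling") sketch; classifier, threshold and number of
    # rounds are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_train(X_src, y_src, X_tgt, rounds=5, threshold=0.9):
        X_lab, y_lab = X_src.copy(), np.asarray(y_src).copy()
        model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)      # step 1
        remaining = X_tgt.copy()
        for _ in range(rounds):
            if len(remaining) == 0:
                break
            proba = model.predict_proba(remaining)
            confident = proba.max(axis=1) >= threshold                   # step 2: auto-label
            if not confident.any():
                break
            pseudo = model.classes_[proba[confident].argmax(axis=1)]
            X_lab = np.vstack([X_lab, remaining[confident]])
            y_lab = np.concatenate([y_lab, pseudo])
            remaining = remaining[~confident]
            model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)  # step 3: retrain
        return model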

Search of a common representation space

The goal is to find or construct a common representation space for the two domains. The objective is to obtain a space in which the domains are close to each other while maintaining good performance on the source labeling task. This can be achieved through adversarial machine learning techniques, in which feature representations of samples from the different domains are encouraged to be indistinguishable.
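
The gradient reversal layer of domain-adversarial training of neural networks (Ganin et al., 2016) is one way to implement this. The minimal PyTorch sketch below shows the reversal operation and a combined label/domain loss; the network architectures and hyperparameters are placeholder assumptions.

    # Minimal sketch of a gradient reversal layer for domain-adversarial training
    # (after Ganin et al., 2016); assumes PyTorch, architectures are placeholders.
    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; multiplies gradients by -lambda backward."""
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    feature_extractor = nn.Sequential(nn.Linear(20, 64), nn.ReLU())   # placeholder encoder
    label_classifier = nn.Linear(64, 2)                               # source-label head
    domain_classifier = nn.Linear(64, 2)                              # source-vs-target head

    def dann_loss(x_src, y_src, x_tgt, lambd=1.0):
        """Label loss on source features plus domain loss on gradient-reversed features."""
        f_src = feature_extractor(x_src)
        f_tgt = feature_extractor(x_tgt)
        label_loss = nn.functional.cross_entropy(label_classifier(f_src), y_src)
        feats = torch.cat([f_src, f_tgt])
        domains = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long()
        domain_logits = domain_classifier(GradReverse.apply(feats, lambd))
        return label_loss + nn.functional.cross_entropy(domain_logits, domains)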

Hierarchical Bayesian model

The goal is to construct a hierarchical Bayesian model p(n), which is essentially a factorization model for counts n, in order to derive domain-dependent latent representations that allow both domain-specific and globally shared latent factors.
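
A generic generative sketch of this idea (an illustration under simplifying assumptions, not the specific model of the cited work) factorizes the count matrix of each domain through globally shared factors plus domain-specific ones; numpy is used here only to simulate from such a model.

    # Hedged generative sketch of a hierarchical count-factorization model with
    # shared and domain-specific latent factors; a generic illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    n_items, n_shared, n_specific, n_domains = 50, 5, 2, 2

    shared_factors = rng.gamma(1.0, 1.0, size=(n_shared, n_items))        # global factors
    for d in range(n_domains):
        specific_factors = rng.gamma(1.0, 1.0, size=(n_specific, n_items))  # per-domain factors
        loadings = rng.gamma(1.0, 1.0, size=(100, n_shared + n_specific))
        rate = loadings @ np.vstack([shared_factors, specific_factors])
        counts = rng.poisson(rate)       # observed count matrix n for domain d
        print(f"domain {d}: counts shape {counts.shape}")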

Software

Several software libraries implementing domain adaptation and transfer learning algorithms have been developed over the past decades:

  • ADAPT (Python)
  • TLlib (Python)
  • Domain-Adaptation-Toolbox (MATLAB)

References

  1. Crammer, Koby; Kearns, Michael; Wortman, Jennifer (2008). "Learning from Multiple Sources" (PDF). Journal of Machine Learning Research. 9: 1757–1774.
  2. Sun, Shiliang; Shi, Honglei; Wu, Yuanbin (July 2015). "A survey of multi-source domain adaptation". Information Fusion. 24: 84–92. doi:10.1016/j.inffus.2014.12.003. S2CID 18385140.
  3. Kouw, Wouter M.; Loog, Marco (2019-01-14), An introduction to domain adaptation and transfer learning, doi:10.48550/arXiv.1812.11806, retrieved 2024-12-22
  4. Huang, Jiayuan; Smola, Alexander J.; Gretton, Arthur; Borgwardt, Karsten M.; Schölkopf, Bernhard (2006). "Correcting Sample Selection Bias by Unlabeled Data" (PDF). Conference on Neural Information Processing Systems (NIPS). pp. 601–608.
  5. Shimodaira, Hidetoshi (2000). "Improving predictive inference under covariate shift by weighting the log-likelihood function". Journal of Statistical Planning and Inference. 90 (2): 227–244. doi:10.1016/S0378-3758(00)00115-4. S2CID 9238949.
  6. Gallego, A.J.; Calvo-Zaragoza, J.; Fisher, R.B. (2020). "Incremental Unsupervised Domain-Adversarial Training of Neural Networks" (PDF). IEEE Transactions on Neural Networks and Learning Systems. PP (11): 4864–4878. doi:10.1109/TNNLS.2020.3025954. hdl:20.500.11820/72ba0443-8a7d-4cdd-8212-38682d4f0730. PMID 33027004. S2CID 210164756.
  7. Arief-Ang, I.B.; Salim, F.D.; Hamilton, M. (2017-11-08). DA-HOC: semi-supervised domain adaptation for room occupancy prediction using CO2 sensor data. 4th ACM International Conference on Systems for Energy-Efficient Built Environments (BuildSys). Delft, Netherlands. pp. 1–10. doi:10.1145/3137133.3137146. ISBN 978-1-4503-5544-5.
  8. Arief-Ang, I.B.; Hamilton, M.; Salim, F.D. (2018-12-01). "A Scalable Room Occupancy Prediction with Transferable Time Series Decomposition of CO2 Sensor Data". ACM Transactions on Sensor Networks. 14 (3–4): 21:1–21:28. doi:10.1145/3217214. S2CID 54066723.
  9. Ganin, Yaroslav; Ustinova, Evgeniya; Ajakan, Hana; Germain, Pascal; Larochelle, Hugo; Laviolette, François; Marchand, Mario; Lempitsky, Victor (2016). "Domain-Adversarial Training of Neural Networks" (PDF). Journal of Machine Learning Research. 17: 1–35.
  10. Hajiramezanali, Ehsan; Siamak Zamani Dadaneh; Karbalayghareh, Alireza; Zhou, Mingyuan; Qian, Xiaoning (2017). "Addressing Appearance Change in Outdoor Robotics with Adversarial Domain Adaptation". arXiv:1703.01461.
  11. Hajiramezanali, Ehsan; Siamak Zamani Dadaneh; Karbalayghareh, Alireza; Zhou, Mingyuan; Qian, Xiaoning (2018). "Bayesian multi-domain learning for cancer subtype discovery from next-generation sequencing count data". arXiv:1810.09433.
  12. de Mathelin, Antoine; Deheeger, François; Richard, Guillaume; Mougeot, Mathilde; Vayatis, Nicolas (2020). "ADAPT: Awesome Domain Adaptation Python Toolbox".
  13. Long, Mingsheng; Jiang, Junguang; Fu, Bo (2020). "Transfer-learning-library".
  14. Yan, Ke (2016). "Domain adaptation toolbox".