Electronic data

  • unsupervised_domain_adaptation

    Accepted author manuscript, 2.4 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Unsupervised Domain Adaptation within Deep Foundation Latent Spaces

Research output: Contribution to conference - Without ISBN/ISSN - Conference paper (peer-reviewed)

Published

Standard

Unsupervised Domain Adaptation within Deep Foundation Latent Spaces. / Kangin, Dmitry; Angelov, Plamen.
2024. Paper presented at ICLR 2024 2nd Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo), Vienna, Austria.
Harvard

Kangin, D & Angelov, P 2024, 'Unsupervised Domain Adaptation within Deep Foundation Latent Spaces', Paper presented at ICLR 2024 2nd Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo), Vienna, Austria, 7/05/24 - 11/05/24.

APA

Kangin, D., & Angelov, P. (2024). Unsupervised Domain Adaptation within Deep Foundation Latent Spaces. Paper presented at ICLR 2024 2nd Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo), Vienna, Austria.

Vancouver

Kangin D, Angelov P. Unsupervised Domain Adaptation within Deep Foundation Latent Spaces. 2024. Paper presented at ICLR 2024 2nd Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo), Vienna, Austria.

Author

Kangin, Dmitry ; Angelov, Plamen. / Unsupervised Domain Adaptation within Deep Foundation Latent Spaces. Paper presented at ICLR 2024 2nd Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo), Vienna, Austria.

Bibtex

@conference{f5eb7b2fc04d43b9a5a5e31feed65aaa,
title = "Unsupervised Domain Adaptation within Deep Foundation Latent Spaces",
abstract = "The vision transformer-based foundation models, such as ViT or Dino-V2, are aimed at solving problems with little or no finetuning of features. Using a setting of prototypical networks, we analyse to what extent such foundation models can solve unsupervised domain adaptation without finetuning over the source or target domain. Through quantitative analysis, as well as qualitative interpretations of decision making, we demonstrate that the suggested method can improve upon existing baselines, as well as showcase the limitations of such an approach that are yet to be solved. The code is available at: https://github.com/lira-centre/vit_uda/",
author = "Dmitry Kangin and Plamen Angelov",
year = "2024",
month = may,
day = "11",
language = "English",
note = "ICLR 2024 2nd Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo) ; Conference date: 07-05-2024 Through 11-05-2024",

}
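The abstract describes classifying target-domain samples with prototypical networks built over a frozen foundation-model latent space. A minimal sketch of that idea follows; the function names, the toy 2-D features, and the choice of Euclidean distance are illustrative assumptions, not the authors' exact method (see the linked repository for their implementation):

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    # Prototype = mean embedding per class, computed from labelled
    # source-domain features extracted by a frozen backbone.
    return np.stack(
        [features[labels == c].mean(axis=0) for c in range(num_classes)]
    )

def nearest_prototype(features, prototypes):
    # Assign each unlabelled target-domain feature to the class whose
    # prototype is closest in the latent space (no finetuning involved).
    dists = np.linalg.norm(
        features[:, None, :] - prototypes[None, :, :], axis=-1
    )
    return dists.argmin(axis=1)

# Toy 2-D stand-ins for latent features from a frozen ViT/Dino-V2 encoder.
src = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
src_y = np.array([0, 0, 1, 1])
protos = class_prototypes(src, src_y, num_classes=2)

tgt = np.array([[0.3, -0.1], [4.8, 5.2]])  # unlabelled target samples
print(nearest_prototype(tgt, protos).tolist())  # → [0, 1]
```

Because both domains are embedded by the same frozen encoder, adaptation here amounts only to where the prototypes are placed, which is what makes the no-finetuning setting analysable.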

RIS

TY - CONF

T1 - Unsupervised Domain Adaptation within Deep Foundation Latent Spaces

AU - Kangin, Dmitry

AU - Angelov, Plamen

PY - 2024/5/11

Y1 - 2024/5/11

N2 - The vision transformer-based foundation models, such as ViT or Dino-V2, are aimed at solving problems with little or no finetuning of features. Using a setting of prototypical networks, we analyse to what extent such foundation models can solve unsupervised domain adaptation without finetuning over the source or target domain. Through quantitative analysis, as well as qualitative interpretations of decision making, we demonstrate that the suggested method can improve upon existing baselines, as well as showcase the limitations of such an approach that are yet to be solved. The code is available at: https://github.com/lira-centre/vit_uda/

AB - The vision transformer-based foundation models, such as ViT or Dino-V2, are aimed at solving problems with little or no finetuning of features. Using a setting of prototypical networks, we analyse to what extent such foundation models can solve unsupervised domain adaptation without finetuning over the source or target domain. Through quantitative analysis, as well as qualitative interpretations of decision making, we demonstrate that the suggested method can improve upon existing baselines, as well as showcase the limitations of such an approach that are yet to be solved. The code is available at: https://github.com/lira-centre/vit_uda/

M3 - Conference paper

T2 - ICLR 2024 2nd Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo)

Y2 - 7 May 2024 through 11 May 2024

ER -