
Electronic data

  • unsupervised_domain_adaptation — Accepted author manuscript, 2.4 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License


Unsupervised Domain Adaptation within Deep Foundation Latent Spaces

Research output: Contribution to conference (without ISBN/ISSN) › Conference paper › peer-review

Published
Publication date: 11/05/2024
Original language: English
Event: ICLR 2024 2nd Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo) - Vienna, Austria
Duration: 7/05/2024 → 11/05/2024

Workshop

Workshop: ICLR 2024 2nd Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo)
Country/Territory: Austria
City: Vienna
Period: 7/05/24 → 11/05/24

Abstract

Vision transformer-based foundation models, such as ViT or DINOv2, aim to solve problems with little or no finetuning of features. Using a prototypical-network setting, we analyse to what extent such foundation models can solve unsupervised domain adaptation without finetuning on either the source or the target domain. Through quantitative analysis, as well as qualitative interpretation of the decision making, we demonstrate that the suggested method can improve upon existing baselines, and we also highlight the limitations of this approach that remain to be solved. The code is available at: https://github.com/lira-centre/vit_uda/
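To make the setting concrete, below is a minimal sketch of a prototypical-network classifier on top of a frozen foundation backbone, in the spirit of the abstract; it is not the authors' implementation (see the linked repository for that). The DINOv2 torch.hub checkpoint is real, but the helper functions (`extract_features`, `class_prototypes`, `nearest_prototype`) are illustrative assumptions: source-class prototypes are mean embeddings, and target samples are labelled by their nearest prototype under cosine similarity, with no finetuning on either domain.

```python
import torch
import torch.nn.functional as F

# Frozen DINOv2 backbone from torch.hub; no weights are updated on either
# domain (assumes network access to download the checkpoint).
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> torch.Tensor:
    # images: (B, 3, H, W), ImageNet-normalised, H and W multiples of 14.
    feats = backbone(images)              # (B, D) global image embeddings
    return F.normalize(feats, dim=-1)     # unit length for cosine similarity

@torch.no_grad()
def class_prototypes(feats: torch.Tensor, labels: torch.Tensor,
                     num_classes: int) -> torch.Tensor:
    # One prototype per source class: the mean of that class's embeddings.
    protos = torch.stack([feats[labels == c].mean(dim=0)
                          for c in range(num_classes)])
    return F.normalize(protos, dim=-1)    # (C, D)

@torch.no_grad()
def nearest_prototype(target_feats: torch.Tensor,
                      protos: torch.Tensor) -> torch.Tensor:
    # Label each target sample with its most similar source prototype.
    return (target_feats @ protos.T).argmax(dim=-1)   # (B,)
```

Because the backbone stays frozen, adaptation quality rests entirely on how well the foundation latent space aligns the two domains, which is the question the paper probes quantitatively and qualitatively.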