Diffusion-based Image Translation with Label Guidance for Domain Adaptive Semantic Segmentation

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 15/01/2024
Host publication: 2023 IEEE/CVF International Conference on Computer Vision (ICCV)
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 808-820
Number of pages: 13
ISBN (electronic): 9798350307184
Original language: English
Event: 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023 - Paris, France
Duration: 2/10/2023 – 6/10/2023

Conference

Conference: 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023
Country/Territory: France
City: Paris
Period: 2/10/23 – 6/10/23

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision
ISSN (Print): 1550-5499

Abstract

Translating images from a source domain to a target domain for learning target models is one of the most common strategies in domain adaptive semantic segmentation (DASS). However, existing methods still struggle to preserve semantically consistent local details between the original and translated images. In this work, we present an innovative approach that addresses this challenge by using source-domain labels as explicit guidance during image translation. Concretely, we formulate cross-domain image translation as a denoising diffusion process and utilize a novel Semantic Gradient Guidance (SGG) method to constrain the translation process, conditioning it on the pixel-wise source labels. Additionally, a Progressive Translation Learning (PTL) strategy is devised to enable the SGG method to work reliably across domains with large gaps. Extensive experiments demonstrate the superiority of our approach over state-of-the-art methods.
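
The label-guided reverse step described in the abstract can be pictured as a classifier-guidance-style update. The sketch below is a minimal illustration, not the authors' released code: the model handles (eps_model, seg_model), the DDIM-style update, and the cross-entropy guidance loss are assumptions standing in for the paper's Semantic Gradient Guidance.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def label_guided_sampling(eps_model, seg_model, x_T, src_label,
                              alphas_cumprod, guidance_scale=1.0):
        """Reverse diffusion from x_T, steering each step so the denoised
        estimate stays consistent with the pixel-wise source label map.
        eps_model(x, t) predicts noise; seg_model(x) returns segmentation
        logits (N, C, H, W). Both signatures are hypothetical stand-ins."""
        x_t = x_T
        for t in reversed(range(len(alphas_cumprod))):
            a_t = alphas_cumprod[t]
            a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)

            # Noise prediction at the current step.
            eps = eps_model(x_t, t)

            # Semantic guidance (illustrative): differentiate a
            # segmentation loss on the clean-image estimate w.r.t. x_t.
            with torch.enable_grad():
                x_in = x_t.detach().requires_grad_(True)
                x0_est = (x_in - (1 - a_t).sqrt() * eps_model(x_in, t)) / a_t.sqrt()
                loss = F.cross_entropy(seg_model(x0_est), src_label)
                grad = torch.autograd.grad(loss, x_in)[0]

            # Classifier-guidance-style shift of the noise prediction,
            # pushing the sample toward the source labels.
            eps = eps + guidance_scale * (1 - a_t).sqrt() * grad

            # Deterministic DDIM update to the previous timestep.
            x0_hat = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
            x_t = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps
        return x_t

At each step, the gradient of a segmentation loss on the current clean-image estimate nudges the sample toward agreement with the pixel-wise source labels, which mirrors the conditioning role the abstract assigns to SGG; the staged handling of large domain gaps (PTL) is not modelled here.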

Bibliographic note

Publisher Copyright: © 2023 IEEE.