Electronic data

  • Diff9D

    Accepted author manuscript, 8.44 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI:

Diff9D: Diffusion-Based Domain-Generalized Category-Level 9-DoF Object Pose Estimation

Research output: Contribution to Journal/Magazine › Journal article › peer-review

  • Jian Liu
  • Wei Sun
  • Hui Yang
  • Pengchao Deng
  • Chongpei Liu
  • Nicu Sebe
  • Hossein Rahmani
  • Ajmal Mian
Journal publication date: 18/03/2025
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Number of pages: 17
Pages (from-to): 1-17
Publication status: E-pub ahead of print
Early online date: 18/03/25
Original language: English

Abstract

Nine-degrees-of-freedom (9-DoF) object pose and size estimation is crucial for enabling augmented reality and robotic manipulation. Category-level methods have received extensive research attention due to their potential for generalization to intra-class unknown objects. However, these methods require manual collection and labeling of large-scale real-world training data. To address this problem, we introduce a diffusion-based paradigm for domain-generalized category-level 9-DoF object pose estimation. Our motivation is to leverage the latent generalization ability of the diffusion model to address the domain generalization challenge in object pose estimation. This entails training the model exclusively on rendered synthetic data to achieve generalization to real-world scenes. We propose an effective diffusion model to redefine 9-DoF object pose estimation from a generative perspective. Our model does not require any 3D shape priors during training or inference. By employing the Denoising Diffusion Implicit Model, we demonstrate that the reverse diffusion process can be executed in as few as 3 steps, achieving near real-time performance. Finally, we design a robotic grasping system comprising both hardware and software components. Through comprehensive experiments on two benchmark datasets and the real-world robotic system, we show that our method achieves state-of-the-art domain generalization performance. Our code will be made public at https://github.com/CNJianLiu/Diff9D.
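The few-step sampling claimed above relies on the deterministic DDIM reverse process. As a rough illustration only (not the paper's implementation), the sketch below shows how a 9-D pose vector could be recovered from noise in just 3 denoising steps; the `denoiser` here is a hypothetical stand-in for the paper's conditioned pose-denoising network, and the schedule values are illustrative assumptions.

```python
import numpy as np

def ddim_sample(denoiser, x_T, alpha_bars, timesteps):
    """Deterministic DDIM reverse process (eta = 0).

    denoiser(x, t) predicts the noise present in x at step t.
    timesteps is a short descending schedule, e.g. 3 entries.
    """
    x = x_T
    for i, t in enumerate(timesteps):
        a_t = alpha_bars[t]
        # alpha_bar of the next (earlier) step; 1.0 at the final, clean step
        a_prev = alpha_bars[timesteps[i + 1]] if i + 1 < len(timesteps) else 1.0
        eps = denoiser(x, t)
        # Predict the clean sample, then step toward it deterministically
        x0_pred = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        x = np.sqrt(a_prev) * x0_pred + np.sqrt(1.0 - a_prev) * eps
    return x

# Toy usage: a 9-D "pose" (rotation + translation + size parameters)
rng = np.random.default_rng(0)
alpha_bars = np.cumprod(1.0 - np.linspace(1e-4, 0.02, 1000))
x_T = rng.standard_normal(9)
# Hypothetical denoiser; Diff9D would use its observation-conditioned network
dummy_denoiser = lambda x, t: np.zeros_like(x)
pose = ddim_sample(dummy_denoiser, x_T, alpha_bars, timesteps=[999, 666, 333])
print(pose.shape)  # (9,)
```

Because DDIM sampling is deterministic, the step count is a free inference-time choice, which is what makes the 3-step, near-real-time setting possible without retraining.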