Precise Facial Landmark Detection by Reference Heatmap Transformer

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Authors
  • Jun Wan
  • Jun Liu
  • Jie Zhou
  • Zhihui Lai
  • Linlin Shen
  • Hang Sun
  • Ping Xiong
  • Wenwen Min
Journal publication date: 31/12/2023
Journal: IEEE Transactions on Image Processing
Volume: 32
Number of pages: 12
Pages (from-to): 1966-1977
Publication status: Published
Early online date: 29/03/23
Original language: English

Abstract

Most facial landmark detection methods predict landmarks by mapping input facial appearance features to landmark heatmaps and have achieved promising results. However, when a face image suffers from large poses, heavy occlusions, or complicated illumination, these methods can learn neither discriminative feature representations nor effective facial shape constraints, and they cannot accurately predict the value of each element in the landmark heatmap, which limits their detection accuracy. To address this problem, we propose a novel Reference Heatmap Transformer (RHT) that introduces reference heatmap information for more precise facial landmark detection. The proposed RHT consists of a Soft Transformation Module (STM) and a Hard Transformation Module (HTM), which cooperate to encourage the accurate transformation of the reference heatmap information and facial shape constraints. A Multi-Scale Feature Fusion Module (MSFFM) is then proposed to fuse the transformed heatmap features with the semantic features learned from the original face image, enhancing the feature representations used to produce more accurate target heatmaps. To the best of our knowledge, this is the first study to explore how to enhance facial landmark detection by transforming reference heatmap information. Experimental results on challenging benchmark datasets demonstrate that the proposed method outperforms state-of-the-art methods.
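
To make the data flow described in the abstract concrete, here is a minimal PyTorch-style sketch of the overall pipeline: two transformations of a reference heatmap are combined with semantic image features and fused to produce target heatmaps. This is not the authors' implementation; the STM, HTM, backbone, and MSFFM are stood in by plain convolutions, and all names, channel counts, and shapes (e.g., 68 landmarks at 256×256) are hypothetical placeholders chosen only to illustrate the structure.

```python
import torch
import torch.nn as nn


class MultiScaleFeatureFusionModule(nn.Module):
    """Hypothetical stand-in for the paper's MSFFM: fuses transformed
    reference-heatmap features with semantic features from the face image."""

    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
        )

    def forward(self, heatmap_feats, semantic_feats):
        # Concatenate the two feature streams along channels, then fuse.
        return self.fuse(torch.cat([heatmap_feats, semantic_feats], dim=1))


class ReferenceHeatmapTransformer(nn.Module):
    """Hypothetical RHT skeleton: soft and hard transformations of the
    reference heatmap, fused with image features to predict target heatmaps."""

    def __init__(self, channels: int = 64, num_landmarks: int = 68):
        super().__init__()
        # Placeholders for the paper's STM / HTM / feature backbone.
        self.soft_transform = nn.Conv2d(num_landmarks, channels, 3, padding=1)
        self.hard_transform = nn.Conv2d(num_landmarks, channels, 3, padding=1)
        self.backbone = nn.Conv2d(3, channels, 3, padding=1)
        self.msffm = MultiScaleFeatureFusionModule(channels)
        self.head = nn.Conv2d(channels, num_landmarks, kernel_size=1)

    def forward(self, image, reference_heatmap):
        # Transform the reference heatmap information two ways and combine.
        heatmap_feats = (self.soft_transform(reference_heatmap)
                         + self.hard_transform(reference_heatmap))
        # Semantic features learned from the original face image.
        semantic_feats = self.backbone(image)
        # Fuse both streams and regress the target landmark heatmaps.
        fused = self.msffm(heatmap_feats, semantic_feats)
        return self.head(fused)


if __name__ == "__main__":
    model = ReferenceHeatmapTransformer()
    img = torch.randn(1, 3, 256, 256)
    ref = torch.randn(1, 68, 256, 256)
    print(model(img, ref).shape)  # torch.Size([1, 68, 256, 256])
```

The sketch only mirrors the high-level wiring the abstract states (reference heatmap transformation, feature fusion, heatmap prediction); the actual module designs and training details are in the paper.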