
Electronic data

  • Baisa22Multi-final

    Accepted author manuscript, 940 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License


Hand-Based Person Identification using Global and Part-Aware Deep Feature Representation Learning

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 2/06/2022
Host publication: 2022 11th International Conference on Image Processing Theory, Tools and Applications, IPTA 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1-6
Number of pages: 6
ISBN (electronic): 9781665469647
Original language: English
Event: 11th International Conference on Image Processing Theory, Tools and Applications, IPTA 2022 - Salzburg, Austria
Duration: 19/04/2022 - 22/04/2022

Conference

Conference: 11th International Conference on Image Processing Theory, Tools and Applications, IPTA 2022
Country/Territory: Austria
City: Salzburg
Period: 19/04/22 - 22/04/22

Publication series

Name: 2022 11th International Conference on Image Processing Theory, Tools and Applications, IPTA 2022


Abstract

In cases of serious crime, including sexual abuse, images of the hands are often the only available evidence with demonstrated potential for identification. Since this evidence is captured in uncontrolled situations, it is difficult to analyse. Because global approaches to feature comparison are limited in this setting, it is important to also consider local information. In this work, we propose hand-based person identification by learning both global and local deep feature representations. Our proposed method, Global and Part-Aware Network (GPA-Net), creates global and local branches on the conv-layer for learning robust, discriminative global and part-level features. To learn the local (part-level) features, we perform uniform partitioning on the conv-layer in both horizontal and vertical directions. We retrieve the parts by conducting a soft partition, without explicitly partitioning the images or requiring external cues such as pose estimation. Extensive evaluations on two large, multi-ethnic, publicly available hand datasets demonstrate that our proposed method significantly outperforms competing approaches.
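The abstract describes a network with one global branch and several part-level branches obtained by uniformly partitioning the final conv-layer feature map horizontally and vertically. The following is a minimal, illustrative PyTorch-style sketch of that structure, assuming a ResNet-50 backbone and pooling-based soft partitions; class and parameter names such as GPANetSketch, num_h_parts and num_v_parts are hypothetical, and this is not the authors' released implementation.

```python
import torch
import torch.nn as nn
from torchvision import models


class GPANetSketch(nn.Module):
    """Illustrative global + part-aware hand feature extractor (hypothetical sketch)."""

    def __init__(self, num_identities, num_h_parts=4, num_v_parts=4, feat_dim=256):
        super().__init__()
        # Shared convolutional backbone: ResNet-50 up to its last conv block.
        resnet = models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # N x 2048 x H x W

        # Global branch: pool the whole conv map into a single descriptor.
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        self.global_head = nn.Linear(2048, feat_dim)

        # Part branches: uniform horizontal and vertical partitions of the same conv map,
        # obtained by pooling (a soft partition of features, not of the input image).
        self.h_pool = nn.AdaptiveAvgPool2d((num_h_parts, 1))
        self.v_pool = nn.AdaptiveAvgPool2d((1, num_v_parts))
        num_parts = num_h_parts + num_v_parts
        self.part_heads = nn.ModuleList([nn.Linear(2048, feat_dim) for _ in range(num_parts)])

        # One identity classifier per branch (global + each part) for training.
        self.classifiers = nn.ModuleList(
            [nn.Linear(feat_dim, num_identities) for _ in range(1 + num_parts)]
        )

    def forward(self, x):
        fmap = self.backbone(x)                                    # N x 2048 x H x W

        feats = [self.global_head(self.global_pool(fmap).flatten(1))]

        h_parts = self.h_pool(fmap).squeeze(-1)                    # N x 2048 x num_h_parts
        v_parts = self.v_pool(fmap).squeeze(-2)                    # N x 2048 x num_v_parts
        parts = torch.cat([h_parts, v_parts], dim=2)
        feats += [head(parts[:, :, i]) for i, head in enumerate(self.part_heads)]

        logits = [clf(f) for clf, f in zip(self.classifiers, feats)]
        return feats, logits  # concatenate feats as the identity descriptor at test time
```

In such a setup, each classifier would typically be trained with a cross-entropy identity loss, and at test time the global and part-level features would be concatenated and compared with a cosine or Euclidean distance; these training details are assumptions for illustration, not taken from the paper.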

Bibliographic note

©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.