
Electronic data

  • Multi-view Mouse Social Behaviour Recognition with Deep Graphic Model

    Rights statement: ©2021 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 3.74 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI:


Multi-view Mouse Social Behaviour Recognition with Deep Graphic Model

Research output: Contribution to journal › Journal article › peer-review

Published
  • Z. Jiang
  • F. Zhou
  • A. Zhao
  • X. Li
  • L. Li
  • D. Tao
  • H. Zhou
Pages (from-to): 5490-5504
Journal publication date: 28/05/2021
Journal: IEEE Transactions on Image Processing
Volume: 30
Number of pages: 15
Publication status: Published
Original language: English

Abstract

Home-cage social behaviour analysis of mice is an invaluable tool for assessing the therapeutic efficacy of treatments for neurodegenerative diseases. Despite tremendous efforts within the research community, single-camera video recordings are still mainly used for such analysis. Because of their potential to create rich descriptions of mouse social behaviours, multi-view video recordings for rodent observation are receiving increasing attention. However, identifying social behaviours from various views remains challenging due to the lack of correspondence across data sources. To address this problem, we propose a novel multi-view latent-attention and dynamic discriminative model that jointly learns view-specific and view-shared sub-structures, where the former captures the unique dynamics of each view whilst the latter encodes the interaction between the views. Furthermore, a novel multi-view latent-attention variational autoencoder model is introduced to learn the acquired features, enabling us to learn discriminative features in each view. Experimental results on the standard CRIM13 dataset and our multi-view Parkinson's Disease Mouse Behaviour (PDMB) dataset demonstrate that our proposed model outperforms other state-of-the-art technologies, has lower computational cost than other graphical models, and effectively deals with the imbalanced data problem.
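The core architectural idea in the abstract, per-view (view-specific) latents combined into a view-shared latent via attention inside a variational autoencoder, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: all names (`MultiViewVAE`, the linear attention scorer, the latent dimensions) are hypothetical, and the real model additionally includes the dynamic discriminative component described in the paper.

```python
# Hypothetical sketch of a multi-view VAE with view-specific and
# view-shared latent sub-structures, loosely following the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewVAE(nn.Module):
    def __init__(self, in_dim, z_dim, n_views):
        super().__init__()
        # One encoder per view -> view-specific (mu, logvar)
        self.encoders = nn.ModuleList(
            nn.Linear(in_dim, 2 * z_dim) for _ in range(n_views))
        # Per-view attention scores, used to weight views when
        # forming the view-shared latent
        self.attn = nn.ModuleList(
            nn.Linear(in_dim, 1) for _ in range(n_views))
        # One decoder per view, reconstructing from [view-specific ; shared]
        self.decoders = nn.ModuleList(
            nn.Linear(2 * z_dim, in_dim) for _ in range(n_views))

    @staticmethod
    def reparam(mu, logvar):
        # Standard VAE reparameterisation trick
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, views):  # views: list of (batch, in_dim) tensors
        stats = [enc(x).chunk(2, dim=-1)
                 for enc, x in zip(self.encoders, views)]
        z_spec = [self.reparam(mu, lv) for mu, lv in stats]
        # Softmax attention over views -> view-shared latent
        scores = torch.stack([a(x) for a, x in zip(self.attn, views)], dim=1)
        w = F.softmax(scores, dim=1)                      # (batch, n_views, 1)
        z_shared = (w * torch.stack(z_spec, dim=1)).sum(dim=1)
        recons = [dec(torch.cat([z, z_shared], dim=-1))
                  for dec, z in zip(self.decoders, z_spec)]
        # ELBO terms: reconstruction + KL to a standard normal prior
        kl = sum(-0.5 * (1 + lv - mu.pow(2) - lv.exp()).sum(-1).mean()
                 for mu, lv in stats)
        rec = sum(F.mse_loss(r, x) for r, x in zip(recons, views))
        return rec + kl, z_shared

# Usage: three camera views, 16-dim features, 8 clips per batch
model = MultiViewVAE(in_dim=16, z_dim=4, n_views=3)
loss, z_shared = model([torch.randn(8, 16) for _ in range(3)])
```

The attention-weighted fusion is what lets the shared latent emphasise whichever view is most informative for a given clip, while the per-view latents retain each view's own dynamics.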
