A Multi-view CNN-based Acoustic Classification System for Automatic Animal Species Identification

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

A Multi-view CNN-based Acoustic Classification System for Automatic Animal Species Identification. / Xu, Weitao; Zhang, Xiang; Yao, Lina et al.
In: Ad Hoc Networks, Vol. 102, 102115, 01.05.2020.


Vancouver

Xu W, Zhang X, Yao L, Xue W, Wei B. A Multi-view CNN-based Acoustic Classification System for Automatic Animal Species Identification. Ad Hoc Networks. 2020 May 1;102:102115. Epub 2020 Mar 8. doi: 10.1016/j.adhoc.2020.102115

Author

Xu, Weitao ; Zhang, Xiang ; Yao, Lina et al. / A Multi-view CNN-based Acoustic Classification System for Automatic Animal Species Identification. In: Ad Hoc Networks. 2020 ; Vol. 102.

Bibtex

@article{c75f8618b6474d18b58683d6991a8eb8,
title = "A Multi-view CNN-based Acoustic Classification System for Automatic Animal Species Identification",
abstract = "Automatic identification of animal species by their vocalizations is an important and challenging task. Although many kinds of audio monitoring systems have been proposed in the literature, they suffer from several disadvantages, such as non-trivial feature selection, accuracy degradation due to environmental noise, or intensive local computation. In this paper, we propose a deep learning-based acoustic classification framework for Wireless Acoustic Sensor Networks (WASNs). The proposed framework is based on a cloud architecture, which relaxes the computational burden on the wireless sensor node. To improve the recognition accuracy, we design a multi-view Convolutional Neural Network (CNN) to extract the short-, middle-, and long-term dependencies in parallel. The evaluation on two real datasets shows that the proposed architecture achieves high accuracy and significantly outperforms traditional classification systems when environmental noise dominates the audio signal (low SNR). Moreover, we implement and deploy the proposed system on a testbed and analyse the system performance in real-world environments. Both simulation and real-world evaluation demonstrate the accuracy and robustness of the proposed acoustic classification system in distinguishing species of animals.",
keywords = "Wireless acoustic sensor network, Animal identification, Deep learning, CNN",
author = "Weitao Xu and Xiang Zhang and Lina Yao and Wanli Xue and Bo Wei",
year = "2020",
month = may,
day = "1",
doi = "10.1016/j.adhoc.2020.102115",
language = "English",
volume = "102",
pages = "102115",
journal = "Ad Hoc Networks",
issn = "1570-8705",
publisher = "Elsevier",
}

RIS

TY - JOUR

T1 - A Multi-view CNN-based Acoustic Classification System for Automatic Animal Species Identification

AU - Xu, Weitao

AU - Zhang, Xiang

AU - Yao, Lina

AU - Xue, Wanli

AU - Wei, Bo

PY - 2020/5/1

Y1 - 2020/5/1

N2 - Automatic identification of animal species by their vocalizations is an important and challenging task. Although many kinds of audio monitoring systems have been proposed in the literature, they suffer from several disadvantages, such as non-trivial feature selection, accuracy degradation due to environmental noise, or intensive local computation. In this paper, we propose a deep learning-based acoustic classification framework for Wireless Acoustic Sensor Networks (WASNs). The proposed framework is based on a cloud architecture, which relaxes the computational burden on the wireless sensor node. To improve the recognition accuracy, we design a multi-view Convolutional Neural Network (CNN) to extract the short-, middle-, and long-term dependencies in parallel. The evaluation on two real datasets shows that the proposed architecture achieves high accuracy and significantly outperforms traditional classification systems when environmental noise dominates the audio signal (low SNR). Moreover, we implement and deploy the proposed system on a testbed and analyse the system performance in real-world environments. Both simulation and real-world evaluation demonstrate the accuracy and robustness of the proposed acoustic classification system in distinguishing species of animals.

AB - Automatic identification of animal species by their vocalizations is an important and challenging task. Although many kinds of audio monitoring systems have been proposed in the literature, they suffer from several disadvantages, such as non-trivial feature selection, accuracy degradation due to environmental noise, or intensive local computation. In this paper, we propose a deep learning-based acoustic classification framework for Wireless Acoustic Sensor Networks (WASNs). The proposed framework is based on a cloud architecture, which relaxes the computational burden on the wireless sensor node. To improve the recognition accuracy, we design a multi-view Convolutional Neural Network (CNN) to extract the short-, middle-, and long-term dependencies in parallel. The evaluation on two real datasets shows that the proposed architecture achieves high accuracy and significantly outperforms traditional classification systems when environmental noise dominates the audio signal (low SNR). Moreover, we implement and deploy the proposed system on a testbed and analyse the system performance in real-world environments. Both simulation and real-world evaluation demonstrate the accuracy and robustness of the proposed acoustic classification system in distinguishing species of animals.

KW - Wireless acoustic sensor network

KW - Animal identification

KW - Deep learning

KW - CNN

U2 - 10.1016/j.adhoc.2020.102115

DO - 10.1016/j.adhoc.2020.102115

M3 - Journal article

VL - 102

JO - Ad Hoc Networks

JF - Ad Hoc Networks

SN - 1570-8705

M1 - 102115

ER -
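The abstract describes a multi-view CNN that runs short-, middle-, and long-term convolutional branches in parallel and combines their outputs. The paper's actual architecture is not reproduced here; the following is a minimal NumPy sketch of that parallel-branch idea only, with random stand-in weights — the kernel sizes, ReLU activation, and global max pooling are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def conv1d_valid(x, kernel):
    """Valid-mode 1-D cross-correlation of signal x with kernel."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def multi_view_features(signal, kernel_sizes=(3, 9, 27), rng=None):
    """Run one convolutional 'view' per kernel size in parallel over the same
    input, capturing short-, middle-, and long-term dependencies, then
    concatenate each view's global max-pooled response into one feature vector."""
    rng = np.random.default_rng(0) if rng is None else rng
    views = []
    for k in kernel_sizes:
        kernel = rng.standard_normal(k) / np.sqrt(k)     # stand-in for learned weights
        response = np.maximum(conv1d_valid(signal, kernel), 0.0)  # ReLU
        views.append(response.max())                     # global max pooling
    return np.array(views)

signal = np.sin(np.linspace(0.0, 20.0, 256))  # toy 1-D audio frame
feat = multi_view_features(signal)
print(feat.shape)  # one pooled feature per view
```

In a trained system each branch would be a stack of learned convolution filters and the concatenated vector would feed a classifier head; this sketch keeps only the structural point that all views read the same input in parallel rather than sequentially.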