Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
© 2020 Elsevier B.V.

Automatic identification of animal species by their vocalizations is an important and challenging task. Although many kinds of audio monitoring systems have been proposed in the literature, they suffer from several disadvantages, such as non-trivial feature selection, accuracy degradation due to environmental noise, or intensive local computation. In this paper, we propose a deep-learning-based acoustic classification framework for Wireless Acoustic Sensor Networks (WASNs). The proposed framework is based on a cloud architecture, which relaxes the computational burden on the wireless sensor node. To improve the recognition accuracy, we design a multi-view Convolutional Neural Network (CNN) to extract short-, middle-, and long-term dependencies in parallel. The evaluation on two real datasets shows that the proposed architecture achieves high accuracy and significantly outperforms traditional classification systems when environmental noise dominates the audio signal (low SNR). Moreover, we implement and deploy the proposed system on a testbed and analyse its performance in real-world environments. Both simulation and real-world evaluation demonstrate the accuracy and robustness of the proposed acoustic classification system in distinguishing animal species.
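The multi-view idea described in the abstract — parallel branches that capture short-, middle-, and long-term dependencies — can be illustrated with a minimal sketch. This is not the paper's actual implementation; the kernel sizes, the uniform (averaging) filters, and the global max-pooling are illustrative assumptions standing in for learned CNN filters:

```python
import numpy as np

def conv1d(x, kernel):
    """Valid-mode 1D cross-correlation of signal x with a kernel."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def multi_view_features(x, kernel_sizes=(3, 9, 27)):
    """Run parallel branches with increasing receptive fields
    (short-, middle-, long-term views) and pool each to one value.
    In a real multi-view CNN the filters are learned, not fixed."""
    views = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k           # placeholder averaging filter
        feat = conv1d(x, kernel)          # one branch's feature map
        views.append(feat.max())          # global max-pool per branch
    return np.array(views)                # concatenated multi-view vector

signal = np.sin(np.linspace(0, 8 * np.pi, 256))
features = multi_view_features(signal)
print(features.shape)
```

Each branch sees the same input but with a different receptive field, so the concatenated output summarizes the signal at several time scales at once, which is the property the abstract attributes to the multi-view design.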
Author(s): Xu W, Zhang X, Yao L, Xue W, Wei B
Publication type: Article
Publication status: Published
Journal: Ad Hoc Networks
Year: 2020
Volume: 102
Print publication date: 01/05/2020
Online publication date: 22/02/2020
Acceptance date: 21/02/2020
ISSN (print): 1570-8705
ISSN (electronic): 1570-8713
Publisher: Elsevier B.V.
URL: https://doi.org/10.1016/j.adhoc.2020.102115
DOI: 10.1016/j.adhoc.2020.102115