Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
Abstract: Developing audio processing tools for extracting social-audio features is just as important as analyzing conscious content for determining human behavior. Psychologists speculate that these features may have evolved as a way to establish hierarchy and group cohesion, because they function as a subconscious discussion about relationships, resources, risks, and rewards. In this paper, we present the design, implementation, and deployment of a wearable computing platform capable of automatically extracting and analyzing social-audio signals. Unlike conventional research, which concentrates on data recorded under constrained conditions, our data were recorded in completely natural and unpredictable situations. In particular, we benchmarked a set of integrated algorithms (sound and speech detection and classification, sound level meter calculation, voice and nonvoice segmentation, speaker segmentation, and prediction) to obtain speech and environmental social-audio signals using an in-house built wearable device. In addition, we derive a novel method that incorporates a recently published audio feature extraction technique based on power-normalized cepstral coefficients (PNCC) and gap statistics for speaker segmentation and prediction. The performance of the proposed integrated platform is robust to natural and unpredictable situations. Experiments show that the method segments natural speech with 89.6% accuracy.
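The record does not include the paper's implementation details, but the gap-statistic step it names is a standard cluster-number estimator (Tibshirani et al., 2001) that could plausibly select the number of speakers from per-frame feature vectors such as PNCCs. The following is a minimal illustrative sketch only, not the authors' code: it uses Python with numpy and scikit-learn (libraries the paper does not mention), assumes PNCC features are computed elsewhere, and all function names and parameters are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def within_dispersion(X, k, seed=0):
    """Within-cluster sum of squares W_k from a k-means fit of X."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    return km.inertia_

def estimate_num_speakers(X, k_max=8, n_refs=10, seed=0):
    """Pick the number of clusters (speakers) via the gap statistic:
    Gap(k) = E*[log(W_k)] - log(W_k), where the expectation is taken
    over uniform reference samples drawn from the data's bounding box.
    X: (n_frames, n_features) array, e.g. per-frame PNCC vectors."""
    rng = np.random.default_rng(seed)
    mins, maxs = X.min(axis=0), X.max(axis=0)
    gaps, sks = [], []
    for k in range(1, k_max + 1):
        log_wk = np.log(within_dispersion(X, k, seed))
        # Reference dispersions under the null (no cluster structure).
        ref_logs = np.array([
            np.log(within_dispersion(
                rng.uniform(mins, maxs, size=X.shape), k, seed))
            for _ in range(n_refs)
        ])
        gaps.append(ref_logs.mean() - log_wk)
        # Standard error of the reference dispersions, with the usual
        # sqrt(1 + 1/B) correction for simulation noise.
        sks.append(ref_logs.std() * np.sqrt(1.0 + 1.0 / n_refs))
    # Smallest k satisfying Gap(k) >= Gap(k+1) - s_{k+1}.
    for k in range(1, k_max):
        if gaps[k - 1] >= gaps[k] - sks[k]:
            return k
    return k_max
```

In a diarization-style pipeline, the returned cluster count would then seed a clustering of the voiced frames so that each cluster can be treated as one speaker; the paper's actual segmentation and prediction method may differ.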
Author(s): Gao B, Woo WL
Publication type: Article
Publication status: Published
Journal: IEEE Transactions on Human-Machine Systems
Year: 2014
Volume: 44
Issue: 2
Pages: 222-233
Print publication date: 01/04/2014
ISSN (print): 2168-2291
ISSN (electronic): 2168-2305
Publisher: IEEE
URL: http://dx.doi.org/10.1109/THMS.2014.2300698
DOI: 10.1109/THMS.2014.2300698