A real-time, multimodal, and dimensional affect recognition system
College
College of Computer Studies
Department/Unit
Computer Technology
Document Type
Conference Proceeding
Source Title
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume
7458 LNAI
First Page
241
Last Page
249
Publication Date
10-25-2012
Abstract
This study focuses on the development of a real-time automatic affect recognition system. It adopts a multimodal approach, in which affect information from two modalities is combined to arrive at an emotion label represented in a valence-arousal space. The SEMAINE Database was used to build the affect model. Prosodic and spectral features were used to predict affect from the voice, while temporal templates called Motion History Images (MHI) were used to predict affect from the face. Predictions from the face and voice models were combined using decision-level fusion. Using support vector regression (SVR), the system predicted the affect label with a root mean square error (RMSE) of 0.2899 for arousal and 0.2889 for valence. © 2012 Springer-Verlag.
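The decision-level fusion described in the abstract can be sketched as follows. This is a minimal illustration, assuming scikit-learn; the feature arrays, the equal fusion weights, and the train/test split are placeholders, not the paper's actual prosodic/spectral or MHI features or its weighting scheme.

```python
# Sketch: decision-level fusion of two unimodal SVR models for arousal.
# All data below is synthetic; only the fusion pattern mirrors the abstract.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Hypothetical stand-ins for voice (prosodic/spectral) and face (MHI) features.
X_voice = rng.normal(size=(200, 16))
X_face = rng.normal(size=(200, 32))
y_arousal = np.tanh(X_voice[:, 0] + 0.5 * X_face[:, 0])  # target in [-1, 1]

split = 150
voice_model = SVR(kernel="rbf").fit(X_voice[:split], y_arousal[:split])
face_model = SVR(kernel="rbf").fit(X_face[:split], y_arousal[:split])

# Decision-level fusion: each modality predicts independently, then the
# continuous predictions are combined (equal weights assumed here).
pred = 0.5 * voice_model.predict(X_voice[split:]) \
     + 0.5 * face_model.predict(X_face[split:])
rmse = np.sqrt(mean_squared_error(y_arousal[split:], pred))
print(f"fused arousal RMSE: {rmse:.4f}")
```

In decision-level (late) fusion, each modality is modeled separately and only the outputs are merged, which lets the system keep running when one modality drops out, in contrast to feature-level fusion, which concatenates features before a single model.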
Digital Object Identifier (DOI)
10.1007/978-3-642-32695-0_23
Recommended Citation
Lee, N., Cu, J., & Suarez, M. (2012). A real-time, multimodal, and dimensional affect recognition system. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 7458 LNAI, 241-249. https://doi.org/10.1007/978-3-642-32695-0_23