Multimodal emotion recognition using a spontaneous Filipino emotion database
College
College of Computer Studies
Department/Unit
Computer Technology
Document Type
Conference Proceeding
Source Title
2010 3rd International Conference on Human-Centric Computing, HumanCom 2010
Publication Date
10-28-2010
Abstract
Human-computer interaction is moving toward giving computers the ability to adapt and give feedback in accordance with a user's emotion. Studies on emotion recognition show that combining face and voice signals produces higher recognition rates than using either one individually. In addition, the majority of the emotion corpora used in these systems were modeled on acted data, with actors who tend to exaggerate emotions. This study focuses on the development of a multimodal emotion recognition system that is trained on a spontaneous Filipino emotion database. The system extracts voice features and facial features that are then classified into the correct emotion label using support vector machines. Based on test results, recognizing emotions using voice alone yielded 40% accuracy; using face alone, 86%; and using a combination of voice and face, 80%. © 2010 IEEE.
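The abstract's pipeline (extract voice and facial features, fuse them, classify with a support vector machine) can be sketched as below. This is a minimal illustration, not the paper's implementation: the feature values are synthetic placeholders, and the emotion labels, feature dimensions, and fusion strategy (simple feature concatenation) are assumptions for demonstration only.

```python
# Hedged sketch of feature-level multimodal fusion with an SVM classifier.
# All data below is synthetic; the paper's actual features and the spontaneous
# Filipino emotion database are not reproduced here.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_voice, n_face = 200, 12, 20   # assumed sizes, for illustration
emotions = ["happy", "sad", "angry", "neutral"]  # hypothetical label set

# Stand-ins for extracted voice features (e.g. prosody) and facial features
voice_feats = rng.normal(size=(n_samples, n_voice))
face_feats = rng.normal(size=(n_samples, n_face))
labels = rng.choice(emotions, size=n_samples)

# Feature-level fusion: concatenate the two modalities per sample
fused = np.hstack([voice_feats, face_feats])
X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, random_state=0)

# Train one multiclass SVM on the fused feature vectors
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"fused-feature accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

Voice-only or face-only baselines, as compared in the abstract, would train the same classifier on `voice_feats` or `face_feats` alone instead of `fused`.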
Digital Object Identifier (DOI)
10.1109/HUMANCOM.2010.5563314
Recommended Citation
Lanze, M., Dy, I., Vener, I., Espinosa, L., Patrick, P., Go, V., Martin, C., Mendez, M., & Cu, J. (2010). Multimodal emotion recognition using a spontaneous Filipino emotion database. 2010 3rd International Conference on Human-Centric Computing, HumanCom 2010. https://doi.org/10.1109/HUMANCOM.2010.5563314