Building a multimodal laughter database for emotion recognition
College
College of Computer Studies
Department/Unit
Software Technology
Document Type
Conference Proceeding
Source Title
Proceedings of the 8th International Conference on Language Resources and Evaluation, LREC 2012
First Page
2347
Last Page
2350
Publication Date
1-1-2012
Abstract
Laughter is a significant paralinguistic cue that is largely ignored in multimodal affect analysis. In this work, we investigate how a multimodal laughter corpus can be constructed and annotated with both discrete and dimensional emotion labels for acted and spontaneous laughter. Professional actors enacted emotions to produce the acted clips, while spontaneous laughter was collected from volunteers. Experts annotated the acted laughter clips, while volunteers with an acceptable empathy quotient score annotated the spontaneous ones. The data were pre-processed to remove environmental noise and then manually segmented from the onset of each expression to its offset. Our findings indicate that laughter carries distinct emotions, and that emotion in laughter is recognized better from audio information than from facial information. This may be explained by emotion regulation, i.e., laughter is used to suppress or regulate certain emotions. Furthermore, contextual information plays a crucial role in understanding the kind of laughter and the emotion in the enactment.
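The pre-processing the abstract describes (noise removal followed by onset-to-offset segmentation) can be illustrated with a minimal sketch. The snippet below is not the authors' pipeline; it is an assumed equivalent using the librosa and noisereduce libraries, with energy-based trimming standing in for the manual onset/offset segmentation described in the paper, and all file names and thresholds hypothetical.

```python
# Illustrative sketch only: the paper does not publish its pre-processing code.
# Assumes librosa, noisereduce, and soundfile are installed; `top_db` is an
# assumed trimming threshold, not a value from the paper.
import librosa
import noisereduce as nr
import soundfile as sf

def preprocess_clip(in_path: str, out_path: str, top_db: float = 25.0):
    """Denoise a recording and trim it to the laughter onset/offset."""
    y, sr = librosa.load(in_path, sr=None)   # load at the native sample rate
    y_clean = nr.reduce_noise(y=y, sr=sr)    # spectral-gating noise removal
    # Trim leading/trailing audio quieter than `top_db` below the peak,
    # approximating the manual onset-to-offset segmentation.
    y_seg, _ = librosa.effects.trim(y_clean, top_db=top_db)
    sf.write(out_path, y_seg, sr)
    return y_seg, sr

# Hypothetical usage on a single raw clip:
# segment, rate = preprocess_clip("raw/laugh_001.wav", "clean/laugh_001.wav")
```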
Recommended Citation
Suarez, M. T., Cu, J., & Sta. Maria, M. (2012). Building a multimodal laughter database for emotion recognition. Proceedings of the 8th International Conference on Language Resources and Evaluation, LREC 2012, 2347-2350. Retrieved from https://animorepository.dlsu.edu.ph/faculty_research/1093