Modeling dyadic and group impressions with intermodal and interperson features

College

College of Computer Studies

Department/Unit

Computer Science

Document Type

Article

Source Title

ACM Transactions on Multimedia Computing, Communications and Applications

Volume

15

Issue

1s

Publication Date

1-1-2019

Abstract

This article proposes a novel feature-extraction framework for inferring impressions of personality traits, emergent leadership skills, communicative competence, and hiring decisions. The proposed framework extracts multimodal features describing each participant's nonverbal activities and captures intermodal and interperson relationships in interactions, i.e., how the target interactor generates nonverbal behavior while other interactors are also generating nonverbal behavior. The intermodal and interperson patterns are identified as frequently co-occurring events through clustering of multimodal sequences. The framework is applied to the SONVB corpus, an audiovisual dataset collected from dyadic job interviews, and the ELEA corpus, an audiovisual dataset collected from group meetings. We evaluate the framework on a binary classification task involving 15 impression variables from the two corpora. The experimental results show that a model trained with co-occurrence features is more accurate than previous models on 14 of the 15 variables. © 2019 Association for Computing Machinery.
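To make the abstract's pipeline concrete, the following is a minimal, hypothetical sketch of the co-occurrence feature idea: per-modality nonverbal activity streams are discretized into events via clustering, and features count how often events from different interactors co-occur in the same time frame. All names, dimensions, the synthetic data, and the classifier choice are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_frames, n_events = 500, 4

    # Synthetic stand-ins for two interactors' per-frame nonverbal
    # descriptors (e.g., prosodic and visual activity); real data would
    # come from a corpus such as SONVB or ELEA.
    target_stream = rng.normal(size=(n_frames, 6))
    other_stream = rng.normal(size=(n_frames, 6))

    # Step 1: discretize each stream into event labels by clustering frames.
    target_events = KMeans(n_clusters=n_events, n_init=10,
                           random_state=0).fit_predict(target_stream)
    other_events = KMeans(n_clusters=n_events, n_init=10,
                          random_state=0).fit_predict(other_stream)

    # Step 2: build an interperson co-occurrence histogram: how often the
    # target emits event i while the other interactor emits event j.
    cooc = np.zeros((n_events, n_events))
    for i, j in zip(target_events, other_events):
        cooc[i, j] += 1
    features = (cooc / n_frames).ravel()  # fixed-length vector per interaction

    # Step 3: such vectors, one per interaction, would feed a binary
    # classifier for an impression variable (e.g., a hiring decision).
    # Shown here with dummy data only.
    X = rng.normal(size=(40, features.size))
    y = rng.integers(0, 2, size=40)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("toy accuracy:", clf.score(X, y))

The same construction extends to intermodal patterns (co-occurrences across a single participant's modalities) and to group settings by aggregating pairwise histograms over all interactors.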

Digital Object Identifier (DOI)

10.1145/3265754

Disciplines

Computer Sciences

Keywords

Multimodal user interfaces (Computer systems); Personality
