Audiovisual laughter segmentation

Date of Publication

2011

Document Type

Bachelor's Thesis

Degree Name

Bachelor of Science in Computer Science

Subject Categories

Computer Sciences

College

College of Computer Studies

Department/Unit

Computer Science

Thesis Adviser

Merlin Suarez

Defense Panel Members

Arnulfo Azcarraga

Gregory Cu

Abstract/Summary

Non-linguistic signals, specifically laughter, offer a wealth of information, such as cues to a person's emotional state and to topic changes in meetings. The many benefits of laughter, ranging from identifying activities to improving speech-to-text accuracy, have attracted the interest of many researchers. Laughter detection is an important area of interest in the fields of Affective Computing and Human-Computer Interaction because laughter is a highly variable signal that can express a spectrum of emotions, which makes its automatic detection a challenging but interesting task. Laughter segmentation using only visual cues disregards audio features such as pitch and formants. This paper presents a prototype that automatically segments laughter from videos of meetings. A model-based approach is used as the segmentation algorithm: an SVM trained on visual features (such as facial points, shoulder points, head points, and head angle) classifies instances. The classifier achieved a best accuracy of 86%. The prototype is able to segment laughter in videos accurately; however, some errors are still encountered.
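
As a rough illustration of the approach described above (a sketch, not the thesis's actual implementation), the Python snippet below trains an SVM on per-window visual feature vectors and merges consecutive windows predicted as laughter into segments; the feature layout, window length, parameters, and data are all placeholder assumptions.

# Minimal sketch of SVM-based laughter segmentation over visual features.
# Feature layout, window length, and data are hypothetical placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: one row per video window; columns could hold flattened facial points,
# shoulder points, head points, and head angle. y: 1 = laughter, 0 = other.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))      # placeholder feature matrix
y = rng.integers(0, 2, size=200)    # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

def segment_laughter(window_features, window_length_s=0.5):
    """Group consecutive windows predicted as laughter into (start, end) times."""
    preds = clf.predict(window_features)
    segments, start = [], None
    for i, p in enumerate(preds):
        if p == 1 and start is None:
            start = i
        elif p == 0 and start is not None:
            segments.append((start * window_length_s, i * window_length_s))
            start = None
    if start is not None:
        segments.append((start * window_length_s, len(preds) * window_length_s))
    return segments

print(segment_laughter(rng.normal(size=(20, 40))))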

Abstract Format

html

Language

English

Format

Print

Accession Number

TU18524

Shelf Location

Archives, The Learning Commons, 12F, Henry Sy Sr. Hall

Physical Description

1v. various foliations : illustrations (some colored) ; 28 cm.

Keywords

Laughter; Nonverbal communication; Image processing--Digital techniques
