Facial Expression Tracking and Mimicking System (FacET)

Date of Publication

2008

Document Type

Bachelor's Thesis

Degree Name

Bachelor of Science in Computer Science

College

College of Computer Studies

Department/Unit

Computer Science

Thesis Adviser

Jocelynn W. Cu

Abstract/Summary

Computer vision has been rapidly gaining momentum over the years because of its invaluable applications in the fields of robotics, animation, and human-computer interaction (HCI). The ability to efficiently track the facial expressions of a person sitting in front of the camera has become a significant research goal in making animation look more realistic, in giving a robotic head human-like expressiveness, and in studying facial muscle movement. Another potential use of this technology is in low-bandwidth videoconferencing and chat applications.

The Facial Expression Tracking and Mimicking System (FacET) was developed to further research in this area. The FacET system is designed to track a person's facial features, such as the eyes, eyebrows, and mouth; recognize the facial expressions conveyed by the user; and animate these expressions using a 3D avatar. The system uses a combination of optical flow and intensity information to track selected points on the face. Principal Component Analysis (PCA) is then used to classify the tracked points into six basic facial expressions. The system can accurately classify five of the six expressions, namely happy, sad, fear, anger, and surprise, while disgust obtained poor recognition results because the other expression classes tend to overlap with it.
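The abstract describes a pipeline of optical-flow point tracking followed by PCA-based expression classification. The sketch below is a minimal illustration of that kind of pipeline, assuming a pyramidal Lucas-Kanade tracker (OpenCV) and a NumPy PCA with nearest-class-mean labeling; the library choices, function names, and parameters are assumptions for illustration, not details taken from the thesis.

# Hypothetical sketch of a FacET-style pipeline: (1) track facial feature
# points with optical flow, (2) project the tracked point coordinates with
# PCA, (3) label the projection with the nearest expression class mean.
import cv2
import numpy as np

EXPRESSIONS = ["happy", "sad", "fear", "anger", "surprise", "disgust"]

def track_points(prev_gray, next_gray, points):
    # Pyramidal Lucas-Kanade optical flow between two grayscale frames;
    # returns only the points that were tracked successfully.
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, points, None, winSize=(15, 15), maxLevel=2)
    return next_pts[status.ravel() == 1]

class PCAExpressionClassifier:
    # Projects flattened point coordinates onto principal components and
    # labels a sample with the class whose mean projection is nearest.
    def __init__(self, n_components=10):
        self.n_components = n_components

    def fit(self, X, labels):
        # X: (n_samples, 2 * n_points) array of flattened (x, y) coordinates.
        labels = np.asarray(labels)
        self.mean_ = X.mean(axis=0)
        centered = X - self.mean_
        # Principal axes from the SVD of the centered training data.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        self.components_ = vt[: self.n_components]
        projected = centered @ self.components_.T
        self.class_means_ = {
            label: projected[labels == label].mean(axis=0)
            for label in np.unique(labels)
        }
        return self

    def predict(self, x):
        # Project one flattened point configuration and pick the closest class.
        p = (x - self.mean_) @ self.components_.T
        return min(self.class_means_,
                   key=lambda label: np.linalg.norm(p - self.class_means_[label]))

In such a sketch, fit would receive labeled point configurations for the six expressions, and predict would be called on the point set tracked from each incoming frame before the result drives the 3D avatar.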

Abstract Format

html

Language

English

Format

Print

Accession Number

TU18025

Shelf Location

Archives, The Learning Commons, 12F, Henry Sy Sr. Hall

Physical Description

1 v. (various foliations) : illustrations (some colored) ; 28 cm.
