Automatic multimodal human identification for a self-improving, ambient intelligent empathic space (HuMaNRecog)
Date of Publication
2010
Document Type
Bachelor's Thesis
Degree Name
Bachelor of Science in Computer Science
College
College of Computer Studies
Department/Unit
Computer Science
Honor/Award
Awarded as best thesis, 2010
Thesis Adviser
Merlin Teodosia C. Suarez
Defense Panel Chair
Ethel Chua Joy Ong
Defense Panel Member
Paul Salvador Inventado
Jocelynn W. Cu
Abstract/Summary
This paper investigates the problem of human identification in order to aid a self-improving, ambient intelligent empathic space in providing a tailor-fitted environment for its occupant. This is particularly relevant to the empathic space, since it must be able to recognize its occupant automatically in order to retrieve that occupant's affective and behavior models. While face recognition and voice recognition are well-studied approaches to this problem, they prove brittle when applied to real-world scenarios, primarily because they are unimodal. This research adopts multimodal human identification under the constraints imposed by the empathic space. The paper presents a novel framework that addresses human identification by extending unimodal biometrics to multimodal biometric information, making use of a person's face, voice, and gait for recognition.
Evaluations were conducted on a corpus built with 15 registered occupants. The accuracy results of the system for the face, voice, and gait modalities acting independently are 81.33% and 74.02%, respectively. For fused modalities, the system yielded an overall accuracy rate of 86.67%. Although gait performance is quite low, it remains a necessary component of the system, since facial and vocal information may be unavailable in certain situations (e.g., the person enters the space with his head bent down, or enters the room with no auditory information captured). In such cases, identification can still proceed using the gait information gathered from the occupant, which is always present.
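The fallback behavior described above (identification proceeding on gait alone when face or voice is unavailable) can be illustrated with a minimal score-level fusion sketch. This is a hypothetical illustration only: the weights, score values, and the `fuse_scores` helper are assumptions for exposition and are not taken from the thesis itself.

```python
def fuse_scores(face=None, voice=None, gait=None,
                weights=(0.4, 0.35, 0.25)):
    """Combine available per-modality match scores (0..1) into one score.

    Missing modalities (None) are dropped and the remaining weights are
    renormalized, so gait alone can still drive identification.
    Weights are illustrative assumptions, not values from the thesis.
    """
    pairs = [(s, w) for s, w in zip((face, voice, gait), weights)
             if s is not None]
    if not pairs:
        raise ValueError("at least one modality score is required")
    total_w = sum(w for _, w in pairs)
    return sum(s * w for s, w in pairs) / total_w

# All three modalities captured:
full = fuse_scores(face=0.9, voice=0.8, gait=0.7)
# Occupant enters with head bent down and no audio: gait only.
gait_only = fuse_scores(gait=0.7)
```

Because the weights are renormalized over whichever modalities are present, the gait-only case simply returns the gait score, matching the abstract's point that gait keeps identification possible when the other cues are absent.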
Abstract Format
html
Language
English
Accession Number
TU18511
Shelf Location
Archives, The Learning Commons, 12F Henry Sy Sr. Hall
Physical Description
1v. various foliations : illustrations (some colored) ; 28 cm.
Recommended Citation
Cheung, O., Chuacokiong, M., Go, M., & Lee, N. (2010). Automatic multimodal human identification for a self-improving, ambient intelligent empathic space (HuMaNRecog). Retrieved from https://animorepository.dlsu.edu.ph/etd_honors/371