American Sign Language (ASL) to voice translation system using the Blackfin microprocessor
Date of Publication
2010
Document Type
Bachelor's Thesis
Degree Name
Bachelor of Science in Electronics and Communications Engineering
College
Gokongwei College of Engineering
Department/Unit
Electronics and Communications Engineering
Thesis Adviser
Cesar A. Llorente
Defense Panel Chair
Gerino P. Mappatao
Defense Panel Member
Lawrence Y. Materum
Roberto T. Caguinguin
Abstract/Summary
This paper discusses an approach for capturing and translating American Sign Language into voice using the Blackfin microprocessor. The instrumentation part of the system consists of flexion sensors along the fingers, the wrist, and the elbow. Accelerometers are also positioned on the forearm near the wrist and on the upper arm near the elbow. Gestures of American Sign Language are broken down into phonemes of poses and movements. The poses defined by the study are composed of at least 26 handshapes, 9 signing spaces, and 5 core palm orientations. Training was accomplished by recording several sets of the different states of each pose component. Recognition rates for the modularized states of the different pose components show that the system can recognize the different states relatively well.
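The modular recognition described in the abstract can be pictured as classifying each pose component (handshape, signing space, palm orientation) independently against its own set of trained templates built from the repeated training sets. The C sketch below illustrates that idea only; the sensor counts, template tables, distance metric, and all names are assumptions made for this example and are not taken from the thesis.

```c
/* Illustrative sketch (not the authors' code): nearest-template matching of
 * one sampled pose against trained averages for each pose component.
 * Sensor counts, template counts, and every identifier are assumptions. */
#include <stdio.h>
#include <float.h>

#define NUM_FLEX  7                       /* 5 fingers + wrist + elbow (assumed) */
#define NUM_ACC   6                       /* two 3-axis accelerometers (assumed) */
#define NUM_FEAT  (NUM_FLEX + NUM_ACC)

/* Squared Euclidean distance between a live sample and one trained template. */
static double dist2(const double *sample, const double *templ, int n)
{
    double d = 0.0;
    for (int i = 0; i < n; i++) {
        double diff = sample[i] - templ[i];
        d += diff * diff;
    }
    return d;
}

/* Return the index of the closest trained template for one pose component. */
static int classify(const double *sample,
                    const double templates[][NUM_FEAT], int count)
{
    int best = 0;
    double best_d = DBL_MAX;
    for (int k = 0; k < count; k++) {
        double d = dist2(sample, templates[k], NUM_FEAT);
        if (d < best_d) { best_d = d; best = k; }
    }
    return best;
}

int main(void)
{
    /* Trained templates would come from the repeated training sets described
     * in the abstract; here they are zero-filled placeholders. */
    static double handshape_templates[26][NUM_FEAT];
    static double space_templates[9][NUM_FEAT];
    static double orientation_templates[5][NUM_FEAT];

    double sample[NUM_FEAT] = {0};        /* one normalized sensor reading */

    int handshape   = classify(sample, handshape_templates, 26);
    int space       = classify(sample, space_templates, 9);
    int orientation = classify(sample, orientation_templates, 5);

    printf("pose = (handshape %d, signing space %d, palm orientation %d)\n",
           handshape, space, orientation);
    return 0;
}
```

Classifying each component separately keeps the template count small (26 + 9 + 5 templates instead of one classifier over every combination), which matches the modularized recognition results reported in the abstract.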
Abstract Format
html
Language
English
Format
Accession Number
TU15539
Shelf Location
Archives, The Learning Commons, 12F, Henry Sy Sr. Hall
Physical Description
159, [113] leaves : ill. ; 28 cm.
Keywords
American Sign Language--Translating
Recommended Citation
Co, E. L., Poticano, J. M., & Yao, B. C. (2010). American Sign Language (ASL) to voice translation system using the Blackfin microprocessor. Retrieved from https://animorepository.dlsu.edu.ph/etd_bachelors/5884