Recognizing Filipino sign language video sequences using deep learning techniques

Date of Publication

4-2023

Document Type

Master's Thesis

Degree Name

Master of Engineering major in Electronics and Communications Engineering

College

Gokongwei College of Engineering

Department/Unit

Electronics and Communications Engineering

Thesis Advisor

Melvin K. Cabatuan

Defense Panel Chair

Jose Martin Maningo

Defense Panel Member

Edwin Sybingco
Michael Manguerra

Abstract/Summary

Machine intelligence is widely applied to sign language, especially in the field of sign language recognition (SLR). A model that performs well on SLR can help bridge the communication gap between the deaf and the hearing, and it can serve many applications such as sign language education. The overarching issue in this field is the chronic lack of large, high-quality datasets, which holds back SLR research, especially on less-studied sign languages such as Filipino Sign Language (FSL). In addition, real-world applications are rarely included in FSL research. With this in mind, this study focused on creating a high-quality FSL dataset, building an accurate recognition model, and developing an application to address the current gaps in FSL research. A dataset of more than 2,000 video clips covering 105 different signs was collected; the signs were carefully chosen by an expert to ensure that they are useful introductory signs. Joint locations were extracted from each video frame using MediaPipe, and graph convolutional networks (GCNs) and gated recurrent unit (GRU) networks were then used to classify the signs. Using the best model, an FSL e-learning desktop application was developed. The top model, MediaPipe-GRU, achieved 100% top-5 accuracy on the dataset.
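The pipeline described above (per-frame joint locations fed to a recurrent classifier) can be sketched roughly as follows. This is a minimal illustration, not the thesis implementation: the input is random data standing in for MediaPipe landmark output, and the single-layer GRU cell, its randomly initialized weights, and the class count are hypothetical stand-ins for the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
T, J, H, C = 30, 21, 16, 105  # frames, joints (e.g. one hand), hidden size, sign classes
D = J * 3                     # each joint contributes (x, y, z)

# Stand-in for MediaPipe output: one flattened landmark vector per video frame.
frames = rng.standard_normal((T, D))

# Randomly initialized GRU parameters: update gate z, reset gate r, candidate state.
Wz, Uz, bz = rng.standard_normal((H, D)) * 0.1, rng.standard_normal((H, H)) * 0.1, np.zeros(H)
Wr, Ur, br = rng.standard_normal((H, D)) * 0.1, rng.standard_normal((H, H)) * 0.1, np.zeros(H)
Wh, Uh, bh = rng.standard_normal((H, D)) * 0.1, rng.standard_normal((H, H)) * 0.1, np.zeros(H)
Wo = rng.standard_normal((C, H)) * 0.1    # linear classification head

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Run one GRU step per frame, carrying the hidden state across the clip.
h = np.zeros(H)
for x in frames:
    z = sigmoid(Wz @ x + Uz @ h + bz)          # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)          # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h) + bh)
    h = (1 - z) * h + z * h_cand

logits = Wo @ h
top5 = np.argsort(logits)[-5:][::-1]  # indices of the 5 highest-scoring signs
```

The final hidden state summarizes the whole clip, and top-5 accuracy (the metric reported for MediaPipe-GRU) checks whether the true sign appears among the five highest-scoring classes.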

Abstract Format

html

Language

English

Keywords

Philippine Sign Language—Data processing; Sign language—Data processing; Artificial intelligence

Embargo Period

12-20-2022
