Date of Publication

8-2025

Document Type

Bachelor's Thesis

Degree Name

Bachelor of Science in Mathematics with Specialization in Computer Applications

Subject Categories

Mathematics

College

College of Science

Department/Unit

Mathematics and Statistics Department

Thesis Advisor

Angelyn R. Lao

Defense Panel Chair

Rafael Reno S. Cantuba

Defense Panel Member

Jose Tristan F. Reyes

Abstract/Summary

Neural networks offer exceptional predictive power, but their high computational demands pose challenges for deployment on devices with limited computing resources. Researchers have recently proposed a regularization scheme based on a combination of the ℓ2 and ℓ0 norms that increases the number of zero entries in the weight matrices, thereby simplifying computation. In this study, we extended the scheme to the Adaptive Moment Estimation (Adam) optimization algorithm to obtain lightweight models with shorter training times. To test the effect of the modification, we trained models on a breast cancer malignancy dataset using both gradient descent and Adam, with and without the regularization terms. Our findings show that integrating the regularization scheme with Adam yielded sparser neural networks and faster training than gradient descent while maintaining model performance.
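A minimal sketch of how an Adam update combined with an ℓ2–ℓ0 penalty might look, assuming the ℓ2 term enters through its gradient and the ℓ0 term through a hard-thresholding (proximal) step that zeroes small weights; the function name, hyperparameters (lam2, lam0), and threshold rule are illustrative assumptions, not the thesis's exact formulation:

import numpy as np

def adam_l2_l0_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
                    eps=1e-8, lam2=1e-4, lam0=1e-3):
    # Add the gradient of the l2 penalty (lam2/2 * ||w||_2^2) to the loss gradient.
    g = grad + lam2 * w
    # Standard Adam first- and second-moment estimates.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    # Bias-corrected moments.
    m_hat = m / (1 - beta1**t)
    v_hat = v / (1 - beta2**t)
    # Adam parameter update.
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    # l0 proximal (hard-thresholding) step: zero out entries whose magnitude
    # falls below sqrt(2 * lr * lam0), which promotes sparse weight matrices.
    w = np.where(np.abs(w) < np.sqrt(2 * lr * lam0), 0.0, w)
    return w, m, v

# Usage: initialize m = np.zeros_like(w) and v = np.zeros_like(w),
# then call once per iteration with t = 1, 2, 3, ...

The hard-threshold level sqrt(2 * lr * lam0) is the closed-form proximal operator of the ℓ0 penalty at step size lr; any scheme of this kind must handle the ℓ0 term through such a non-gradient step, since the ℓ0 norm is not differentiable.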

Abstract Format

html

Language

English

Format

Electronic

Keywords

Neural networks (Computer science); Deep learning (Machine learning)

Upload Full Text

Yes

Embargo Period

8-7-2028
