Date of Publication


Document Type


Degree Name

Doctor of Philosophy in Electronics and Communications Engineering

Subject Categories

Electrical and Computer Engineering


College

Gokongwei College of Engineering


Department

Electronics and Communications Engineering

Thesis Adviser

Lawrence Y. Materum

Defense Panel Chair

Argel A. Bandala

Defense Panel Member

Aaron Don M. Africa
Gerino P. Mappatao
Celso B. Co
Jennifer C. Dela Cruz


Neural networks and clustering are two of the many machine learning algorithms used for artificial intelligence. The conventional neural network is made up of numerous fully connected layers of neurons. Convolutional Neural Networks (CNNs), on the other hand, have become a better alternative to the conventional neural network due to their ability to provide a better guarantee of training success. In designing a hardware model for a CNN, emphasis is placed not only on the hardware requirements for the size and number of processing layers but also on those needed by the weight values. In this research, a hardware model design for a CNN architecture is presented. The hardware model is capable of training by itself without the aid of any external processor or co-processor. A hardware model design for the K-means clustering algorithm is also presented. The K-means clustering model is intended to compress the weights of the CNN in order to reduce the hardware requirements for implementation. The CNN model and the K-means clustering model are then integrated to develop a CNN architecture that can perform weight compression by itself after training. The two hardware models are synthesized and implemented using a Xilinx Virtex-5 library. A small-scale CNN for pattern recognition shows that the network can still recognize the input patterns at a compression rate of up to 80%. Another small-scale CNN for selected digit-image recognition shows 100% recognition of trained inputs at up to 60% compression. The integrated design, when synthesized using the Virtex-5 library, consumes 29,163 slice registers, 28,896 flip-flops, and 55,645 look-up tables.
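The thesis realizes K-means weight compression in hardware; the underlying idea can be illustrated in software. The sketch below (a minimal assumption-laden illustration in Python, not the thesis's RTL design) clusters a network's scalar weights into k shared centroids with Lloyd's algorithm, then replaces each weight with its nearest centroid, so only k distinct values plus per-weight indices need to be stored. The deterministic initialization and the toy weight list are choices made here for illustration only.

```python
def kmeans_1d(weights, k, iters=20):
    """Cluster scalar weights into k centroids (Lloyd's algorithm, k >= 2)."""
    srt = sorted(weights)
    # Deterministic init: centroids spread evenly across the sorted range.
    centroids = [srt[i * (len(srt) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        buckets = [[] for _ in range(k)]
        for w in weights:
            j = min(range(k), key=lambda i: abs(w - centroids[i]))
            buckets[j].append(w)
        # Recompute each centroid as the mean of its bucket.
        centroids = [sum(b) / len(b) if b else centroids[i]
                     for i, b in enumerate(buckets)]
    return centroids

def compress(weights, k):
    """Replace every weight with its nearest shared centroid."""
    centroids = kmeans_1d(weights, k)
    return [min(centroids, key=lambda c: abs(w - c)) for w in weights]

# Toy example: eight weights collapse to three shared values.
weights = [0.11, 0.12, 0.49, 0.52, 0.90, 0.88, 0.13, 0.51]
compressed = compress(weights, k=3)
```

After compression the weight memory holds only the centroid table and short indices per weight, which is the hardware saving the abstract refers to; the recognition-rate results quantify how much of this quantization the trained networks tolerate.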

Abstract Format






Accession Number



Keywords

Neural networks (Computer science); Field programmable gate arrays

Upload Full Text


Embargo Period