A neural network model for the case representation of sentences
Master of Science in Computer Science
College of Computer Studies
Reynaldo M. Villafuerte
Arturo Kang Mun Tan
This study presents the design of a neural network model that, using neural computing principles and theories of natural language processing, assigns thematic roles to the nouns and verbs of an English sentence presented at its input buffer. These roles form the case representation of the sentence. The model also addresses lexical disambiguation and the syntactic and semantic aspects of sentence comprehension, which become inherent properties of the neural model.
The neural model is represented by a network of processing elements (PEs). Information is stored in vectors and matrices, and mathematical operations on these matrices, together with a learning algorithm, make learning and recall possible in the network.
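As a minimal sketch of how matrix operations and a learning rule support learning and recall in such a network (an illustration by this editor, not the thesis code; the vocabulary, encodings, and learning rate are invented), a single weight matrix can map a word vector to a case-role vector and be trained with an error-correcting delta rule:

```python
# Hypothetical sketch: a single-layer network of processing elements.
# Recall is a matrix-vector product; learning is the Widrow-Hoff
# (delta) rule. All words, roles, and encodings are invented here
# purely for illustration.

def predict(W, x):
    """Recall: output[j] = sum_i W[j][i] * x[i]."""
    return [sum(w_ji * x_i for w_ji, x_i in zip(row, x)) for row in W]

def train(W, x, target, lr=0.1):
    """Delta rule: W[j][i] += lr * (target[j] - y[j]) * x[i]."""
    y = predict(W, x)
    for j, row in enumerate(W):
        err = target[j] - y[j]
        for i in range(len(row)):
            row[i] += lr * err * x[i]

# Toy encoding: one-hot word vectors mapped to one-hot case roles
# (role units: [AGENT, PATIENT, INSTRUMENT]).
words = {"boy": [1, 0, 0], "ball": [0, 1, 0], "bat": [0, 0, 1]}
roles = {"boy": [1, 0, 0], "ball": [0, 1, 0], "bat": [0, 0, 1]}

W = [[0.0] * 3 for _ in range(3)]       # connectivity (weight) matrix
for _ in range(50):                     # repeated presentations
    for w in words:
        train(W, words[w], roles[w])

out = predict(W, words["boy"])
print(out.index(max(out)))              # index of the strongest role unit -> 0 (AGENT)
```

With one-hot inputs the weight matrix converges toward the word-to-role mapping itself, so recall of a previously learned word activates its role unit most strongly.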
A comparative study of neural network models available in the literature is conducted to determine which of these networks suit the specified application. The author sets several criteria: learning mode; the possible values that can be assigned to the input and output; the type of input and output patterns; the propagation, learning, and activation rules used, and the limitations of these rules; the number of processing elements; information processing and retrieval schemes; the interconnections between PEs; and the possible values used for the connectivity (weight) matrices.
From the design specifications, several neural network models are selected, and a comparison is made by simulating a prototype of the designed neural model with each of the selected networks. During simulation, a second set of criteria is defined to compare the performance of the models: learning ability, case representation of previously learned sentences, generalization, and lexical disambiguation.
Comparison results showed that all the selected neural network models (Adaline, backpropagation, and perceptron networks) can assign the correct case representations of sentences presented at their input buffers, whether previously learned or entirely new, and can resolve lexical ambiguity, though with varying degrees of accuracy.
Simulation results also showed that the backpropagation (BP) network outperformed the other models in lexical disambiguation and generalization. On the basis of these results, backpropagation is the model suggested for the case representation of sentences, and hence for natural language processing.
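To illustrate why a backpropagation network can resolve lexical ambiguity where a single-layer model may struggle, the sketch below (this editor's illustration under invented encodings, not the thesis simulation) trains a small two-layer network in which the same noun, "bat", receives a different case role depending on the verb it appears with:

```python
import math
import random

# Hypothetical sketch of lexical disambiguation with backpropagation.
# Input units encode context: [verb "hit", verb "flew", noun "bat"].
# Output units encode the noun's role: [AGENT, INSTRUMENT].
# All sentences, encodings, sizes, and rates are invented.

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(W1, W2, x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in W2]
    return h, y

def backprop(W1, W2, x, t, lr=0.5):
    h, y = forward(W1, W2, x)
    # Output-layer error terms: (y - t) * y * (1 - y)
    dy = [(y[j] - t[j]) * y[j] * (1 - y[j]) for j in range(len(y))]
    # Hidden-layer error terms, propagated back through W2
    dh = [h[i] * (1 - h[i]) * sum(dy[j] * W2[j][i] for j in range(len(dy)))
          for i in range(len(h))]
    for j, row in enumerate(W2):
        for i in range(len(row)):
            row[i] -= lr * dy[j] * h[i]
    for i, row in enumerate(W1):
        for k in range(len(row)):
            row[k] -= lr * dh[i] * x[k]

data = [
    ([1, 0, 1], [0, 1]),   # "hit ... with the bat" -> bat is INSTRUMENT
    ([0, 1, 1], [1, 0]),   # "the bat flew"         -> bat is AGENT
]

W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
W2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
for _ in range(2000):
    for x, t in data:
        backprop(W1, W2, x, t)

for x, _ in data:
    _, y = forward(W1, W2, x)
    print(["AGENT", "INSTRUMENT"][y.index(max(y))])
```

The hidden layer lets the network combine the verb and noun units into an internal representation, so an ambiguous word's role is determined by its sentence context rather than by the word alone.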
Archives, The Learning Commons, 12F Henry Sy Sr. Hall
152 p., 28 cm.
Generalao, F. B. (1990). A neural network model for the case representation of sentences. Retrieved from https://animorepository.dlsu.edu.ph/etd_masteral/1269