APPLICATION OF AN ARCHITECTURALLY DYNAMIC NETWORK FOR SPEECH PATTERN CLASSIFICATION
Visakan Kadirkamanathan and Mahesan Niranjan
October 1992
We have previously adopted a function estimation approach to the problem of sequential learning with neural networks and derived a network that grows with increasing observations. This network is a single hidden layer Gaussian radial basis function (GaRBF) network with a single output unit. On receiving a new observation, the network either adds a new hidden unit or adapts the existing parameters with a Kalman filter.
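The following is a minimal sketch of such a growing GaRBF network with a single output. The growth criteria (an error threshold and a distance-to-nearest-centre threshold) and the simple gradient step used in place of the full Kalman filter adaptation are illustrative assumptions in the spirit of resource-allocating networks, not details taken from the paper.

```python
import numpy as np

class GrowingGaRBF:
    """Sketch of a single-output Gaussian RBF network that grows with data."""

    def __init__(self, e_thr=0.1, d_thr=0.5, width_scale=1.0, lr=0.1):
        self.centres = []          # one centre vector per hidden unit
        self.widths = []           # one Gaussian width per hidden unit
        self.weights = []          # linear output coefficients
        self.e_thr = e_thr         # assumed error threshold for growth
        self.d_thr = d_thr         # assumed distance threshold for growth
        self.width_scale = width_scale
        self.lr = lr               # step size for the stand-in adaptation

    def _phi(self, x):
        # Gaussian basis responses for input x
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2.0 * s ** 2))
                         for c, s in zip(self.centres, self.widths)])

    def predict(self, x):
        if not self.centres:
            return 0.0
        return float(self._phi(x) @ np.array(self.weights))

    def observe(self, x, y):
        x = np.asarray(x, dtype=float)
        err = y - self.predict(x)
        dist = (min(np.linalg.norm(x - c) for c in self.centres)
                if self.centres else np.inf)
        if abs(err) > self.e_thr and dist > self.d_thr:
            # Novel observation: allocate a new hidden unit centred on it
            self.centres.append(x)
            self.widths.append(self.width_scale if not np.isfinite(dist)
                               else self.width_scale * dist)
            self.weights.append(err)   # new unit absorbs the residual
        else:
            # The paper adapts existing parameters with a Kalman filter;
            # a gradient step on the linear weights stands in here.
            phi = self._phi(x)
            self.weights = list(np.array(self.weights) + self.lr * err * phi)
```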
In this paper, we extend the network to have multiple output units. By choosing to adapt only the linear coefficients (the hidden-to-output layer weights), considerable memory savings are achieved in the resulting Kalman filter compared with adapting the parameters of the basis functions as well. Results for this network are presented for the Peterson-Barney vowel classification problem, in which the observations are presented sequentially and only once. The performance is comparable to that achieved by some of the standard block estimation methods. This approach can also be viewed as an alternative way of arriving at an approximate estimate of the network complexity required to solve a given problem, eliminating the need for an a priori selection of network size.
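A sketch of the sequential update for the linear coefficients is given below. Because the outputs are linear in these weights and all output units share the same basis responses, a single M x M covariance matrix suffices, which is where the memory saving over filtering the full parameter set comes from. The symbols used here (W, P, phi, r) and the noise variance value are illustrative assumptions, not notation taken from the paper.

```python
import numpy as np

def kalman_update_linear(W, P, phi, y, r=0.1):
    """One sequential Kalman-filter update of the linear coefficients only.

    W   : (M, K) hidden-to-output weights (K output units)
    P   : (M, M) weight covariance, shared across the K outputs
    phi : (M,)   Gaussian basis responses for the current input
    y   : (K,)   observed target (e.g. a one-hot class label)
    r   : assumed observation noise variance
    """
    phi = phi.reshape(-1, 1)                 # column vector
    s = float(phi.T @ P @ phi) + r           # innovation variance
    k = (P @ phi) / s                        # (M, 1) Kalman gain
    innovation = y - (W.T @ phi).ravel()     # (K,) prediction error
    W = W + k @ innovation.reshape(1, -1)    # rank-one update of all outputs
    P = P - k @ (phi.T @ P)                  # covariance update
    return W, P
```

For a classification task such as the vowel problem described above, each observation would be presented once, with the target a one-hot encoding of the class; the predicted class is then the output unit with the largest response.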