Tuesday, September 8, 2009

一番大切なものは? ("What is the most important thing?") (Activity # 16)

Neural networks are the third classification method for pattern recognition explored in this course, and the subject of Activity 16. Unlike LDA, neural networks don't need hand-crafted heuristics and recognition rules for classification; instead they rely on learning, a capability they loosely imitate from the neurons in our brain.

In any case, the activity makes use of the ANN toolbox (ANN_toolbox_0.4.2, found on Scilab's Toolboxes Center), which makes this more of a plug-and-play exercise.

So how do neural networks work? Each input connection to an artificial neuron has its own weight (its synaptic strength), and each input xi is multiplied by its corresponding weight. The neuron sums these weighted inputs, passes the sum through an activation function g, and the result z is then fired as an output to other neurons.
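The weighted-sum-plus-activation idea above can be sketched in a few lines. This is a minimal illustration in Python (the activity itself used Scilab's ANN toolbox); the weights and inputs are made-up numbers:

```python
import math

def sigmoid(a):
    # Logistic activation: squashes any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-a))

def neuron(inputs, weights, g):
    # A single artificial neuron: multiply each input x_i by its
    # weight w_i, sum, and pass the sum through the activation g.
    return g(sum(w * x for w, x in zip(weights, inputs)))

# Example: two inputs with (arbitrary) weights 0.5 and -0.3.
z = neuron([1.0, 2.0], [0.5, -0.3], sigmoid)
```

Here the weighted sum is 0.5·1.0 + (−0.3)·2.0 = −0.1, and the sigmoid maps it to a value just under 0.5.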

A neural network is formed by connecting neurons together. A typical network consists of an input layer, a hidden layer, and an output layer. Just like in the previous activities, two modes were considered -- a training mode and a test mode. The parameters one can play around with in the ANN functions are the network architecture (how many neurons in each layer), the learning rate, and the number of iterations.
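To show how those three knobs (architecture, learning rate, iterations) enter a training loop, here is a small NumPy sketch of a one-hidden-layer network trained by gradient descent. This is not the ANN toolbox's code; the XOR-style data, layer sizes, and hyperparameters are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy data (illustrative): columns are samples, matching a [2 x N] input layout.
X = np.array([[0., 0., 1., 1.],
              [0., 1., 0., 1.]])
T = np.array([[0., 1., 1., 0.]])  # desired outputs

# Architecture: 2 inputs -> 4 hidden neurons -> 1 output.
W1 = rng.normal(size=(4, 2)); b1 = np.zeros((4, 1))
W2 = rng.normal(size=(1, 4)); b2 = np.zeros((1, 1))

lr = 0.5        # learning rate
n_iter = 5000   # number of training iterations

def forward(X):
    H = sigmoid(W1 @ X + b1)          # hidden-layer activations
    return H, sigmoid(W2 @ H + b2)    # network output

_, Y = forward(X)
loss_init = np.mean((Y - T) ** 2)

for _ in range(n_iter):
    H, Y = forward(X)
    dY = (Y - T) * Y * (1 - Y)        # output-layer delta (MSE + sigmoid)
    dH = (W2.T @ dY) * H * (1 - H)    # hidden-layer delta (backpropagated)
    W2 -= lr * dY @ H.T; b2 -= lr * dY.sum(axis=1, keepdims=True)
    W1 -= lr * dH @ X.T; b1 -= lr * dH.sum(axis=1, keepdims=True)

_, Y = forward(X)
loss_final = np.mean((Y - T) ** 2)
```

More hidden neurons, a bigger learning rate, or more iterations each trade training time against fit quality, which is exactly what makes these the parameters worth playing with.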

Using the same data I used in Activity # 15, I had to make a few modifications. First, the inputs must be in a [2xN] matrix (done by transposing the data after an fscanfMat() call), after which the values must be normalized to the range 0 to 1. For the training set, I also had to remap the class labels from 1 and 2 to 1 and 0 (this is crucial to avoid erroneous outputs). Finally, here's the output of my neural net classification after round():
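The preprocessing steps above (transpose to [2xN], min-max normalize to 0..1, remap labels) can be sketched as follows. This is a Python/NumPy illustration, not the Scilab code I actually ran, and the feature values are invented stand-ins for the Activity # 15 data:

```python
import numpy as np

# Hypothetical feature matrix as loaded (N rows x 2 feature columns),
# e.g. what fscanfMat() would return in Scilab. Values are made up.
data = np.array([[10.0, 200.0],
                 [ 4.0,  50.0],
                 [ 8.0, 120.0]])
labels = np.array([1, 2, 1])   # original class labels

# 1. Transpose so samples are columns: the [2 x N] layout the net expects.
X = data.T

# 2. Min-max normalize each feature row to the range 0..1.
mins = X.min(axis=1, keepdims=True)
maxs = X.max(axis=1, keepdims=True)
X = (X - mins) / (maxs - mins)

# 3. Remap class labels 1 and 2 to 1 and 0 for the network's target output.
T = np.where(labels == 1, 1, 0)
```

After training, rounding the network's continuous outputs (the round() step in the post) turns them back into hard 0/1 class decisions.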

For the win, I give myself a pat on the back and a well-deserved 10. xD I'd like to thank Gilbert for helping me understand the ANN toolbox.
