user940198

Matlab Multilayer Perceptron Question

I need to classify a dataset using a multilayer perceptron (MLP) in Matlab and show the classification result.

The dataset looks like this:

[image of the dataset]

What I have done so far is:

  1. I have created a neural network containing one hidden layer (two neurons; maybe someone could suggest how many neurons are suitable for my example) and an output layer (one neuron).

  2. I have used several different learning methods, such as Delta-Bar-Delta and backpropagation (both with and without momentum), as well as Levenberg-Marquardt.

This is the code I used in Matlab (Levenberg-Marquardt example):

net = newff(minmax(Input), [2 1], {'logsig' 'logsig'}, 'trainlm');  % 2 hidden neurons, 1 output neuron
net.trainParam.epochs = 10000;  % maximum number of training epochs
net.trainParam.goal = 0;        % performance (MSE) goal
net.trainParam.lr = 0.1;        % learning rate (note: trainlm does not use lr; it adapts mu instead)
[net, tr, outputs] = train(net, Input, Target);

The following shows the hidden-neuron classification boundaries that Matlab generated on the data. I am a little bit confused, because the network should produce a nonlinear result, but in the plot below the two boundary lines appear to be linear.

[image: plot of the data with two linear decision boundaries]

The code for generating the above plot is:

figure(1)
plotpv(Input, Target);        % plot the input vectors with their target classes
hold on
plotpc(net.IW{1}, net.b{1});  % plot the hidden-layer decision boundaries
hold off

I also need to plot the output function of the output neuron, but I am stuck at this step. Can anyone give me some suggestions?

Thanks in advance.

Upvotes: 2

Views: 4471

Answers (1)

skd

Reputation: 1967

Regarding the number of neurons in the hidden layer: for such a small example, two are more than enough. The only way to know the optimum for sure is to test with different numbers, as in the sketch below. This FAQ contains a rule of thumb that may be useful: http://www.faqs.org/faqs/ai-faq/neural-nets/
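A minimal sketch of such a test, assuming the same Input and Target variables as in the question (the range 1:5 and the use of the training error are arbitrary choices for illustration; held-out data would give a fairer comparison):

for h = 1:5
    net = newff(minmax(Input), [h 1], {'logsig' 'logsig'}, 'trainlm');
    net.trainParam.epochs = 1000;
    net = train(net, Input, Target);              % retrain with h hidden neurons
    err = mean((sim(net, Input) - Target).^2);    % training MSE
    fprintf('hidden neurons = %d, training MSE = %g\n', h, err);
end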

For the output function, it is often useful to divide the computation into steps:

First, given the input vector x, compute the weighted input to the neurons in the hidden layer: y = f(x) = Wx + b, where W is the weight matrix from the input neurons to the hidden layer and b is the bias vector.

Second, apply the activation function g of the network to the resulting vector of the previous step: z = g(y).

Finally, the output is the dot product h(z) = v · z + n, where v is the weight vector from the hidden layer to the output neuron and n is the bias. (In the network above, the output layer also uses a logsig activation, so the final output is g(h(z)).) In the case of more than one output neuron, you repeat this for each one.
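As a minimal sketch of these steps in Matlab (W1, b1, v, n are placeholder names for the weights and biases; the properties listed below show where the toolbox actually stores them):

x = Input(:, 1);           % one input column vector
y = W1 * x + b1;           % step 1: weighted input to the hidden layer
z = logsig(y);             % step 2: hidden-layer activation g
out = logsig(v * z + n);   % step 3: output neuron (logsig output layer here)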

I've never used the Matlab MLP functions, so I don't know how to get the weights in this case, but I'm sure the network stores them somewhere. Edit: searching the documentation, I found these properties:

  • net.IW numLayers-by-numInputs cell array of input weight values
  • net.LW numLayers-by-numLayers cell array of layer weight values
  • net.b numLayers-by-1 cell array of bias values
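Putting it together, here is a sketch (assuming 2-D inputs and the logsig/logsig network from the question) that reads the weights from the trained net and evaluates the output neuron over a grid; sim(net, pts) should give the same values, and is safer if your newff variant adds input/output preprocessing:

W1 = net.IW{1,1};   % input-to-hidden weights  (2 x 2 here)
b1 = net.b{1};      % hidden-layer biases      (2 x 1)
W2 = net.LW{2,1};   % hidden-to-output weights (1 x 2)
b2 = net.b{2};      % output-neuron bias       (scalar)

% Evaluate the output neuron on a grid covering the input range
[X1, X2] = meshgrid(linspace(min(Input(1,:)), max(Input(1,:)), 100), ...
                    linspace(min(Input(2,:)), max(Input(2,:)), 100));
pts = [X1(:)'; X2(:)'];                        % 2 x N matrix of grid points
hidden = logsig(bsxfun(@plus, W1 * pts, b1));  % steps 1 and 2
out = logsig(W2 * hidden + b2);                % step 3

figure(2)
contourf(X1, X2, reshape(out, size(X1)))       % output function as a contour map
hold on
plotpv(Input, Target)                          % overlay the data points
hold off

This should also explain the linear boundaries you saw: plotpc draws each hidden neuron's individual decision line, which is always linear; the nonlinearity only appears when the output neuron combines them, as in the contour map above.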

Upvotes: 2
