smatthewenglish

Reputation: 2889

making sense of confusing perceptron input data

I have the following data:

 0 0 0 0 0  0 0
 0 0 0 0 1  0 0
 0 0 0 1 0  0 0
 0 0 0 1 1  0 0
 0 0 1 0 0  0 0
 0 0 1 0 1  0 0
 0 0 1 1 0  0 0
 0 0 1 1 1  1 0
 0 1 0 0 0  0 0
 0 1 0 0 1  0 1
 0 1 0 1 0  0 0
 0 1 0 1 1  1 1
 0 1 1 0 0  0 0
 0 1 1 0 1  1 1
 0 1 1 1 0  1 0
 0 1 1 1 1  1 1
 1 0 0 0 0  0 0
 1 0 0 0 1  0 0
 1 0 0 1 0  0 0
 1 0 0 1 1  1 0
 1 0 1 0 0  0 0
 1 0 1 0 1  1 0
 1 0 1 1 0  1 0
 1 0 1 1 1  1 0
 1 1 0 0 0  0 0
 1 1 0 0 1  1 1
 1 1 0 1 0  1 0
 1 1 0 1 1  1 1
 1 1 1 0 0  1 0
 1 1 1 0 1  1 1
 1 1 1 1 0  1 0
 1 1 1 1 1  1 1

And I'm meant to use this as input for a perceptron, viz:

Implement a 2-layer perceptron (one input layer, one output layer).

The perceptron shall have an N-dimensional binary input X, an M-dimensional binary output Y, and a BIAS weight for implementing the threshold.

(N shall be less than 101, and M less than 30.) Initialize all weights randomly with −0.5 ≤ w_n,m ≤ +0.5.

Further, implement the possibility to train the perceptron using the perceptron learning rule with patterns (pX, pY) that have been read in from a file named PA-A-train.dat (P shall be less than 200), and a possibility to read in the weights w_n,m from a file.
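The assignment above could be sketched roughly as follows. This is only one possible reading of it: the layout of the weight list, the learning rate, and the epoch count are my assumptions, and file I/O is left out.

```python
import random

# Sketch of the assignment: N binary inputs, M binary outputs, and one
# bias weight per output neuron. N, M, lr, and epochs are assumptions
# chosen to match the data shown in the question.
N, M = 5, 2

# Initialize all weights randomly in [-0.5, 0.5]; weights[m][0] is the
# bias for output neuron m, weights[m][1:] are its N input weights.
weights = [[random.uniform(-0.5, 0.5) for _ in range(N + 1)]
           for _ in range(M)]

def predict(x, weights):
    """Threshold each output neuron: fire if weighted sum + bias > 0."""
    out = []
    for w in weights:
        s = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
        out.append(1 if s > 0 else 0)
    return out

def train(patterns, weights, lr=0.1, epochs=100):
    """Perceptron learning rule: w += lr * (target - output) * input."""
    for _ in range(epochs):
        for x, y in patterns:
            out = predict(x, weights)
            for m in range(M):
                err = y[m] - out[m]
                weights[m][0] += lr * err        # bias input is constant 1
                for n in range(N):
                    weights[m][n + 1] += lr * err * x[n]
    return weights
```

Since there is no hidden layer, each output neuron is trained independently by the rule above, which is exactly why two target columns pose no problem.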

The thing is, I don't understand this data. It looks like the numbers after the gap are supposed to be a label, but if so, why are there two? Shouldn't there be only one label?

Maybe someone can help me make sense of this.

Upvotes: 0

Views: 60

Answers (1)

lejlot

Reputation: 66795

A neural network can have an arbitrary number of output neurons. In particular, when there is no hidden layer, training an M-output perceptron is equivalent to training M independent binary perceptrons. So your data is quite simple: you have M = 2 output variables, and each one is the expected value of a particular output neuron.
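Concretely, each line of your file splits into the input pattern pX (the first N = 5 columns) and the targets pY (the last M = 2 columns), one target per output neuron. A minimal parsing sketch, assuming whitespace-separated values as in the data you posted:

```python
# One line from the training data shown in the question: the first
# N = 5 values form the input pX, the last M = 2 values form pY.
line = "0 1 0 1 1  1 1"
values = [int(v) for v in line.split()]
pX, pY = values[:5], values[5:]
# pX == [0, 1, 0, 1, 1]; pY == [1, 1]
# pY[0] is the target for output neuron 1, pY[1] for output neuron 2.
```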

Upvotes: 1
