Long Smith

Reputation: 1401

PyBrain MNIST classification

I'm trying to classify MNIST data with PyBrain.

Below is the training code:

# PyBrain / stdlib imports these methods rely on:
from time import time
from pybrain.datasets import ClassificationDataSet
from pybrain.structure import TanhLayer, SoftmaxLayer
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.tools.shortcuts import buildNetwork

def train_net(self):

    print("Build network")
    net = buildNetwork(784, 30, 10, bias=True, hiddenclass=TanhLayer, outclass=SoftmaxLayer)
    back_trainer = BackpropTrainer(net, learningrate=1)

    training_dataset = self.get_training_dataset()

    print("Begin training")
    time0 = time()
    err = back_trainer.trainUntilConvergence(dataset=training_dataset, maxEpochs=300, verbose=True)
    print("Training time is " + str(round(time()-time0, 3)) + " seconds.")

    return net, err

def get_training_dataset(self):
    print("Reading training images")
    features_train = self.read_images("train-images.idx3-ubyte")

    print("Reading training labels")
    labels_train = self.read_labels("train-labels.idx1-ubyte")

    # view_image(features_train[10])
    print("Begin reshaping training features")
    features_train = self.reshape_features(features_train)

    print("Create training dataset")
    training_dataset = ClassificationDataSet(784, 10)

    for i in range(len(features_train)):
        # build a one-hot target vector for the digit label
        result = [0]*10
        result[labels_train[i]] = 1
        training_dataset.addSample(features_train[i], result)

    training_dataset._convertToOneOfMany()

    return training_dataset

When I activate the network on the testing dataset, the result looks like this:

[  3.72885642e-25   4.62573440e-64   2.32150541e-31   9.42499004e-16
   1.33256639e-39   2.30439387e-17   5.16602624e-94   1.00000000e+00
   1.83860601e-27   1.22969684e-22]

where the argmax value indicates the class. For the list above, the argmax is 7.

But why? When I prepare the datasets, you can see result[labels_train[i]] = 1, where I require the corresponding neuron to output 1 and all the others to be zero. So I expected [0, 0, 0, 0, 0, 0, 0, 1, 0, 0].
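
For reference, this is roughly how I read the prediction off the activation (just a sketch; net is the trained network returned by train_net and sample is one reshaped 784-element test image):

import numpy as np

output = net.activate(sample)             # 10 softmax activations, one per digit
predicted_digit = int(np.argmax(output))  # index of the largest activation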

I've read that the _convertToOneOfMany function can produce output like that, so I added it, but nothing has changed. What am I doing wrong?
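
The pattern I saw for _convertToOneOfMany was roughly the following (just a sketch, reusing the names from my code), where each target is added as a single integer label and the call then expands it:

training_dataset = ClassificationDataSet(784, 1, nb_classes=10)
for i in range(len(features_train)):
    # add the integer digit label as a 1-dimensional target
    training_dataset.addSample(features_train[i], [labels_train[i]])
# expand integer targets into 10-dimensional one-hot vectors
training_dataset._convertToOneOfMany()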

Upvotes: 0

Views: 292

Answers (1)

BlackBear

Reputation: 22979

There is nothing wrong: you will almost never get back the exact targets you trained on, for a variety of reasons, so you should be happy when the output is "sufficiently" close to the right answer (which it is, in the example you posted).
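
In practice you take the argmax of the activation as the predicted digit, so a near-one-hot output like the one you posted counts as a correct classification of 7. A minimal sketch, assuming net is your trained network and features_test / labels_test come from your reader functions:

import numpy as np

correct = 0
for sample, label in zip(features_test, labels_test):
    output = net.activate(sample)         # softmax activations, one per digit
    if int(np.argmax(output)) == label:   # predicted digit vs. true label
        correct += 1

print("Accuracy:", correct / float(len(labels_test)))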

Upvotes: 1
