poorly_built_human

Reputation: 65

Training and Validating Correctly With Encog

I think I'm doing something wrong with Encog. In all of the examples I've seen, they simply TRAIN until a certain training error is reached and then print the results. When is the gradient calculated and when are the weights of the hidden layers updated? Is this all contained within the training.iteration() function? This is confusing because even though my TRAINING error keeps decreasing in my program, which seems to imply that the weights are changing, I have not yet run a validation set through the network (which I split off from the training set when building the data at the beginning) to determine whether the validation error is still decreasing along with the training error.

I have also loaded the validation set into a trainer and run it through the network with compute(), but the validation error is always similar to the training error, so it's hard to tell whether it's the same error from training. Meanwhile, the testing hit rate is less than 50% (expected if the network is not learning). Roughly, my validation pass looks like the sketch below (network and validationSet are built earlier in my program).
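    import org.encog.ml.data.MLData;
    import org.encog.ml.data.MLDataPair;
    import org.encog.ml.data.MLDataSet;
    import org.encog.neural.networks.BasicNetwork;

    // Compute a mean squared error over the validation set by hand with compute().
    static double validationMse(BasicNetwork network, MLDataSet validationSet) {
        double sumSquared = 0;
        long count = 0;
        for (MLDataPair pair : validationSet) {
            MLData output = network.compute(pair.getInput());
            for (int i = 0; i < output.size(); i++) {
                double diff = pair.getIdeal().getData(i) - output.getData(i);
                sumSquared += diff * diff;
                count++;
            }
        }
        return sumSquared / count;
    }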

I know there are a lot of different types of backpropagation techniques, particularly the common one using gradient descent as well as resilient backpropagation. What part of the network are we expected to update manually ourselves?

Upvotes: 2

Views: 1510

Answers (1)

JeffHeaton

Reputation: 3278

In Encog, weights are updated during the Train.iteration method call. This includes all weights. If you are using a gradient-descent type trainer (i.e. backprop, rprop, quickprop) then your neural network is updated at the end of each iteration call. If you are using a population-based trainer (i.e. genetic algorithm, etc.) then you must call finishTraining so that the best population member can be copied back to the actual neural network that you passed to the trainer's constructor. Actually, it's always a good idea to call finishTraining after your iterations; some trainers need it, others do not.
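A typical training loop looks something like this (a minimal sketch, assuming Encog 3.x with a network, trainingSet, and validationSet already built; the 0.01 error threshold is arbitrary):

    import org.encog.ml.data.MLDataSet;
    import org.encog.ml.train.MLTrain;
    import org.encog.neural.networks.BasicNetwork;
    import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

    MLTrain train = new ResilientPropagation(network, trainingSet);

    int epoch = 1;
    do {
        // Gradients are computed and all weights updated inside this call.
        train.iteration();
        System.out.println("Epoch " + epoch
                + " training error=" + train.getError()
                + " validation error=" + network.calculateError(validationSet));
        epoch++;
    } while (train.getError() > 0.01);

    // Harmless for propagation trainers, required for population-based ones.
    train.finishTraining();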

Another thing to keep in mind is that some trainers report the current error at the beginning of the call to iteration, and others at the end of the iteration (the improved error). This is for efficiency, to keep some of the trainers from having to iterate over the data twice.

Keeping a validation set to test your training is a good idea. A few methods that might be helpful to you:

BasicNetwork.dumpWeights - Displays the weights for your neural network. This allows you to see if they have changed.

BasicNetwork.calculateError - Pass a data set (training or validation) to this and it will give you the error.
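For instance (a sketch, reusing the network and train objects from the loop above):

    // Check that an iteration actually changed the weights,
    // then compare training vs. validation error with calculateError.
    String before = network.dumpWeights();
    train.iteration();
    System.out.println("weights changed:  " + !before.equals(network.dumpWeights()));
    System.out.println("training error:   " + network.calculateError(trainingSet));
    System.out.println("validation error: " + network.calculateError(validationSet));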

Upvotes: 3
