Undefined

Reputation: 1929

How to evolve weights of a neural network using a GA?

I'm working on an assignment and I need to evolve the weights of my neural network. The network itself works, but I'm unsure how to evolve its weights in a way that will get me good results.

I know my AI teacher said I need to use a sigmoid function and sum up the weights*inputs for each neuron, but I'm not exactly sure about the rest.
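In case it helps, here is roughly what I have for that part so far (a minimal sketch; the function names are my own):

```python
import math

def sigmoid(x):
    # Squashes the weighted sum into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def activate(inputs, weights, bias=0.0):
    # Weighted sum of inputs, then the sigmoid activation
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

print(activate([1.0, 1.0], [0.5, 0.5]))  # one neuron's output
```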

Thanks.

Edit: I need to use a GA to train the weights. Sorry I didn't make it clear.

Upvotes: 2

Views: 3972

Answers (3)

Romaine Carter

Reputation: 655

The fitness function of your GA should be able to optimize your NN's weights. For example, solving the logical AND problem with a single-layer perceptron requires a function such as this:

fitness = 1 - (input1*weight1 + input2*weight2)

The closer your fitness gets to 0, the better, with an optimal solution of (input1*0.5 + input2*0.5).

If we substitute values for input1 and input2, with weights of 0.5 on each neuron:

input1 = 1, input2 = 1 => fitness = 0

input1 = 0, input2 = 1 => fitness = 0.5

input1 = 1, input2 = 0 => fitness = 0.5

input1 = 0, input2 = 0 => fitness = 1

These generated weights could then be transferred into the indexed weight for each neuron. Essentially you would not be creating many neural nets, but many combinations of weights for one NN, and using the GA to optimize them.
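A quick sketch of that fitness calculation, reproducing the table above (the function name is mine):

```python
def fitness(input1, input2, weight1, weight2):
    # Fitness as defined above: 0 is optimal, larger is worse
    return 1 - (input1 * weight1 + input2 * weight2)

# Evaluate the optimal weights (0.5, 0.5) on all four AND inputs
for i1, i2 in [(1, 1), (0, 1), (1, 0), (0, 0)]:
    print(i1, i2, fitness(i1, i2, 0.5, 0.5))
```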

Upvotes: 0

alfa

Reputation: 3098

There are numerous ways to evolve neural networks. You can evolve topologies, weights or both (this is done especially in reinforcement learning domains, see EANT or NEAT).

You said you should evolve the weights of your network. Generally, you can apply any optimization algorithm for this. But there are different categories of problems and optimization algorithms. In supervised learning it usually makes sense to calculate an error on your training set and the gradient of the error function with respect to the weights. Optimization algorithms that use gradient information are usually faster than genetic algorithms (e.g. Backprop, Quickprop, RProp, Conjugate Gradient, Levenberg-Marquardt, ...).

As you said, you don't have a training set, and thus you don't have an error function, so you cannot calculate a gradient. What you need to evolve the weights of your neural network is some kind of fitness function; without one, you will not be able to improve anything by adjusting your weights. So, basically you have a function F(w), where w is your continuous weight vector, which you have to optimize with respect to F. Your algorithm should do something like this:

  1. initialize neural network
  2. generate N weight vectors (initially at random; in later generations via selection, crossover, and mutation)
  3. calculate fitness values of the weight vectors
  4. repeat 2.-3. until some stopping criterion is satisfied
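The loop above could be sketched like this (a mutation-only variant with truncation selection; all names and parameters here are illustrative, not a fixed recipe):

```python
import random

def evolve(fitness_fn, n_weights, pop_size=20, generations=100,
           mutation_scale=0.1):
    # 1.-2. initialize a population of random weight vectors
    pop = [[random.uniform(-1, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # 3. evaluate fitness (here: higher is better)
        ranked = sorted(pop, key=fitness_fn, reverse=True)
        # selection: keep the best half, refill with mutated copies
        survivors = ranked[:pop_size // 2]
        children = [[w + random.gauss(0, mutation_scale)
                     for w in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness_fn)

# Toy F(w): maximize -sum((w_i - 0.5)^2), optimum at all weights = 0.5
best = evolve(lambda w: -sum((x - 0.5) ** 2 for x in w), n_weights=3)
```

A real GA would also include crossover between parents; for a NN, the fitness function would decode the weight vector into the network and measure how well the network performs.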

From your description I guess that you probably have to solve some kind of reinforcement learning problem. In this case you could, e.g., take the accumulated reward of an episode as a fitness value. If you are interested in this topic: there is some recent research about applying evolutionary algorithms to neural networks to solve reinforcement learning problems (this is called neuroevolution). People usually use methods like CMA-ES (CMA-NeuroES) or CoSyNE.

I hope this helps.

Upvotes: 0

Novak

Reputation: 4779

There are any number of ways to do this, and generally one is not (for homework) just told to go make it happen without being given an algorithm to implement.

One of the common methods taught in an AI or neural networks class is backpropagation:

http://en.wikipedia.org/wiki/Backpropagation

UPDATE: Oh, I see. Now I can at least point you in the proper direction. The discussion is a bit long to provide in the answer space on Stack Overflow, but the basic idea is to generate a bunch of random neural networks that (very badly!) solve your problem, then apply genetic algorithms to the networks (i.e., convert the neural networks to chromosomes that can be mutated, crossed over/recombined, etc., according to their fitness) and let the whole system bootstrap itself out of the primordial ooze. So to speak.
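The chromosome idea can be sketched by flattening a network's weights into a flat list of numbers and applying standard GA operators to it (helper names here are mine, and this is only one of many possible operator choices):

```python
import random

def crossover(parent_a, parent_b):
    # One-point crossover on the flat weight "chromosome"
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(chromosome, rate=0.05, scale=0.2):
    # Perturb each gene (weight) with probability `rate`
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in chromosome]
```

After selection by fitness, the child chromosomes are reshaped back into the network's weight matrices for the next round of evaluation.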

There is a very good paper about one particular application (chess) written by Fogel et al., here: http://www.aics-research.com/ieee-chess-fogel.pdf

Upvotes: 2
