Reputation: 51
Suppose I have trained a neural network that takes some inputs and accurately calculates their value. This neural network was used to approximate a function that was hard to solve analytically or to simulate by other methods, and it is a very accurate function approximator. Now I would like to find the inputs that return the highest value. I was thinking I could do this with a genetic algorithm, but is there a neural network method for doing this? Also, is it possible to train the neural network and find the optimal inputs simultaneously? What kind of network architecture could do this?
Upvotes: 3
Views: 667
Reputation: 3294
Well, a direct solution would be to apply calculus to each of the layers and solve for the local minima or maxima (assuming you don't have too many variables). But I don't think this solution (or similar optimization methods) would be a proper use of neural networks.
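That said, since the trained network is differentiable, a common shortcut is to hold its weights fixed and run gradient ascent on the input itself. Here is a minimal sketch, assuming PyTorch; the toy model, sizes, and learning rate are illustrative stand-ins, not anything from the question:

```python
import torch

# Stand-in for the trained network; in practice, load your own model here.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
for p in model.parameters():
    p.requires_grad_(False)   # freeze the weights; only the input will move

x = torch.randn(1, 4, requires_grad=True)  # candidate input to optimize
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(500):
    opt.zero_grad()
    loss = -model(x).sum()    # maximizing the output = minimizing its negative
    loss.backward()
    opt.step()

print(x.detach(), model(x).item())  # a locally optimal input and its value
```

Gradient ascent only finds a local maximum, so it is common to restart from several random inputs and keep the best result, which plays much the same role as the population in your genetic-algorithm idea.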
Neural networks are designed to copy-cat: given input X and expected output Y, optimize a function that guesses "close" to Y. This is the nature of a neural network. An isolated optimization problem asks a fundamentally different question: given a body of data that approximates some underlying function, find the single "best" solution. Such a problem looks for a single case (or isolated discrete cases) among a collection of data.
If you want to phrase an optimization problem in terms of a neural network solution, it would look like this: given a collection of approximated functions (millions of trained neural networks) and their known optimal solutions (the expected solution for each one), train a new neural network that mimics this behavior. This can certainly be done, but the collection of functions of interest would need some kind of bounds; it would certainly not be possible to train a single neural network that applies "universally" to all possible optimization problems. That would solve the entire field of optimization theory.
For example: given a collection of functions of the form A·sin(Bx + C) + D, for a random distribution of A, B, C, and D, find the maximum. Or count the number of maxima and minima. These are great examples of something a neural network could learn to do on unseen functions drawn from the same distribution. The neural network might even learn the underlying behavior so well that it works with coefficients outside the initial dataset too.
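To make that concrete, here is a hedged sketch (again assuming PyTorch; the architecture and sampling distribution are illustrative) that trains a network to map the coefficients (A, B, C, D) to the maximum of A·sin(Bx + C) + D, which has the closed form |A| + D whenever B ≠ 0:

```python
import torch

# Small regression net: coefficients (A, B, C, D) -> predicted maximum.
net = torch.nn.Sequential(
    torch.nn.Linear(4, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    coeffs = torch.randn(256, 4) * 3.0   # random A, B, C, D
    # Analytic label: max over x of A*sin(B*x + C) + D is |A| + D (for B != 0).
    target = coeffs[:, 0].abs() + coeffs[:, 3]
    pred = net(coeffs).squeeze(1)
    loss = torch.nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# A held-out test function; true max = |5| + (-2) = 3.
test = torch.tensor([[5.0, 1.0, 0.3, -2.0]])
print(net(test).item())
```

The labels here are cheap because this family has a closed-form maximum; for a harder family you would generate them with a conventional optimizer, one function at a time.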
Of course, one could start building a massive collection of optimization neural networks that apply to millions of different cases across all kinds of problems. Such a "neural network zoo" could solve all of optimization theory.
Upvotes: 1