Reputation: 1
I am new to neural computation and understand the concept of back-propagation. My question is: can you train an MLP without back-propagation to fit a function? Say I need to fit a sine function. How can I do it without using back-propagation to find the weights?
Upvotes: 0
Views: 1403
Reputation: 1
One good option for training a neural network is particle swarm optimization (PSO): it is a population-based global search heuristic rather than a local gradient method, and its space complexity compares favourably with other algorithms. There are many research papers you can look up that compare it with the BP algorithm.
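As a minimal sketch of the idea (the network size, PSO constants, and iteration budget below are all illustrative choices, not prescriptions): a tiny 1-5-1 tanh MLP is fitted to sin(x) on [0, 2pi] by a standard PSO loop, with no gradients anywhere.

```python
# Minimal sketch: train a 1-5-1 tanh MLP to fit sin(x) with PSO.
# All constants (swarm size, inertia, accelerations) are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Training data: fit sin(x) on [0, 2*pi]
X = np.linspace(0, 2 * np.pi, 100)
Y = np.sin(X)

N_HIDDEN = 5                      # hidden tanh units
DIM = 3 * N_HIDDEN + 1            # flat layout: W1(5), b1(5), W2(5), b2(1)

def mlp(params, x):
    """Forward pass of a 1-5-1 MLP with tanh hidden units."""
    w1, b1 = params[:N_HIDDEN], params[N_HIDDEN:2 * N_HIDDEN]
    w2, b2 = params[2 * N_HIDDEN:3 * N_HIDDEN], params[-1]
    h = np.tanh(np.outer(x, w1) + b1)     # (n_samples, N_HIDDEN)
    return h @ w2 + b2

def mse(params):
    return np.mean((mlp(params, X) - Y) ** 2)

# Standard PSO: each particle is one candidate weight vector.
n_particles, iters = 30, 2000
w, c1, c2 = 0.72, 1.49, 1.49      # common inertia/acceleration constants
pos = rng.uniform(-2, 2, (n_particles, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_err = np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_err)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, DIM))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    err = np.array([mse(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[np.argmin(pbest_err)].copy()

print("final MSE:", mse(gbest))
```

The point is that PSO treats the network purely as a black box: it only ever evaluates the error function, so no derivatives (and hence no back-propagation) are needed.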
Upvotes: 0
Reputation: 3249
The idea of training neural networks using back-propagation is especially interesting because it tells you how to update the intermediate weights, which have no direct connection to the output. So it is a useful idea! Most of the time people combine the back-propagation algorithm with the gradient descent algorithm. However, gradient descent is sometimes slow, and you can replace it (it uses only information from the derivative of the error) with another "clever" algorithm such as Levenberg-Marquardt or the extended Kalman filter. There are lots of them. In these cases you are still using the back-propagation algorithm, just combined with a different optimization algorithm.
Sometimes the convergence problem of a neural network is due not to a poor optimization algorithm but to the initialization, i.e. the starting weights. There is a huge literature on how to "cleverly" initialize the weights of a neural network.
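As one illustration from that literature, a popular recipe is Glorot/Xavier initialization, which scales the random starting weights by the layer's fan-in and fan-out so the starting point is neither saturated nor degenerate. A minimal sketch (the layer sizes here are placeholders):

```python
# Minimal sketch of Glorot/Xavier uniform initialization.
import numpy as np

rng = np.random.default_rng(0)

def glorot_uniform(fan_in, fan_out):
    """Sample a (fan_in, fan_out) weight matrix from U(-limit, limit)."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, (fan_in, fan_out))

W1 = glorot_uniform(1, 5)   # input -> hidden
W2 = glorot_uniform(5, 1)   # hidden -> output
```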
As you asked, and as @Atilla_Ozgur answered suitably, there are other algorithms you can use for that. For instance, you may create a population of neural networks and use a kind of genetic algorithm to choose the best network, using operations such as mutation and reproduction.
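A minimal sketch of that idea, assuming the same kind of tiny 1-5-1 tanh network fitting sin(x) (the population size, mutation scale, and selection scheme are all illustrative):

```python
# Minimal sketch: evolve the flat weight vector of a 1-5-1 tanh MLP with
# truncation selection, uniform crossover, and Gaussian mutation.
import numpy as np

rng = np.random.default_rng(1)

X = np.linspace(0, 2 * np.pi, 100)
Y = np.sin(X)
H = 5                         # hidden tanh units
DIM = 3 * H + 1               # flat layout: W1(5), b1(5), W2(5), b2(1)

def mse(p):
    """Mean squared error of the MLP encoded by the flat vector p."""
    h = np.tanh(np.outer(X, p[:H]) + p[H:2 * H])
    return np.mean((h @ p[2 * H:3 * H] + p[-1] - Y) ** 2)

pop_size, n_gen, elite = 50, 500, 10
pop = rng.uniform(-2, 2, (pop_size, DIM))

for _ in range(n_gen):
    order = np.argsort([mse(p) for p in pop])   # rank: lower MSE is better
    parents = pop[order[:elite]]                # truncation selection
    children = []
    while len(children) < pop_size - elite:
        pa, pb = parents[rng.integers(elite, size=2)]
        mask = rng.random(DIM) < 0.5                              # crossover
        child = np.where(mask, pa, pb) + rng.normal(0, 0.1, DIM)  # mutation
        children.append(child)
    pop = np.vstack([parents, *children])

best = pop[np.argmin([mse(p) for p in pop])]
print("best MSE:", mse(best))
```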
Let me tell you something. The sine function is a particularly interesting case, and sometimes it takes a while to converge. However, it can be trained by the combination back-propagation + gradient descent; I did this myself a long time ago. You have to be sure that you have enough neurons in the hidden layer (usually 5 neurons with tanh activation are enough if you are training the NN on the interval [0, 2pi]).
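For contrast, here is a minimal sketch of exactly that setup, back-propagation plus plain gradient descent on a 1-5-1 tanh network over [0, 2pi] (the learning rate and iteration count are illustrative):

```python
# Minimal sketch: back-propagation + plain gradient descent on a 1-5-1
# tanh MLP fitting sin(x) on [0, 2*pi].
import numpy as np

rng = np.random.default_rng(0)

X = np.linspace(0, 2 * np.pi, 100).reshape(-1, 1)   # (100, 1)
Y = np.sin(X)

W1 = rng.normal(0, 0.5, (1, 5)); b1 = np.zeros(5)
W2 = rng.normal(0, 0.5, (5, 1)); b2 = np.zeros(1)
lr = 0.01

for _ in range(20000):
    # Forward pass
    H = np.tanh(X @ W1 + b1)              # (100, 5)
    out = H @ W2 + b2                     # (100, 1)
    err = out - Y
    # Backward pass: chain rule through the output and hidden layers
    d_out = 2 * err / len(X)              # d(MSE)/d(out)
    dW2 = H.T @ d_out; db2 = d_out.sum(0)
    d_H = (d_out @ W2.T) * (1 - H ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_H; db1 = d_H.sum(0)
    # Gradient descent step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", np.mean(err ** 2))
```

Note that swapping plain gradient descent for something like Levenberg-Marquardt would change only the update step at the bottom; the backward pass that computes the derivatives stays the same.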
Upvotes: 0
Reputation: 14721
can you train an MLP without back-propagation to fit a function?
Yes. Back-propagation is an optimization algorithm used to find the weights of the neurons. You can use any number of different algorithms to find these weights and thus train your neural network.
Examples include genetic algorithms and particle swarm optimization, both of which are mentioned in other answers here.
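To make the point concrete, here is a minimal sketch that fits sin(x) with a tiny 1-5-1 tanh MLP using plain random hill-climbing, i.e. no gradients at all (the network size, step size, and iteration budget are all illustrative choices):

```python
# Minimal sketch: random hill-climbing on the flat weight vector of a
# 1-5-1 tanh MLP fitting sin(x). No derivatives are ever computed.
import numpy as np

rng = np.random.default_rng(2)

X = np.linspace(0, 2 * np.pi, 100)
Y = np.sin(X)
H = 5                    # hidden tanh units; 3*H + 1 = 16 weights in total

def mse(p):
    """Error of the MLP whose weights are packed into the flat vector p."""
    h = np.tanh(np.outer(X, p[:H]) + p[H:2 * H])
    return np.mean((h @ p[2 * H:3 * H] + p[-1] - Y) ** 2)

params = rng.uniform(-2, 2, 3 * H + 1)
best_err = mse(params)
for _ in range(50000):
    candidate = params + rng.normal(0, 0.05, params.size)  # random step
    err = mse(candidate)
    if err < best_err:       # keep the step only if it improves the fit
        params, best_err = candidate, err

print("final MSE:", best_err)
```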
Upvotes: 2
Reputation: 1671
I don't believe there is a common way to train MLPs without back-propagation (wikipedia); back-propagation is just a direct gradient approach applied to the weights. There are modifications that use, e.g., a momentum term, or that train at different points.
However, there are many other machine learning algorithms that use different cost functions or architectures, such as particle swarm optimisation and evolutionary optimisation.
Upvotes: 0