Reputation: 51
Consider a machine learning algorithm that trains on a training set. With the help of the PAC learning model we get bounds on the training sample size needed so that the probability that the error exceeds epsilon is bounded by delta.
What does the PAC learning model say about computational (time) complexity? If a learning algorithm is given more time (e.g. more iterations), how do the error and the probability that the error is bounded change?
A learning algorithm that takes an hour to train is of no practical use in financial prediction problems. I need to know how the performance changes as the time given to the algorithm changes, both in terms of the error bound and the probability that the error stays within that bound.
Upvotes: 0
Views: 406
Reputation: 4436
The PAC model simply tells you how many examples you need to reach a certain error level with some probability. This can be translated into an impact on run time by looking at the actual machine learning algorithm you're using.
For example, if your algorithm runs in time O(2^n), and the PAC model says you need 1,000 examples to have a 95% chance of at most 0.05 error and 10,000 examples for 0.005 error, then you know you should expect a HUGE slowdown for the increased accuracy. Whereas the same PAC information for an O(log n) algorithm would probably lead you to go ahead and get the lower error.
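Here is a minimal sketch of that comparison, assuming the standard realizable-case bound m >= (1/epsilon) * (ln|H| + ln(1/delta)) for a finite hypothesis class; the hypothesis-space size |H| = 2^20 and the two runtime profiles are illustrative numbers, not values from the question:

```python
import math

def pac_sample_bound(epsilon, delta, hypothesis_space_size):
    """Realizable-case PAC sample bound for a finite hypothesis class:
    m >= (1/epsilon) * (ln|H| + ln(1/delta))."""
    return math.ceil(
        (1.0 / epsilon)
        * (math.log(hypothesis_space_size) + math.log(1.0 / delta))
    )

H = 2 ** 20  # illustrative hypothesis-space size (assumption)
for eps in (0.05, 0.005):
    m = pac_sample_bound(eps, delta=0.05, hypothesis_space_size=H)
    # Compare relative training cost under two hypothetical runtime
    # profiles: an O(log n) learner barely notices the extra data,
    # while an O(2^n) learner's cost explodes with it.
    print(f"eps={eps}: m={m} examples, "
          f"O(log n) cost ~ {math.log2(m):.1f}, O(2^n) cost ~ 2^{m}")
```

Tightening epsilon by a factor of 10 roughly multiplies the required sample size by 10, which is negligible for a logarithmic-time learner but catastrophic for an exponential-time one.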
On a side note, it sounds like you might be confused about how most supervised learning algorithms work:
If a learning algorithm is given more time (e.g. more iterations), how do the error and the probability that the error is bounded change?
In most cases you can't really just give the same algorithm more time and expect better results, unless you change the parameters (e.g. the learning rate) or increase the number of examples. Perhaps by 'iterations' you meant examples, in which case the impact of the number of examples on the probability and error rate can be found by manipulating the system of equations used for the PAC learning model; see the wiki article. A sketch of that manipulation follows.
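For instance, assuming the same finite-hypothesis-class bound as above (the |H| value is again an illustrative assumption), you can invert m >= (1/epsilon) * (ln|H| + ln(1/delta)) to see how the error guarantee and the failure probability change as examples grow:

```python
import math

def epsilon_from_samples(m, delta, hypothesis_space_size):
    """Solve m >= (1/eps) * (ln|H| + ln(1/delta)) for eps:
    the guaranteed error bound shrinks like 1/m."""
    return (math.log(hypothesis_space_size) + math.log(1.0 / delta)) / m

def delta_from_samples(m, epsilon, hypothesis_space_size):
    """Solve for delta instead: by the union bound, the probability
    that a consistent learner has error above epsilon decays as
    |H| * exp(-epsilon * m)."""
    return hypothesis_space_size * math.exp(-epsilon * m)

H = 2 ** 20  # illustrative hypothesis-space size (assumption)
for m in (1000, 2000, 4000):
    print(f"m={m}: eps <= {epsilon_from_samples(m, 0.05, H):.4f}, "
          f"delta <= {delta_from_samples(m, 0.05, H):.2e}")
```

Doubling the number of examples halves the guaranteed error bound, while for a fixed epsilon the failure probability delta drops off exponentially in m.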
Upvotes: 4