Reputation: 167
Can someone explain the concepts of VC dimension and PAC learning with an example? I have seen some content online but couldn't find a good example to relate it to.
Upvotes: 0
Views: 1563
Reputation: 1667
They are distinct but related concepts. I will try to explain both terms concisely and show how they connect:
PAC learning is a theoretical framework developed by Leslie Valiant in 1984 that brings ideas from complexity theory to learning problems. While in complexity theory you classify decision problems by bounds on the amount of computation they require (number of steps), in the PAC model you classify concept classes (tasks) by bounds on both the computation and the number of samples required, given a tolerance for error, epsilon, and a confidence level, 1 - delta. PAC learning offers guarantees on the true error, i.e., how far your hypothesis (the learned function) is from the concept (the target function, your task), even though you can only measure your empirical error, the one computed on your training sample.
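To make the epsilon/delta guarantee concrete, here is a small sketch (my own illustration, not from Valiant's paper) of the standard sample-complexity bound for a finite hypothesis class in the realizable setting: drawing m >= (1/epsilon)(ln|H| + ln(1/delta)) samples guarantees, with probability at least 1 - delta, that any hypothesis consistent with the training sample has true error at most epsilon. The conjunction example and its |H| count are illustrative assumptions:

```python
import math

def pac_sample_size(h_size: int, epsilon: float, delta: float) -> int:
    """Sample size sufficient to PAC-learn a finite hypothesis class H
    in the realizable setting: with probability >= 1 - delta, any
    hypothesis consistent with m samples has true error <= epsilon.
    Bound: m >= (1/epsilon) * (ln|H| + ln(1/delta)).
    """
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / epsilon)

# Illustrative example: boolean conjunctions over n = 10 variables.
# Each variable appears positive, negated, or not at all, so |H| = 3**10 + 1
# (the +1 for the always-false concept).
print(pac_sample_size(h_size=3**10 + 1, epsilon=0.1, delta=0.05))  # -> 140
```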
VC dimension (named after Vapnik and Chervonenkis) is a number that measures the capacity of a hypothesis class (the set of functions a learning algorithm can output). Formally, it is the size of the largest set of points the class can shatter, i.e., label in every possible way. For instance, the perceptron hypothesis class is simpler than that of the multilayer perceptron; therefore, it has a smaller VC dimension.
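Here is a brute-force check of that definition for the simple class of 1-D threshold classifiers, h_t(x) = 1 iff x >= t (a sketch of my own; the function names are mine). A threshold can shatter any single point but no pair of points, so the VC dimension of this class is 1:

```python
from itertools import product

def threshold_predictions(points, t):
    """Hypothesis h_t(x) = 1 if x >= t else 0 (a 1-D threshold classifier)."""
    return tuple(1 if x >= t else 0 for x in points)

def shatters(points):
    """Check by brute force whether 1-D thresholds realize all 2^n labelings.
    It suffices to try thresholds below, between, and above the points.
    """
    xs = sorted(points)
    candidates = [xs[0] - 1] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[-1] + 1]
    achievable = {threshold_predictions(points, t) for t in candidates}
    return all(labels in achievable for labels in product([0, 1], repeat=len(points)))

print(shatters([5.0]))       # True  -> any single point can be shattered
print(shatters([2.0, 7.0]))  # False -> no threshold labels 2.0 as 1 and 7.0 as 0
# Hence the VC dimension of 1-D thresholds is 1.
```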
In several PAC theorems, a term measuring the size, or capacity, of the hypothesis class appears in the bound. For finite classes this is typically log|H|; for infinite classes, the VC dimension plays that role.
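For instance, one classical VC-based result (Blumer, Ehrenfeucht, Haussler, and Warmuth, 1989; the constants vary between textbooks) says that in the realizable case, m >= max((4/epsilon) log2(2/delta), (8d/epsilon) log2(13/epsilon)) samples suffice, where d is the VC dimension. A small sketch:

```python
import math

def vc_sample_size(vc_dim: int, epsilon: float, delta: float) -> int:
    """Sufficient sample size from the classical VC bound of
    Blumer, Ehrenfeucht, Haussler & Warmuth (1989), realizable case:
    m >= max((4/eps) * log2(2/delta), (8*d/eps) * log2(13/eps)).
    """
    return math.ceil(max(
        4 / epsilon * math.log2(2 / delta),
        8 * vc_dim / epsilon * math.log2(13 / epsilon),
    ))

# 1-D thresholds have VC dimension 1; a richer class needs more samples.
print(vc_sample_size(vc_dim=1, epsilon=0.1, delta=0.05))   # -> 562
print(vc_sample_size(vc_dim=10, epsilon=0.1, delta=0.05))  # -> 5618
```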
I recommend taking a look at Mohri's "Foundations of Machine Learning" for an excellent example of PAC learning (the first chapter has a good one).
P.S. I suspect this question is not really within the scope of Stack Overflow. For questions of this type, you may get a better response on Cross Validated.
Upvotes: 2