Explicitly mapping observations from a general set S into an inner product space

I am learning about "kernel tricks" for SVMs. While searching, I came across the following passage on Wikipedia:

 "For machine learning algorithms, the kernel trick is a way of mapping observations 
from a general set S into an inner product space V (equipped with its natural norm), 
without ever having to compute the mapping explicitly, in the hope that the 
observations will gain meaningful linear structure in V"

My Question from above passage is:

  1. What is meant by "compute the mapping explicitly"?

Can anyone please explain it with a concrete example or point me to some reference websites? That would help me understand kernels.

Upvotes: 0

Views: 85

Answers (1)

Qnan

Reputation: 3744

The answer is right there in the same article:

The trick to avoid the explicit mapping is to use learning algorithms that only require dot products between the vectors in V, and choose the mapping such that these high-dimensional dot products can be computed within the original space, by means of a kernel function.

That means one can avoid computing the images of the data points in the [higher-dimensional] kernel space and instead only calculate the pairwise dot products of those images, which often turns out to be cheaper. There's an example here, as well as in pretty much every book on SVMs.
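As a rough sketch of what "computed within the original space" means (this toy polynomial kernel and the sample points are my own illustration, not something from the article):

```python
import numpy as np

# For 2-D points, the polynomial kernel k(x, y) = (x . y)^2 equals the dot
# product of the explicit feature map
#   phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2)
# in a 3-D space, so the "high-dimensional" dot product can be obtained
# without ever constructing phi(x) and phi(y).

def phi(x):
    """Explicit mapping into the 3-D feature space."""
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

def kernel(x, y):
    """Kernel function: the same dot product, computed in the original space."""
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])

explicit = np.dot(phi(x), phi(y))   # map first, then take the dot product
implicit = kernel(x, y)             # kernel trick: no explicit mapping

print(explicit, implicit)           # both print 121.0
```

Here the explicit route builds 3-D vectors and then takes their dot product, while the kernel route gets the same number directly from the original 2-D points; with higher-degree kernels or an RBF kernel the feature space becomes huge or infinite-dimensional, which is exactly when avoiding the explicit mapping pays off.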

Upvotes: 1
