damith219

Reputation: 137

Why only hyperplanes in support-vector machines?

I only recently learnt about support vector machines. From what I understood, hyperplanes are used to separate the data (mapped into a higher dimension) into two mutually exclusive partitions. My question is: why should the separator be a hyperplane and not a surface with curvature? Wouldn't a curved surface provide a better-suited separating 'surface'?

Upvotes: 2

Views: 367

Answers (3)

Has QUIT--Anony-Mousse

Reputation: 77474

The hyperplane does not really exist. It is never computed explicitly, and it is not really treated as a hyperplane either.

The SVM decision is based on similarity to the support vectors, which implicitly defines a hyperplane in some Euclidean space. When you use a kernel function, that space does not need to be your original data space, and the resulting boundary may well be non-linear in the original space if the kernel you use to compute similarities is non-linear.
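A minimal sketch of this idea: the decision is a weighted sum of kernel similarities to the support vectors, with no hyperplane ever constructed. The support vectors, dual coefficients, and bias below are made-up toy values for illustration (a real SVM would learn them during training), and the RBF kernel is just one possible choice.

```python
import math

# Toy support vectors with dual coefficients (alpha_i * y_i) and bias.
# These values are invented for illustration; training would learn them.
support_vectors = [(0.0, 0.0), (2.0, 2.0)]
dual_coefs = [1.0, -1.0]
bias = 0.0

def rbf_kernel(u, v, gamma=0.5):
    """Similarity of two points; implicitly a dot product in a
    (here infinite-dimensional) feature space."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-gamma * sq_dist)

def decide(x):
    """SVM decision: a weighted sum of similarities to the support
    vectors -- no explicit hyperplane is ever built or stored."""
    score = sum(c * rbf_kernel(sv, x)
                for c, sv in zip(dual_coefs, support_vectors)) + bias
    return 1 if score >= 0 else -1

print(decide((0.1, 0.1)))  # near the first support vector -> 1
print(decide((1.9, 1.9)))  # near the second support vector -> -1
```

Because the kernel is non-linear, the implied boundary between the two classes is curved in the original 2-D space, even though it is a flat hyperplane in the kernel's feature space.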

Upvotes: 2

asimes

Reputation: 5894

There exists something called a Kernel Function: http://en.wikipedia.org/wiki/Kernel_trick

From what I remember from a Data Mining class, you perform a non-linear transformation of each point into a higher dimension. Say your data is in just two dimensions and is not linearly separable. If you map each (x, y) to some (x, y, z) (or more dimensions if necessary) using a Kernel Function, you may be able to separate the data with a plane / hyperplane without involving curvature.

As an example, this feature map "generates" a higher dimension (the math resembles a binomial expansion): f(x, y) = (x*x, y*y, sqrt(2)*x*y), and it satisfies f(u) . f(v) = (u . v)^2, the degree-2 polynomial kernel.
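This identity can be checked numerically. The sketch below (toy values, assuming the degree-2 polynomial kernel mentioned above) shows that the dot product of the mapped 3-D points equals the kernel evaluated directly on the original 2-D points:

```python
import math

def phi(x, y):
    # Explicit feature map for the degree-2 polynomial kernel.
    # The sqrt(2) factor makes phi(u) . phi(v) == (u . v)**2 hold exactly.
    return (x * x, y * y, math.sqrt(2) * x * y)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = (1.0, 2.0), (3.0, -1.0)

lhs = dot(phi(*u), phi(*v))  # dot product in the 3-D feature space
rhs = dot(u, v) ** 2         # kernel computed in the original 2-D space

print(lhs, rhs)  # both 1.0 (up to floating-point rounding)
```

This is the "trick": the kernel gives you the high-dimensional dot product without ever constructing the mapped points, which is what lets the SVM stay linear in the feature space while being curved in the original one.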

Upvotes: 2

user41871

Reputation:

Nonlinear classification is possible.

https://en.wikipedia.org/wiki/Support_vector_machine#Nonlinear_classification

Upvotes: 2
