Cheshie

Reputation: 2837

How to get training accuracy in svmlight with cross validation

I want to run cross validation on my training set using SVMlight. It seems that the option for this is -x 1 (although I'm not sure how many folds it implements...). The output is:

XiAlpha-estimate of the error: error<=31.76% (rho=1.00,depth=0)
XiAlpha-estimate of the recall: recall=>68.24% (rho=1.00,depth=0)
XiAlpha-estimate of the precision: precision=>69.02% (rho=1.00,depth=0)
Number of kernel evaluations: 56733
Computing leave-one-out **lots of gibberish here**
Retrain on full problem..............done.
Leave-one-out estimate of the error: error=12.46%
Leave-one-out estimate of the recall: recall=86.39%
Leave-one-out estimate of the precision: precision=88.82%
Actual leave-one-outs computed:  412 (rho=1.00)
Runtime for leave-one-out in cpu-seconds: 0.84

How can I get the accuracy? From the estimate of the error?

Thank you!

Upvotes: 0

Views: 1067

Answers (1)

lejlot

Reputation: 66805

These are conflicting concepts. Training error is the error on the training set itself, while cross validation is used to approximate the validation error (the error on data not used for training).

Your output suggests that you are using N folds (where N is the size of the training set), which amounts to so-called "leave-one-out" validation (only one test point per fold!) and tends to overestimate your model's quality. You should try 10 folds instead, and your accuracy is simply 1 - error.
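As a rough illustration of the idea (not part of SVMlight itself, whose -x option computes the leave-one-out estimates shown above): for k-fold validation you split the data into k folds yourself, train on k−1 folds, score on the held-out fold, and average. The `train_and_score` callback below is a hypothetical stand-in for invoking svm_learn/svm_classify on the fold files.

```python
import random

def k_fold_accuracy(n_examples, train_and_score, k=10, seed=0):
    """Average accuracy over k folds; per-fold accuracy is 1 - error.

    train_and_score(train_idx, test_idx) is assumed to train a model on
    the training indices, evaluate on the test indices, and return the
    fraction of test points misclassified (the error rate).
    """
    idx = list(range(n_examples))
    random.Random(seed).shuffle(idx)          # shuffle once for random folds
    folds = [idx[i::k] for i in range(k)]     # k roughly equal folds
    accuracies = []
    for fold in folds:
        held_out = set(fold)
        train_idx = [i for i in idx if i not in held_out]
        error = train_and_score(train_idx, list(fold))
        accuracies.append(1.0 - error)        # accuracy = 1 - error
    return sum(accuracies) / k
```

Applied to the leave-one-out figures in the question, the same conversion gives accuracy = 1 − 0.1246 = 87.54%.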

Upvotes: 3
