hubman

Reputation: 149

How do I make training and testing files for a multi-class (one-vs-all) SVM?

How do I make training and testing files for a multi-class SVM? My question follows this answer: https://www.quora.com/Can-anyone-give-me-some-pointers-for-using-SVM-for-user-recognition-using-keystroke-timing/answer/Chomba-Bupe?snid3=364610243&nsrc=1&filter=all

My project is keystroke dynamics, training one user vs. all users. The answer explains: if you have three classes A, B and C, you then have 3 SVMs, each with its own parameters (weights and biases) and 3 separate outputs corresponding to the 3 classes. When training SVM-A, the other two classes B and C act as negative training sets while A is positive; when training SVM-B, A and C are negative; and for SVM-C, A and B are the negatives. This is the so-called one-vs-all training procedure.
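The one-vs-all label construction described above can be sketched in plain JavaScript (the helper name `makeOneVsAll` and the sample data are my own, not from node-svm): given samples tagged with classes A, B and C, it emits one binary-labelled training set per class.

```javascript
// Build one binary training set per class (one-vs-all).
// `samples` is an array of { features: [...], label: 'A' | 'B' | 'C' }.
function makeOneVsAll(samples) {
  var classes = Array.from(new Set(samples.map(function (s) { return s.label; })));
  var sets = {};
  classes.forEach(function (cls) {
    // Samples of this class get +1, every other class gets -1.
    sets[cls] = samples.map(function (s) {
      return [s.features, s.label === cls ? 1 : -1];
    });
  });
  return sets;
}

var samples = [
  { features: [65, 134], label: 'A' },
  { features: [70, 98],  label: 'B' },
  { features: [73, 69],  label: 'C' }
];
var sets = makeOneVsAll(samples);
console.log(sets.A); // training set for SVM-A: A is +1, B and C are -1
```

Each of the three resulting sets is then used to train one binary SVM.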

I tried this, but the results come out wrong.

My training file is a .csv containing:

    65 134,+1
    70 98,+1
    73 69,+1
    82 122,+1
    82 95,+1
    83 127,+1
    84 7,+1
    85 64,+1
    65 123,-1
    71 115,-1
    73 154,-1
    73 156,-1
    77 164,-1
    77 144,-1
    79 112,-1
    83 91,-1

My testing file is a .csv containing:

    65 111
    68 88
    70 103
    73 89
    82 111
    82 79
    83 112
    84 36
    85 71

My code is:

    'use strict';

    var so = require('stringify-object');
    var Q = require('q');
    var svm = require('../lib');
    var trainingFile = './archivos/training/340.txt';
    var testingFile = './archivos/present/340.txt';

    var clf = new svm.CSVC({
        gamma: 0.25,
        c: 1, // allows you to evaluate several values during training
        normalize: false,
        reduce: false,
        kFold: 1 // disable k-fold cross-validation
    });

    Q.all([
        svm.read(trainingFile),
        svm.read(testingFile)
    ]).spread(function (trainingSet, testingSet) {
        return clf.train(trainingSet)
            .progress(function (progress) {
                console.log('training progress: %d%', Math.round(progress * 100));
            })
            .then(function () {
                return clf.evaluate(testingSet);
            });
    }).done(function (evaluationReport) {
        console.log('Accuracy against the testset:\n', so(evaluationReport));
    });
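As I understand node-svm, `train` and `evaluate` also accept in-memory datasets shaped as `[[features, label], ...]` (check the library's README to confirm). A minimal parser for CSV lines like `65 134,+1` into that shape might look like this; `parseCsvLines` is a hypothetical helper, not part of node-svm:

```javascript
// Parse lines like "65 134,+1" into the dataset shape [[ [65, 134], 1 ], ...].
// Hypothetical helper; node-svm does not ship this.
function parseCsvLines(text) {
  return text.split('\n')
    .map(function (line) { return line.trim(); })
    .filter(function (line) { return line.length > 0; })
    .map(function (line) {
      var parts = line.split(',');                   // "65 134" and "+1"
      var features = parts[0].split(/\s+/).map(Number);
      var label = parseInt(parts[1], 10);            // +1 or -1
      return [features, label];
    });
}

var dataset = parseCsvLines('65 134,+1\n70 98,+1\n65 123,-1');
// dataset[0] is [[65, 134], 1]
```

Note that `evaluate` needs this same labelled shape for the test set, which the test file above does not have.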


Upvotes: 0

Views: 268

Answers (1)

anoc

Reputation: 26

Are your labels 1 and -1? If so, you will need to know those classes for your test data as well. The point of testing your classifier is to see how well it can predict unseen data.

As a small example, you could build your classifier with your training data:

    x_train = [[65, 134], [70, 98], ....... [79, 112], [83, 91]]
    y_train = [1, 1, .... -1, -1]

Then you test your classifier by passing in your test data. Say you pass in the first three examples of your test data and it makes the following predictions:

    [65, 111] --> 1
    [68, 88]  --> -1
    [70, 103] --> -1

You then tally up how many pieces of test data it predicted correctly, but to do that you need to know the true classes of your test data to begin with. If you don't have those, perhaps you want to try cross-validation on your training data instead.
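The tally described above is just a comparison of predicted labels against known true labels. A minimal sketch (the function name `accuracy` is mine):

```javascript
// Accuracy = fraction of test examples whose predicted label matches the true one.
function accuracy(predictions, trueLabels) {
  var correct = 0;
  for (var i = 0; i < predictions.length; i++) {
    if (predictions[i] === trueLabels[i]) { correct++; }
  }
  return correct / predictions.length;
}

// Predictions for the first three test rows vs. their (known) true labels.
var predicted = [1, -1, -1];
var actual    = [1,  1, -1];
console.log(accuracy(predicted, actual)); // 2 of 3 correct
```

Without `actual`, there is nothing to compare against, which is why unlabelled test data cannot measure accuracy.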

Upvotes: 1
