Reputation: 10383
I want to classify the data shown in the image:
To do so I'm trying to use an SVM:
from sklearn import svm

X = df[['score','word_lenght']].values  # feature columns from my dataframe
Y = df['is_correct'].values             # binary target

clf = svm.SVC(kernel='linear', C=1.0)
clf.fit(X, Y)
clf.coef_  # weights of the fitted separating hyperplane

clf = svm.SVC(kernel='linear')  # refit with default C (also 1.0)
clf.fit(X, Y)
This is the result I'm getting:
But I would like a more flexible model, like the red one, or if possible something like the blue line. Which parameters could I tune to get closer to the desired boundary?
Also, I don't quite understand how the scale of the vertical (yy) axis is determined; it comes out far too large.
import numpy as np

# Decision boundary: solve w[0]*x + w[1]*y + b = 0 for y.
w = clf.coef_[0]
a = -w[0] / w[1]  # slope of the separating line
xx = np.linspace(0.85, 1)
yy = a * xx - clf.intercept_[0] / w[1]
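(My guess at the scale issue: solving w[0]*x + w[1]*y + b = 0 for y gives y = -(w[0]/w[1])*x - b/w[1], so if |w[1]| is small relative to |w[0]|, both the slope and the intercept blow up, and yy covers a huge range even though xx only spans [0.85, 1].)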
Upvotes: 2
Views: 1392
Reputation: 103
As a first step, if the data are of a reasonable size, you can try a grid search. Since you apparently are working with text, consider this example:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

def main():
    # Vectorize the raw text with tf-idf, then classify with an RBF SVM.
    pipeline = Pipeline([
        ('vect', TfidfVectorizer(ngram_range=(2, 2), min_df=1)),
        ('clf', SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
                    degree=3, gamma=1e-3, kernel='rbf', max_iter=-1,
                    probability=False, random_state=None, shrinking=True,
                    tol=0.001, verbose=False)),
    ])
    # Candidate values for both the vectorizer and the classifier.
    parameters = {
        'vect__max_df': (0.25, 0.5),
        'vect__use_idf': (True, False),
        'clf__C': [1, 10, 100, 1000],
    }
    # X and Y are the documents and labels from your dataframe.
    X_train, X_test, y_train, y_test = train_test_split(X, Y, train_size=0.5)
    grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1,
                               scoring='accuracy')
    grid_search.fit(X_train, y_train)
    print('Best score: %0.3f' % grid_search.best_score_)
    print('Best parameters set:')
    best_parameters = grid_search.best_estimator_.get_params()
    for param_name in sorted(parameters.keys()):
        print('\t%s: %r' % (param_name, best_parameters[param_name]))

if __name__ == '__main__':
    main()
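To address the flexibility question directly: with kernel='rbf', gamma controls how curved the decision boundary can get (larger gamma means a wigglier boundary, closer to the red/blue lines you describe) and C controls how strongly misclassifications are penalized. A minimal sketch, using the two numeric columns from your question instead of text (X_num and y_num are just illustrative names, and df is assumed to be your dataframe):
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# The same feature columns as in the question.
X_num = df[['score','word_lenght']].values
y_num = df['is_correct'].values

param_grid = {
    'C': [0.1, 1, 10, 100],       # soft-margin penalty: larger C fits the training data harder
    'gamma': [0.01, 0.1, 1, 10],  # RBF width: larger gamma allows a wigglier boundary
}
search = GridSearchCV(SVC(kernel='rbf'), param_grid, scoring='accuracy', cv=5)
search.fit(X_num, y_num)
print(search.best_params_)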
Note that in the first example I vectorized my data (text) with tf-idf. The scikit-learn project also implements RandomizedSearchCV, which samples parameter settings instead of exhausting the grid (see the sketch below). Finally, there are other interesting tools, like the TPOT project, which uses genetic programming. Hope this helps!
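A minimal RandomizedSearchCV sketch, reusing the X_num and y_num arrays from the previous snippet; the loguniform ranges are just my choice of search space, not something derived from your data:
from scipy.stats import loguniform
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

# Sample 20 (C, gamma) combinations instead of trying every grid point.
param_distributions = {
    'C': loguniform(1e-1, 1e3),
    'gamma': loguniform(1e-4, 1e1),
}
search = RandomizedSearchCV(SVC(kernel='rbf'), param_distributions,
                            n_iter=20, scoring='accuracy', cv=5,
                            random_state=0)
search.fit(X_num, y_num)
print(search.best_params_)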
Upvotes: 4