shda

Reputation: 734

Different results for XGBoost using python api and scikit-learn wrapper

Here is example for agaricus sample data:

import xgboost as xgb
from sklearn.datasets import load_svmlight_files

X_train, y_train, X_test, y_test = load_svmlight_files(('agaricus.txt.train', 'agaricus.txt.test'))

clf = xgb.XGBClassifier()
param = clf.get_xgb_params()
clf.fit(X_train, y_train)
preds_sk = clf.predict_proba(X_test)

dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test)
bst = xgb.train(param, dtrain)
preds = bst.predict(dtest)

print preds_sk
print preds

And the results are:

[[  9.98860419e-01   1.13956432e-03]
 [  2.97790766e-03   9.97022092e-01]
 [  9.98816252e-01   1.18372787e-03]
 ..., 
 [  1.95205212e-04   9.99804795e-01]
 [  9.98845220e-01   1.15479471e-03]
 [  5.69522381e-04   9.99430478e-01]]

[ 0.21558253  0.7351886   0.21558253 ...,  0.81527805  0.18158565
  0.81527805]

Why are the results different? It seems that all default parameter values are the same. And I don't mean here that predict_proba returns [prob, 1- prob].

xgboost v0.6, scikit-learn v0.18.1, python 2.7.12

Upvotes: 2

Views: 1297

Answers (1)

slonopotam

Reputation: 1710

You need to pass num_boost_round parameter directly to xgb.train:

bst = xgb.train(param, dtrain, num_boost_round=param['n_estimators'])

because otherwise xgb.train ignores param['n_estimators'] and uses its default number of boosting rounds, which is currently 10 for the xgb.train interface, while the default for n_estimators in the scikit-learn wrapper is 100.

Upvotes: 3
