Reputation: 1
I am trying to fit an XGBoost model within the mlr framework. While the framework is fairly well documented, there are some specifics of the xgboost library that I cannot replicate within mlr, one in particular being the "base margin". With the xgboost library directly I would just set:
library(xgboost)

db_xgbmatrix <- xgb.DMatrix(db)
setinfo(db_xgbmatrix, "base_margin", margin)
and then I can carry on with training the model. In mlr, on the other hand, once I create the task and the learner:
library(mlr)

tsk <- makeRegrTask(data = db, target = target_var)
lrn <- makeLearner("regr.xgboost", predict.type = "response", eta = 0.1,
                   max_depth = 8, min_child_weight = 20,
                   subsample = 0.75, colsample_bytree = 0.75,
                   nrounds = 100, nthread = cl_n, objective = "count:poisson")
I'm not quite sure where the base margin should be set. Any ideas? Is that feature implemented and just hidden somewhere? Thank you all in advance.
Upvotes: 0
Views: 1554
Reputation: 609
This is finally supported in mlr3 and mlr3learners for xgboost (currently using the latest GitHub versions). See the example here. Setting the offset in the task also means that it is applied automatically during xgboost's internal tuning/validation as well.
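A minimal sketch of what that could look like, assuming a recent (GitHub) mlr3 and mlr3learners where the task's "offset" column role is forwarded to xgboost as the base margin; the data frame db and target name target_var come from the question, while the "margin" column name and the exact calls are illustrative assumptions, not a verified example:

library(mlr3)
library(mlr3learners)

# db holds the features, the target and a pre-computed margin column
task <- as_task_regr(db, target = target_var)

# assumption: the "offset" column role is passed to xgboost as base_margin
task$set_col_roles("margin", roles = "offset")

learner <- lrn("regr.xgboost",
               objective = "count:poisson", eta = 0.1, max_depth = 8,
               min_child_weight = 20, subsample = 0.75,
               colsample_bytree = 0.75, nrounds = 100)

learner$train(task)

Because the offset is attached to the task rather than the learner, any resampling or internal tuning that subsets the task should carry the margin along with it, which is what makes it apply during xgboost's internal validation as well.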
Upvotes: 0
Reputation: 109262
This isn't implemented in mlr. We don't have any plans to support it, but you're always welcome to contribute a pull request.
Upvotes: 0