colapply only raises errors during parallel optimization #886
@mb706 This looks comparable to #884.
This is actually an mlr3 issue. I will see if I can find a way to trigger this without mlr3pipelines. There should be a way to run the necessary package loading on the workers: invisible(future.apply::future_lapply(seq_len(4), function(x) library("mlr3verse"))). @advieser, can you check whether this is a viable workaround? We need to talk about the deeper issue here internally.
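A minimal sketch of the suggested workaround, assuming a multisession plan with four workers (the worker count and the seq_len(4) trick are taken from the comment above; with the default scheduling, four elements over four workers means each worker session runs library() once):

library(future)
library(future.apply)

# Start four background R sessions.
plan(multisession, workers = 4)

# Attach the required package in each worker session up front, so that
# methods registered on package attach are available during parallel
# optimization; invisible() discards the returned list.
invisible(future_lapply(seq_len(4), function(x) library("mlr3verse")))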
@Vinnish-A First of all, thanks for raising the issue! The following code should work:

library(dplyr)
library(survival)
library(mlr3verse)
library(mlr3proba)
library(mlr3extralearners)
lung_filter = lung |> select(-sex, -ph.ecog)
task = TaskSurv$new(id = "lung", backend = lung_filter, time = "time", event = "status")
learner_xgbcox = lrn('surv.xgboost.cox')
learner_xgbcox$param_set$set_values(
tree_method = 'hist',
device = 'cuda',
booster = 'gbtree',
nrounds = to_tune(p_int(128, 512, tags = 'budget')),
eta = to_tune(1e-4, 1, logscale = TRUE),
gamma = to_tune(1e-5, 7, logscale = TRUE),
max_depth = to_tune(1, 20),
colsample_bytree = to_tune(1e-2, 1),
colsample_bylevel = to_tune(1e-2, 1),
lambda = to_tune(1e-3, 1e3, logscale = TRUE),
alpha = to_tune(1e-3, 1e3, logscale = TRUE),
subsample = to_tune(1e-1, 1)
)
prep_xgbcox = po('removeconstants') %>>%
po('colapply', applicator = as.integer, affect_columns = selector_type('factor'))
glearner_xgbcox = prep_xgbcox %>>% learner_xgbcox
tuner_xgbcox = tnr('hyperband', eta = 2, repetitions = 1)
instance_xgbcox = ti(
task = task,
learner = glearner_xgbcox,
resampling = rsmp('cv', folds = 3),
measures = msr('surv.cindex'),
terminator = trm('evals', n_evals = 25)
)
future::plan('multisession', workers = 4)
invisible(future.apply::future_lapply(seq_len(4), function(x) library("mlr3fselect")))
tuner_xgbcox$optimize(instance_xgbcox)

This uses the workaround from @mb706. Also note that I loaded mlr3fselect on the workers.
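To illustrate what the colapply step in the pipeline above does: applying as.integer to a factor column replaces each value with its integer level code (base-R behavior; the example data below is made up for illustration):

# as.integer() on a factor returns the level codes, which is what
# po('colapply', applicator = as.integer,
#    affect_columns = selector_type('factor'))
# applies to every factor column of the task.
x = factor(c("low", "high", "low", "medium"))
levels(x)      # "high" "low" "medium" (sorted alphabetically by default)
as.integer(x)  # 2 1 2 3

Note that the resulting integers reflect alphabetical level order, not any inherent ordering of the categories; for ordinal features you may want to set the factor levels explicitly before conversion.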
Thank you very much for your reply! The modified code works for me.
description

When using a multisession plan with future, a graph learner containing colapply raises an error that never occurred before and does not occur in sequential mode.

error message
OS
windows, wsl2, ubuntu22.04
version
mlr3verse updated to newest:
how to reproduce