Issue when verbosity > 0, and a test does not pass on my Mac #28
Comments
About the test failure, I've just made a comparison between EvoTrees.jl and MLJXGBoostInterface.jl on the iris dataset below. I've tried to keep the hyperparameters similar but can't see much difference by modifying them; please let me know if there is anything wrong with this experiment.

```julia
using MLJ, MLJXGBoostInterface, EvoTrees, DataFrames, StableRNGs

rng = StableRNG(123)
iris = DataFrame(load_iris())
X = iris[!, Not(:target)]
y = iris[!, :target]

mach1 = machine(
    XGBoostClassifier(num_round=100, max_depth=6, lambda=0., max_bin=32, eta=0.1),
    X, y)
MLJ.evaluate!(mach1, measure=log_loss, resampling=StratifiedCV(nfolds=3, rng=rng), verbosity=0)

mach2 = machine(
    EvoTreeClassifier(nrounds=100, max_depth=6, lambda=0., nbins=32, eta=0.1),
    X, y)
MLJ.evaluate!(mach2, measure=log_loss, resampling=StratifiedCV(nfolds=3, rng=rng), verbosity=0)
```

The two `evaluate!` reports (columns: measure, operation, measurement, 1.96*SE, per_fold) are trimmed to the headline numbers: 3.42 vs 0.355 seems like a huge gap in performance.
Can you show the exact error messages for the failures on Linux? I test on Linux with Julia 1.8.3 locally, and CI/CD runs on Linux with 1.8.3 and 1.6, so it's surprising to me that tests would fail on Linux.
Oh, sorry, I see you linked the line. That's pretty weird. I guess it might be stochastic since we are not using a stable RNG. We probably need to make that line less stringent, maybe make it …
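For example (purely hypothetical names and numbers, not the actual test), a tolerance-based check instead of a tight comparison:

```julia
using Test

# Hypothetical stand-ins; the real test compares different quantities
# and may need a different tolerance.
expected = 0.35
measured = 0.37

# relative tolerance instead of an exact or very tight comparison
@test isapprox(measured, expected; rtol = 0.1)
```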
Yes, that is also what I thought, which is why I carried out the performance comparison on the iris dataset. Any idea what would explain the result?
Oh, this package does use StableRNG... in that case I'm also confused. It's possible that version differences in XGBoost_jll could explain it.
It is XGBoost_jll v1.7.2+0, which seems to be the latest.
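For reference, one way to check which XGBoost_jll version an environment has resolved (a minimal sketch):

```julia
using Pkg

# Print the resolved version of the XGBoost binary wrapper in the active environment.
Pkg.status("XGBoost_jll")
```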
Could you try to run my previous example (iris dataset) to see if you get similar results? Otherwise I would need to work towards something more reproducible.
@ExpandingMan, could you also comment on the verbosity logic? It also looks suspicious to me.
Gah, I kind of hate GitHub's auto-closing feature.
Both these issues should be solved by #29. Sorry about that; not sure how I didn't realize how serious those issues were. Please close if all looks good.
The posted issue is resolved. |
Hi,
I think there's been quite some work going on here; thank you for porting XGBoost to Julia. I think there is an issue when the verbosity is set to be greater than 0. To reproduce:
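Something along these lines (a minimal sketch; it assumes the MLJ interface and the iris dataset, and the exact call that triggers it may differ):

```julia
# Minimal sketch: a fit through the MLJ interface with verbosity > 0
# and no watchlist supplied should reach the condition discussed below.
using MLJ, MLJXGBoostInterface

X, y = @load_iris
mach = machine(XGBoostClassifier(num_round=10), X, y)
fit!(mach, verbosity = 1)
```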
Perhaps

`(verbosity ≤ 0 || isnothing(model.watchlist))`

instead of

`(verbosity ≤ 0 && isnothing(model.watchlist))`

was meant here.
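To spell out the difference: when verbosity is positive and no watchlist has been supplied, the `&&` form is false while the `||` form is true. A toy illustration (not the package's actual code):

```julia
# `watchlist` stands in for model.watchlist; `nothing` means no watchlist was given.
verbosity = 1
watchlist = nothing

verbosity ≤ 0 && isnothing(watchlist)   # false: this guard never fires once verbosity > 0
verbosity ≤ 0 || isnothing(watchlist)   # true: this guard still fires when no watchlist is given
```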
Also, I don't think this is related, but this test (line 114) does not pass locally on my Mac with Julia 1.8.1 or on a Linux distro with Julia 1.8.2.