Stack cache and acceleration #767
Conversation
Codecov Report
```
@@            Coverage Diff             @@
##              dev     #767      +/-   ##
==========================================
+ Coverage   85.85%   85.96%   +0.11%
==========================================
  Files          36       36
  Lines        3451     3471      +20
==========================================
+ Hits         2963     2984      +21
+ Misses        488      487       -1
```
Continue to review full report at Codecov.
@olivierlabayle This is great, thank you. Since this adds [...]

Note that [...]. Probably we can just wrap these tests in an [...]. I don't think we need to check that both options give the same results. You already do this for [...].
My comments and test suggestions notwithstanding, I'm already happy with this "draft" PR.
@ablaom Thank you for this early review. I had indeed only marked this as a draft because I wanted to be a bit more thorough about the testing of the new multithreading feature. I will need to have a proper look at the [...].

I have extended the testset you suggested with [...].
👍🏾 Probably that macro was written before that syntax was available?? I didn't know about it myself.
I'm happy with this, @olivierlabayle
I'm going to recommend that @OkonSamuel also reviews this, as he is more familiar with multithreading issues. He is pretty busy, but I think it's worth getting an expert to look over this.
Following the suggestion of @OkonSamuel, we should test that multithreading works for a variety of component models. I am working on some enhancements of MLJTestIntegration.jl to automate such a test, and will check back here when that is ready.

Great idea, I'll be waiting then!
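The kind of check being discussed can be sketched as below. This is a hypothetical illustration, not a test from this PR: `FooRegressor` and `BarRegressor` stand in for arbitrary MLJ component models, and the comparison assumes a deterministic metalearner so that predictions are plain vectors.

```julia
# Hypothetical sketch: fit the same Stack serially (CPU1) and with
# multithreading (CPUThreads) and check the predictions agree.
using MLJBase, Test

X, y = make_regression(100, 3)  # synthetic regression data from MLJBase

# Fit a stack under the given acceleration mode and return its predictions.
function stack_predictions(acceleration)
    stack = Stack(metalearner=FooRegressor(),   # placeholder model
                  resampling=CV(nfolds=3),
                  acceleration=acceleration,
                  foo=FooRegressor(),           # placeholder base models
                  bar=BarRegressor())
    mach = machine(stack, X, y)
    fit!(mach, verbosity=0)
    predict(mach, X)
end

@test stack_predictions(CPU1()) ≈ stack_predictions(CPUThreads())
```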
```diff
@@ -54,7 +58,7 @@ const Stack{modelnames, inp_scitype, tg_scitype} =
                   ProbabilisticStack{modelnames, inp_scitype, tg_scitype}}

 """
-    Stack(;metalearner=nothing, resampling=CV(), name1=model1, name2=model2, ...)
+    Stack(;metalearner=nothing, resampling=CV(), name1=model1, cache=true, acceleration=CPU1(), name2=model2, ...)
```
This doc-string got a little mangled, with `model1` and `model2` being separated. As this has grown past the length of the line, how about we not list all the options here:

```diff
-    Stack(;metalearner=nothing, resampling=CV(), name1=model1, cache=true, acceleration=CPU1(), name2=model2, ...)
+    Stack(;metalearner=nothing, name1=model1, name2=model2, keyword_options...)
```
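For concreteness, the signature above can be used as in the following sketch. The keyword names `cache` and `acceleration` come from the docstring in the diff; `judge`, `model1`, and `model2` are placeholders for whatever MLJ component models you have loaded:

```julia
# Constructing a Stack with the two new keyword options added by this PR.
stack = Stack(metalearner = judge,          # placeholder metalearner
              resampling = CV(nfolds=3),
              cache = false,                # disable submachine data caching
              acceleration = CPUThreads(),  # fit base models on threads
              m1 = model1,                  # placeholder base models
              m2 = model2)
```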
Commits:
- add stack_evaluation; needs JuliaAI/MLJBase.jl#767
- rm target_scitype arg from stack_evaluation
- put stack test into test()
- oops
- fix some bugs
- separate out :accelerated_stack_evaluation test
- more tweaks
- oops
@olivierlabayle I needed to rebase this and am closing in favour of #785. The rebase includes a fix for the doc string marked above.
Following up on: #759
This adds two fields to the Stack to enable/disable submachine caching and the network's acceleration:

- `cache`: whether submachines will cache data or not.
- `acceleration`: acceleration mode of the learning network.

Added by @ablaom: This PR allows one to train a learning network node using multithreading, as in `fit!(node, acceleration=CPUThreads())`.
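A minimal sketch of that usage, assuming a learning network node `node` has already been built with MLJBase (the node itself is not constructed here):

```julia
using MLJBase

# Train the machines in the node's learning network on multiple threads,
# per the acceleration hook this PR enables:
fit!(node, acceleration=CPUThreads())

# The default serial behaviour is unchanged:
fit!(node, acceleration=CPU1())
```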