Similarly to the Ray dashboard, add the capability to generate a flame graph for each task submitted to the dask worker pool, and perhaps also some way to view the current stack trace of running processes. I know that Ray uses py-spy, but it would be worth taking a look at other profiling tools too, such as Scalene or Austin.

Motivation: in a concurrent, multiprocessing environment, e.g. when using dask clusters to run trials, it is hard to pinpoint exactly where bottlenecks occur, and where workers have to wait on each other or on the main process, either to release access to shared resources or to provide results such as suggestions for which configs to try next. These waits can hang under high contention on the surrogate model, on the multi-fidelity intensifier, or on something else altogether.
Check out #1169, #1170 and #1178 (comment) for the context in which this feature request has arisen.
Later edit: since Python 3.12, the interpreter can generate perf-compatible profiles on Linux. Here is a list of some profilers that are currently popular.
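As a stopgap until py-spy (or a similar sampling profiler) is wired into the worker pool, per-task profiles could be collected with the stdlib `cProfile` by wrapping the task callable before submission. This is only a minimal sketch: `profile_task` and `work` are hypothetical names, and the resulting stats would still need a separate tool to be rendered as a flame graph.

```python
import cProfile
import io
import pstats


def profile_task(fn, *args, **kwargs):
    """Run fn under cProfile and return (result, stats report).

    A wrapper like this could be applied to each callable submitted
    to the worker pool, so every task yields its own profile.
    """
    profiler = cProfile.Profile()
    profiler.enable()
    try:
        result = fn(*args, **kwargs)
    finally:
        profiler.disable()
    # Render the top entries by cumulative time into a string report.
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
    return result, buf.getvalue()


def work(n):
    # Placeholder for a real trial: some CPU-bound computation.
    return sum(i * i for i in range(n))


result, report = profile_task(work, 10_000)
```

Unlike py-spy, this approach requires cooperation from the task code (it profiles only the wrapped callable, not arbitrary running processes), but it needs no external dependencies and works on all platforms.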