Speed up dcf calculation #169
scipy.spatial's Qhull does not seem to work in parallel across multiple threads; using processes makes it ~4x faster on my notebook. Would still prefer a GPU version.
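A minimal sketch of the process-based variant described above, assuming the per-cell work is a `scipy.spatial.ConvexHull` volume computation; the `cells` data and function names here are illustrative, not mrpro's actual structure:

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from scipy.spatial import ConvexHull


def hull_volume(points: np.ndarray) -> float:
    """Volume of the convex hull of one cell's vertices (runs Qhull)."""
    return ConvexHull(points).volume


def cell_volumes(cells: list) -> list:
    """Map hull_volume over all cells in worker *processes*.

    Qhull calls do not parallelize across threads, so processes are
    used; chunksize amortizes the per-task pickling overhead.
    """
    with ProcessPoolExecutor() as pool:
        return list(pool.map(hull_volume, cells, chunksize=64))
```

With a `ThreadPoolExecutor` the same map stays effectively serial, which matches the speedup from switching to processes reported above.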
The issue is that the number of vertices in each Voronoi cell differs, which makes vectorization difficult.
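The ragged structure is easy to see directly from scipy (illustrative snippet, not mrpro code):

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
vor = Voronoi(rng.random((20, 2)))

# Each entry of vor.regions lists the vertex indices of one cell
# (-1 marks an unbounded direction). The lengths differ from cell to
# cell, so the per-cell volumes cannot be computed with one
# fixed-shape, vectorized call.
lengths = sorted({len(region) for region in vor.regions if region})
print(lengths)
```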
For 3D there is either https://github.com/eleftherioszisis/tess, which unfortunately does not have CD set up (there is an unmerged, uncommented PR implementing compilation and pushing to PyPI), or maybe something will come out of this request: scipy/scipy#20118
Scratch that. At least the tess Python wrapper is way too inefficient. Maybe directly interfacing voro++ could be fast enough.
Currently the dcf calculation is very slow, specifically this part: https://github.com/PTB-MR/mrpro/blob/025bc796b845aabdf255b2ed584f0ff26ae0cb28/src/mrpro/data/_DcfData.py#L118C18-L118C36

I tried to exchange the `ThreadPoolExecutor` with `multiprocessing.Pool`, but this did not make any difference.

Possible ways to solve this:
- Find a PyTorch-compatible way of calculating Voronoi cells, or
- at least replace `ConvexHull(v).volume` with a function running on the GPU.
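One middle ground, sketched under the assumption that only the volume step needs accelerating: let Qhull build the facets on the CPU, but compute the volume as a single batched determinant, an operation that maps directly onto `torch.linalg.det` on the GPU. NumPy is used here so the sketch stays self-contained, and `hull_volume_batched` is a hypothetical name, not mrpro API:

```python
from math import factorial

import numpy as np
from scipy.spatial import ConvexHull


def hull_volume_batched(points: np.ndarray) -> float:
    """Convex-hull volume via one batched determinant.

    Shifting every facet simplex so that one hull vertex sits at the
    origin decomposes the hull into simplices (origin, facet); facets
    that touch the origin contribute a zero determinant automatically,
    so summing absolute simplex volumes gives the hull volume.
    """
    hull = ConvexHull(points)
    d = points.shape[1]
    origin = points[hull.vertices[0]]
    simplices = points[hull.simplices] - origin  # shape (n_facets, d, d)
    return float(np.abs(np.linalg.det(simplices)).sum() / factorial(d))
```

This only offloads the volume arithmetic; constructing the hull itself still runs on the CPU via Qhull, so it is a partial answer to replacing `ConvexHull(v).volume` rather than a full GPU Voronoi computation.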