MPI Parallelization? #45

Open
muhammadhasyim opened this issue May 28, 2020 · 2 comments

@muhammadhasyim

Dear @martinjrobins,

Thank you for developing Aboria! I've thoroughly enjoyed using this library for analyzing molecular dynamics data in my work. One of the things I've been curious about is whether you are considering adding MPI parallelization support to Aboria.

I suppose this is a "bizarre" request because it doesn't quite fit the goal of Aboria. After all, it's a lightweight header-only library, and GPU parallelization is supported externally through Thrust. A lot of my work requires normal mode analysis and numerical linear algebra on datasets derived from particle simulations. The libraries I've been using to perform these calculations efficiently for large systems (PETSc and SLEPc) are parallelized using MPI.

Of course, I can use Aboria in an MPI+OpenMP program (that's what I've actually been doing), but in terms of memory management I feel this is relatively awkward. For instance, it would be great if the memory for particle data could be distributed.

Anyway, I apologize if this is a weird request/question, but I hope you will consider these thoughts as possible future features for Aboria!

All the best,

Muhammad

@martinjrobins
Collaborator

Hi @muhammadhasyim, thanks for getting in touch regarding Aboria, and I'm glad it's been helping with your work! :) I don't have any plans to add MPI parallelization at the moment, but it is certainly something that would be useful to add, and I'd be happy if anyone else wanted to give it a try!

In terms of how it could be done, most of the spatial data structures and the particle container are based on having an underlying vector type, with associated algorithms (e.g. transform, accumulate, scan, etc.) that operate on that vector. That is how it swaps between using std::vector and thrust::device_vector. So the challenge would be to find/implement an MPI distributed vector that satisfies the std::vector API, and to implement the required distributed algorithms. I think HPX was heading in this direction the last time I looked (https://github.com/STEllAR-GROUP/hpx), but I'm not sure how far they got.
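
To make that concrete, here is a rough, untested sketch of the backend-vector pattern I mean. The names `particle_container` and `transform_positions` are purely illustrative, not Aboria's actual API:

```cpp
// Illustrative sketch only: a container parameterised on its storage type.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

// Aboria-style idea: the container is templated on the backend vector, and
// all algorithms go through the generic iterator interface, so the backend
// can be swapped (std::vector, thrust::device_vector, or some future
// MPI-distributed vector with the same interface).
template <typename Vector>
struct particle_container {
    Vector positions;

    explicit particle_container(std::size_t n) : positions(n) {}

    // Apply a function to every position via the backend's iterators.
    template <typename F>
    void transform_positions(F f) {
        std::transform(positions.begin(), positions.end(),
                       positions.begin(), f);
    }

    // Reduce over the backend's iterators.
    double sum_positions() const {
        return std::accumulate(positions.begin(), positions.end(), 0.0);
    }
};

int main() {
    // Host backend: plain std::vector<double>.
    particle_container<std::vector<double>> particles(5);
    std::iota(particles.positions.begin(), particles.positions.end(), 0.0);

    particles.transform_positions([](double x) { return 2.0 * x; });
    std::cout << "sum = " << particles.sum_positions() << "\n";  // prints 20
    return 0;
}
```

A GPU build would instantiate the same kind of container with thrust::device_vector and Thrust's algorithms; an MPI backend would need a distributed vector exposing the same interface plus distributed versions of transform/accumulate/scan.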

@muhammadhasyim
Author

Hi @martinjrobins, thanks for the reply!

The HPX project is very interesting. This is actually the first time I've heard of it, and I'm very impressed with what they have. The only other project that might serve our MPI parallelization purpose is DASH (https://github.com/dash-project/dash), which I discovered a few months ago through this paper (https://arxiv.org/pdf/1610.01482.pdf). On the surface, DASH is smaller in scope, but perhaps it will fit our purpose better(?). Anyway, if either library has a vector type with all the algorithms needed, then I'm perfectly fine with using either of them to implement the MPI distributed vector.
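
For concreteness, here is the kind of "MPI distributed vector" I have in mind — a very rough, untested sketch using plain MPI, where each rank owns a local std::vector and distributed algorithms are layered on top. The class `distributed_vector` is hypothetical, not taken from Aboria, HPX, or DASH:

```cpp
// Hypothetical sketch; compile with an MPI wrapper, e.g. `mpicxx sketch.cpp`.
#include <mpi.h>
#include <iostream>
#include <numeric>
#include <vector>

// std::vector-like locally, MPI-aware for global (distributed) algorithms.
template <typename T>
class distributed_vector {
public:
    explicit distributed_vector(std::size_t local_n) : local_(local_n) {}

    // Local, std::vector-style access (what the container's algorithms use).
    T& operator[](std::size_t i) { return local_[i]; }
    std::size_t local_size() const { return local_.size(); }
    auto begin() { return local_.begin(); }
    auto end() { return local_.end(); }

    // Example of a "distributed algorithm": a global accumulate, i.e. a
    // local std::accumulate followed by an MPI_Allreduce.
    // MPI_DOUBLE is hard-coded for brevity; a real implementation would
    // map T to the matching MPI datatype.
    T accumulate(T init) const {
        T local_sum = std::accumulate(local_.begin(), local_.end(), init);
        T global_sum{};
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);
        return global_sum;
    }

private:
    std::vector<T> local_;
};

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Each rank owns 4 elements, filled with rank-dependent values.
    distributed_vector<double> v(4);
    std::iota(v.begin(), v.end(), 4.0 * rank);

    const double total = v.accumulate(0.0);
    if (rank == 0) {
        std::cout << "global sum = " << total << "\n";
    }

    MPI_Finalize();
    return 0;
}
```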

I'll stick with what I have for now. I am planning to start a new project for some Monte Carlo simulations, so perhaps I'll think about this topic again when I get there.
