
eliminate unnecessary MPI_Allreduce calls in GMRES solver #74

Open · alecjohnson (Contributor) opened this issue Aug 18, 2014 · 0 comments

GMRES() contains the following loop:

      const double av = normP(w, xkrylovlen);   // global norm: one MPI_Allreduce
      for (register int j = 0; j <= k; j++) {
        v = V[j];
        H[j][k] = dotP(w, v, xkrylovlen);       // global dot product: one MPI_Allreduce
        addscale(-H[j][k], w, v, xkrylovlen);   // w -= H[j][k] * v
      }
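
For context, dotP() and normP() presumably each combine a local computation with their own global reduction. A minimal sketch of such wrappers (the names and signatures follow the snippet above, but the bodies are an assumption, not iPic3D's actual code):

      // Sketch only: each "parallel" primitive hides one MPI_Allreduce.
      #include <math.h>
      #include <mpi.h>

      static double dotP(const double *a, const double *b, int n) {
        double local = 0.0;
        for (int i = 0; i < n; i++)
          local += a[i] * b[i];
        double global = 0.0;
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        return global;
      }

      static double normP(const double *a, int n) {
        return sqrt(dotP(a, a, n));   // a second MPI_Allreduce via dotP
      }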

This is an inner loop within an outer loop in which k runs from 0 to m-1 (the code chooses m=19).
Since each dotP() and normP() call performs its own global reduction, 210 calls are made to MPI_Allreduce.
In exact arithmetic the basis vectors V[j] are mutually orthogonal, so subtracting the projection of w onto V[i] does not change its dot product with any other V[j]; all the dot products (and the squared norm of w) can therefore be computed locally first and combined with a single call to MPI_Allreduce():

      for (register int j = 0; j <= k; j++) {
        y[j] = dot(w, V[j], xkrylovlen);   // local (unreduced) dot product
      }
      y[k+1] = norm2(w, xkrylovlen);       // local (unreduced) sum of squares
      // combine all k+2 partial results in a single global reduction
      MPI_Allreduce(MPI_IN_PLACE, y, k + 2,
        MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
      for (register int j = 0; j <= k; j++) {
        H[j][k] = y[j];
        addscale(-H[j][k], V[k+1], V[j], xkrylovlen);  // w (stored in V[k+1]) -= H[j][k] * V[j]
      }
      const double av = sqrt(y[k+1]);      // global norm of w
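
The snippet above assumes local (unreduced) counterparts dot() and norm2() of dotP() and normP(). A minimal sketch of what those helpers might look like (again an assumption, not code from iPic3D):

      // Sketch only: purely local kernels, no MPI communication.
      static double dot(const double *a, const double *b, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
          sum += a[i] * b[i];
        return sum;            // caller performs the global reduction
      }

      static double norm2(const double *a, int n) {
        return dot(a, a, n);   // local sum of squares; take sqrt after reducing
      }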

This reduces the number of MPI_Allreduce calls per restart of the GMRES algorithm from 250 to 40 (roughly two per iteration: the single batched reduction above plus one for the norm of the orthogonalized w).

alecjohnson added a commit to alecjohnson/iPic3D that referenced this issue Aug 18, 2014