
create separate solver classes for fields and particles #57

alecjohnson opened this issue Oct 16, 2013 · 0 comments

The c_Solver class should be subdivided into two classes: a FieldSolver class and a ParticleSolver class.

This fundamental restructuring is prompted by the port to the DEEP architecture, where we will solve fields on the Cluster and particles on the Booster. However, separating the two solvers brings flexibility benefits that extend far beyond DEEP (a minimal interface sketch follows the list below). The benefits include:

  • ability to replace the particle solver with a fluid model. This is important for coupling iPic3D to fluid models, as SWIFF aims to do. The implicit moment method can be made asymptotic-preserving with respect to fluid models by relaxing the particle distribution to a Maxwellian distribution, e.g. by resampling particles in each mesh cell.
  • ability to replace the field solver with another field solver. For the purpose of coupling to MHD, it is important to be able to replace the field solver with an MHD field solver that calculates E from Ohm's law rather than evolving it (or to transition smoothly to such a solver). It would also be possible to replace the field solver with an IMEX scheme. In both of these cases the field solve is completely local and involves no communication, so the field solve, along with everything else, could be kept on the Booster side.
  • ability to use a separate set of MPI processes for fields and for particles, with a different number of processes for each, so that each of these two very different tasks can be tuned optimally.
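To make the proposed split concrete, here is a minimal sketch of how the two classes could expose a narrow exchange interface. All names here (`FieldData`, `MomentData`, the method names) are hypothetical illustrations, not the existing `c_Solver` API; the point is that each solver owns only its own data and talks to the other side through explicit transfer calls.

```c++
// Sketch only: hypothetical interfaces for the proposed FieldSolver/ParticleSolver split.
#include <vector>

struct FieldData  { std::vector<double> Ex, Ey, Ez, Bx, By, Bz; };
struct MomentData { std::vector<double> Jxh, Jyh, Jzh;   // hatted current
                    std::vector<double> rhons; };        // density of each species

class FieldSolver {            // runs on the Cluster
public:
  void receiveMoments(const MomentData& m) { moments_ = m; }
  void advanceFields() { /* implicit field solve using moments_ (omitted) */ }
  FieldData fieldsForParticles() const { return fields_; }  // smoothed fields for the push
private:
  FieldData  fields_;
  MomentData moments_;
};

class ParticleSolver {         // runs on the Booster
public:
  void receiveFields(const FieldData& f) { fields_ = f; }
  void moveParticlesAndSumMoments() { /* particle push + moment accumulation (omitted) */ }
  MomentData momentsForFields() const { return moments_; }
private:
  FieldData  fields_;
  MomentData moments_;
};

int main() {
  FieldSolver    fieldSolver;
  ParticleSolver particleSolver;
  for (int cycle = 0; cycle < 10; ++cycle) {
    particleSolver.receiveFields(fieldSolver.fieldsForParticles());
    particleSolver.moveParticlesAndSumMoments();
    fieldSolver.receiveMoments(particleSolver.momentsForFields());
    fieldSolver.advanceFields();
  }
}
```

On DEEP, the two transfer calls would be backed by Cluster↔Booster communication rather than in-process copies, but the same interface also covers the single-machine and fluid-coupling cases listed above.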

The data that needs to be exchanged between these two classes is:

  • FieldSolver → ParticleSolver:

    the electric field: Ex, Ey, Ez.

    While the magnetic field is also needed on the particle side to push the particles, it is easily updated from the electric field, so there is no need to transfer magnetic field data except on the first iteration of the solver. The only conceivable reason that magnetic field data might need to be sent again would be drift between the copies updated on each side due to accumulated machine arithmetic error, but such error should accumulate very slowly. (See the follow-up below, which revises this decision.)

  • ParticleSolver → FieldSolver:
    • For IMM, the data that must be sent consists of the hatted moments Ĵx, Ĵy, and Ĵz and the density ρs of each species. The densities ρs are used on the Cluster side to construct the "implicit susceptibility" tensor χ that characterizes the response of the average current to the average electric field. Note that the hatted moment ρ̂ can be constructed from Ĵ and ρs. In the case of a large number (> 8) of species densities to send (as in a dusty plasma), one would instead calculate the 9 components of χ on the Booster side and send those.

      The arrays in the code that need to be sent are `Jxh`, `Jyh`, `Jzh`, and `rhons` (a sketch of this transfer follows the list).

      Note that `rhoc` (used in the Poisson correction) is computed from `rho`, that `rho` is computed by summing over `rhons`, and that `rhoh` is easily computed from `rhoc` and the hatted current `Jxh`, `Jyh`, `Jzh`.
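For illustration, here is a sketch of how the Booster-side moment data could be packed and shipped to the Cluster in a single MPI message. The buffer layout, rank argument, tag, and function name are hypothetical; only the array names `Jxh`, `Jyh`, `Jzh`, `rhons` come from the code.

```c++
// Sketch only: Booster -> Cluster transfer of the hatted currents and species densities.
#include <mpi.h>
#include <vector>

void sendMomentsToCluster(const std::vector<double>& Jxh,
                          const std::vector<double>& Jyh,
                          const std::vector<double>& Jzh,
                          const std::vector<double>& rhons,  // ns * nNodes values
                          int clusterRank, MPI_Comm comm)
{
  // Pack everything into one contiguous buffer so a single message suffices.
  std::vector<double> buf;
  buf.reserve(Jxh.size() + Jyh.size() + Jzh.size() + rhons.size());
  buf.insert(buf.end(), Jxh.begin(), Jxh.end());
  buf.insert(buf.end(), Jyh.begin(), Jyh.end());
  buf.insert(buf.end(), Jzh.begin(), Jzh.end());
  buf.insert(buf.end(), rhons.begin(), rhons.end());
  // The tag is arbitrary; a real implementation would likely use a nonblocking
  // send so the Booster can proceed with the next particle push.
  MPI_Send(buf.data(), static_cast<int>(buf.size()), MPI_DOUBLE,
           clusterRank, /*tag=*/57, comm);
}
```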

Follow-up: communicate the magnetic field separately, in addition to the electric field. I have decided to modify the above proposal: as soon as E or B is updated, calculate and send the smoothed version needed to push the particles, and upon receiving this data on the Booster side, use it to construct the SoA fields needed to push the particles.

Justification. While it is technically possible to communicate only the electric field from the field solver to the particle solver, it will be simpler and almost certainly more efficient to communicate both fields. The reason is that the field used to push particles must be a modified version of the field evolved in the field solver: smoothing needs to be applied to the field several times (currently 3) before it can be used to push particles, and each smoothing pass requires an exchange of boundary data. Moreover, updating the magnetic field from the electric field requires calculating a curl, which also requires an exchange of boundary data (see the sketch below). Even a 10x10x10 mesh is about half boundary cells (10³ − 8³ = 488 of 1000), and unless the dimensions of the subgrid are unfeasibly large, a substantial fraction of the mesh cells are boundary cells. The transformations that prepare the field data for the particle solver therefore involve successive synchronizations and are more communication-intensive than the mere transfer of the magnetic field components.

By transferring the magnetic field directly along with the electric field, we can do all of these transformations on the Cluster, where we expect them to be performed more efficiently, without repeating any of them. We could even perform on the Cluster the transposition of the field data to the AoS format preferred for cache performance in the particle mover (see issue #52). On the other hand, transposition is much less expensive than communication, and doing the transposition on the Booster allows the magnetic field to be sent separately and avoids communicating the 33% of additional padding (garbage) data that the AoS format needs for alignment. In the current algorithm the old field is used to push particles (for consistency with the implicit moment field update), so sending the magnetic field as soon as it is updated means that its communication can occupy an entire cycle of the algorithm before it needs to complete.
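To illustrate the boundary-data cost of updating B from E on the Booster, here is a sketch of the discrete Faraday update B ← B − Δt ∇×E on a simple nodal grid. The grid layout, the `idx` helper, and the commented-out `exchangeHalo` call are hypothetical, and iPic3D's actual staggering and stencils differ; the point is only that the outermost owned layer needs E values from the neighbouring subdomain, which is exactly the boundary exchange that sending B directly avoids.

```c++
// Sketch only: B update via the discrete curl of E (central differences).
#include <vector>

struct Grid { int nx, ny, nz; double dx, dy, dz; };

inline int idx(const Grid& g, int i, int j, int k) {
  return (i * g.ny + j) * g.nz + k;
}

// B^{n+1} = B^n - dt * curl(E), interior nodes only: the outermost layer
// needs E from the neighbouring subdomain, hence a boundary exchange.
void advanceB(const Grid& g, double dt,
              const std::vector<double>& Ex, const std::vector<double>& Ey,
              const std::vector<double>& Ez,
              std::vector<double>& Bx, std::vector<double>& By,
              std::vector<double>& Bz)
{
  // exchangeHalo(Ex, Ey, Ez);   // <-- the communication that sending B avoids
  for (int i = 1; i < g.nx - 1; ++i)
    for (int j = 1; j < g.ny - 1; ++j)
      for (int k = 1; k < g.nz - 1; ++k) {
        const int c = idx(g, i, j, k);
        const double dEz_dy = (Ez[idx(g,i,j+1,k)] - Ez[idx(g,i,j-1,k)]) / (2*g.dy);
        const double dEy_dz = (Ey[idx(g,i,j,k+1)] - Ey[idx(g,i,j,k-1)]) / (2*g.dz);
        const double dEx_dz = (Ex[idx(g,i,j,k+1)] - Ex[idx(g,i,j,k-1)]) / (2*g.dz);
        const double dEz_dx = (Ez[idx(g,i+1,j,k)] - Ez[idx(g,i-1,j,k)]) / (2*g.dx);
        const double dEy_dx = (Ey[idx(g,i+1,j,k)] - Ey[idx(g,i-1,j,k)]) / (2*g.dx);
        const double dEx_dy = (Ex[idx(g,i,j+1,k)] - Ex[idx(g,i,j-1,k)]) / (2*g.dy);
        Bx[c] -= dt * (dEz_dy - dEy_dz);
        By[c] -= dt * (dEx_dz - dEz_dx);
        Bz[c] -= dt * (dEy_dx - dEx_dy);
      }
}
```

The same kind of halo exchange is needed before each of the smoothing passes, which is why keeping these operations on the Cluster, next to the field solve, is attractive.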

Communicating moment boundary data on the Cluster side.
After accumulating moments on the Booster, hatted moments need to be computed on the Booster and communicated to the field solver running on the Cluster. Computing the hatted current involves evaluating the divergence of pressure tensor quantities, which requires communicating boundary data. This boundary communication can, however, be delayed and reduced until the hatted moments have reached the Cluster, if one is willing to accept a loss of numerical precision from summing nearly cancelling terms that involve the divergence of the pressure tensor in boundary cells or nodes. (To avoid this cancellation error one must communicate the boundary node data of each pressure tensor component of each species.) With the delayed-communication approach only five moments need boundary communication, but it would be necessary to communicate and sum over a widened halo three layers thick, including not just the boundary nodes but also the ghost nodes and the first layer of interior nodes. Communicating node values of the 6+6=12 pressure tensor components (6 per species) avoids the loss of numerical precision and reduces the number of node layers that must be communicated at the end from three to the single layer of shared nodes (see the sketch below). Communicating the node values of the other moments, or of the divergence of the pressure tensor, does not need to be done before the hatted moments are summed and sent to the Cluster; deferring it reduces the number of communications needed by 6 and moves that work to the Cluster, where it can be done faster.
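Here is a sketch of the per-component boundary sum described above. The function name, the face-plane packing, and the single-neighbour exchange are hypothetical simplifications of iPic3D's node communication; each of the 12 pressure tensor node arrays would need one such exchange per neighbouring subdomain so that the shared-node values are complete before the divergence is taken.

```c++
// Sketch only: summing shared-node contributions of one pressure tensor component
// with one neighbouring subdomain.
#include <mpi.h>
#include <vector>

void sumSharedNodes(std::vector<double>& sharedPlane,  // values on the shared face
                    int neighborRank, MPI_Comm comm)
{
  std::vector<double> recv(sharedPlane.size());
  // Exchange the partial sums deposited by each side's particles on the shared nodes.
  MPI_Sendrecv(sharedPlane.data(), static_cast<int>(sharedPlane.size()), MPI_DOUBLE,
               neighborRank, 0,
               recv.data(), static_cast<int>(recv.size()), MPI_DOUBLE,
               neighborRank, 0, comm, MPI_STATUS_IGNORE);
  for (std::size_t n = 0; n < sharedPlane.size(); ++n)
    sharedPlane[n] += recv[n];   // both sides now hold the complete nodal sum
}
```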
