# About

Phase is an unstructured finite-volume immersed boundary framework developed in C++. The purpose of the code is to provide a lightweight and simple platform for testing and developing new finite-volume methods. Much of the development effort has been directed toward immersed boundary methods that work in conjunction with modern multiphase methods.

# Objective

One of the primary goals of Phase is to allow the programmer to work as closely as possible to the equations they are solving. The most important class interfaces in Phase are:

- Solver, used for managing sets of related equations;
- FiniteVolumeEquation, used for defining, managing and operating on field equations;
- FiniteVolumeField, used for managing physical quantities in 2D space;
- FiniteVolumeGrid, used for defining a discretized space.
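
To make the relationship between these interfaces concrete, here is a minimal sketch of how a grid, a few fields and an equation might be set up. Only the class names come from the description above; the constructors, template parameters and method calls (for example the grid constructor and the Scalar and Vector2D field types) are assumptions made for illustration and may differ from Phase's actual API.

```cpp
// Hypothetical sketch only: the class names come from the description above, but the
// constructors, template parameters and method calls are assumptions for illustration
// and may not match Phase's actual interfaces.

#include <memory>

void setupSketch()
{
    // FiniteVolumeGrid defines the discretized 2D space
    auto grid = std::make_shared<FiniteVolumeGrid>();

    // FiniteVolumeField objects represent physical quantities on that space
    FiniteVolumeField<Scalar> rho(grid, "rho"), mu(grid, "mu"), p(grid, "p");
    FiniteVolumeField<Vector2D> u(grid, "u");

    // A FiniteVolumeEquation ties discretization operators to a particular field;
    // the momentum equation uEqn is assembled and solved as shown in the example
    // that follows this sketch.
    FiniteVolumeEquation<Vector2D> uEqn(u);

    // A Solver would own the grid, the fields and a set of related equations such
    // as uEqn, and advance them together in time.
}
```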

The use of modern C++ interfaces allows equations in Phase to be defined almost as they are written on paper. For example, to solve the Navier-Stokes momentum equation using a Crank-Nicolson scheme, simply write

```cpp
uEqn = (fv::ddt(rho, u) + cn::div(rho, u, u) == cn::laplacian(mu, u) - source::grad(p));
uEqn.solve();
```

where rho, u, mu and p are FiniteVolumeField objects. The use of C++ templating allows different field data types to be used with Phase FiniteVolumeEquation objects. Many default finite-volume discretization operators already exist in Phase; however, adding custom FiniteVolumeEquation operators is straightforward. If one wishes to define the pressure correction equation, simply write

```cpp
pCorrEqn = (fv::laplacian(1/d, pCorr) == source::div(u));
pCorrEqn.solve();
```
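
As an illustration of what a custom operator might look like, the sketch below assembles a simple linearized source term into a FiniteVolumeEquation. Everything beyond the FiniteVolumeEquation and FiniteVolumeField class names (in particular the cell iteration and coefficient-assembly calls, and the Cell and Scalar types) is assumed for illustration, so it should be read as a pattern rather than Phase's actual internal API.

```cpp
// Hypothetical sketch of a custom FiniteVolumeEquation operator. The cell iteration
// and coefficient-assembly interfaces used here are assumptions for illustration
// and may differ from Phase's actual API.

template<class T>
FiniteVolumeEquation<T> linearSource(Scalar k, const FiniteVolumeField<T> &phi)
{
    FiniteVolumeEquation<T> eqn(phi); // equation associated with the field phi (assumed constructor)

    // Add an implicit contribution k * V_cell * phi to each cell's diagonal coefficient
    for (const Cell &cell : phi.grid().cells())  // assumed cell range
        eqn.add(cell, cell, k * cell.volume());  // assumed coefficient-assembly call

    return eqn;
}

// A custom operator composed with the built-in ones might then be used as, e.g.:
//   cEqn = (fv::ddt(rho, c) + cn::div(rho, u, c) == cn::laplacian(gamma, c) - linearSource(k, c));
```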

Other, more specialized FiniteVolumeEquation operators are provided for niche discretizations, such as those required for multiphase flows and immersed boundary methods.

In addition to being as high level as possible in terms of syntax, the code should also be efficient. Parallelism is accomplished through the Message Passing Interface (MPI), and support is provided for a number of popular sparse matrix libraries, including Eigen3 (serial only), HYPRE, PETSc and Trilinos. Scaling has been tested on up to 64 processes to date, with an average scaling efficiency of 85%. As much as possible, parallelism is abstracted away from the programmer and user. No special input or preliminary domain decomposition is required; simply run Phase with the desired number of processes and everything is taken care of!
