two unit tests fail on Ubuntu 16.04 #10
Comments
Hi Paul, it looks like you don't have a working parallel LU solver. What does list_lu_solver_methods() tell you? superlu_dist should work in parallel. Thanks for the typo, I'll fix it.
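For reference, in legacy DOLFIN (2016.x) the available direct solver backends can be printed from Python roughly as below; this is a minimal sketch, and the actual output depends on how the underlying PETSc was configured.

```python
from dolfin import list_lu_solver_methods

# Prints a table of the LU solver methods this DOLFIN build knows about
# (e.g. "default", "umfpack", "superlu_dist", "mumps", ...), which is
# determined by the PETSc installation behind it.
list_lu_solver_methods()
```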
Hi Mikael, Thank you. I have checked the LU solver configuration. list_lu_solver_methods() gives:
Looking at the dolfin code la/PETScLUSolver.cpp, it appears that superlu_dist would be the chosen solver in a 2016.2 HashDist installation (with no MUMPS):
Could the divergence seen in the unit tests be related to the version of superlu_dist? The version installed by HashDist is 5.1.0. Another test I made was to set the "mesh_partitioner" parameter to "ParMETIS"; however, this did not change the behaviour (divergence after the first iteration). Paul.
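For context, the partitioner switch mentioned above can be made through DOLFIN's global parameters; a minimal sketch, assuming ParMETIS support was compiled into the installation:

```python
from dolfin import parameters

# Ask DOLFIN to partition distributed meshes with ParMETIS instead of the
# default (usually SCOTCH). This only has an effect when running under MPI
# and when ParMETIS support is available in the build.
parameters["mesh_partitioner"] = "ParMETIS"
```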
Ok,
Hi,
I have just updated FEniCS and Oasis to 2016.2 and re-run the unit tests (cd tests; py.test). I have noticed that two of the tests that previously passed now fail.
The two failing tests are the ones that are skipped for Darwin installations: test_default_mpi_Coupled() and test_naive_mpi_Coupled(). FEniCS has been installed with HashDist and Oasis updated with git.
One difference between FEniCS 2016.1 and FEniCS 2016.2 that might be relevant is the set of linear solvers that is made available. In FEniCS 2016.1, installed with HashDist, MUMPS is included. In FEniCS 2016.2 it is not. Do the Oasis tests need MUMPS on Linux as well?
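One quick way to see whether MUMPS (or SuperLU_DIST) is actually available in a given installation is DOLFIN's has_lu_solver_method() helper; a minimal sketch:

```python
from dolfin import has_lu_solver_method

# True only if the PETSc behind this DOLFIN build was configured with the
# corresponding direct solver package.
print("mumps available:       ", has_lu_solver_method("mumps"))
print("superlu_dist available:", has_lu_solver_method("superlu_dist"))
```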
I have tried the following command to test:
$ mpirun -np 2 python NSCoupled.py problem=Cylinder testing=True
The process responds with:
Start Newton iterations flow
Iter 1, Error = 0.000415191268604
and then hangs, with both CPUs fully occupied.
In the solvers/NSCoupled/__init__.py file, where the solver is selected, I have changed line 46 to specify LUSolver('superlu_dist'). However, this appears to have no effect. Maybe superlu_dist is selected automatically by PETSc?
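For context, a DOLFIN LUSolver can be asked for a specific backend by name; a minimal sketch of the kind of change tried above (the variable names are illustrative, not the actual Oasis code):

```python
from dolfin import LUSolver

# Request SuperLU_DIST explicitly instead of letting PETSc pick its
# default direct solver; this fails if the backend is not compiled in.
solver = LUSolver("superlu_dist")

# Typical use, with A and b an assembled matrix and vector and u a Function
# (illustrative names only):
# solver.set_operator(A)
# solver.solve(u.vector(), b)
```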
Also, along the way I noticed a typo in problems/Cylinder.py. In line 11 'Linus' should be 'Linux'.
Best,
Paul.