Problems enabling MPI : (Abort trap: 6) #45
Hi @tfish13, thank you for reporting the issue. It seems to me that the problem occurs at the moment MultiNest exits. Perhaps MPI is terminated twice?
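If MPI really is finalized twice (once by MultiNest, once by the Python MPI layer), one way to test this hypothesis is to let mpi4py own the MPI lifecycle and tell MultiNest not to initialize it via the init_MPI argument of pymultinest.run. A minimal sketch with a made-up two-parameter Gaussian problem (the prior and likelihood are toy placeholders):

from mpi4py import MPI  # initializes MPI; mpi4py finalizes it once at exit
import pymultinest

def prior(cube, ndim, nparams):
    # map the unit hypercube to the toy parameter range [0, 10]
    for i in range(ndim):
        cube[i] = cube[i] * 10.0

def loglike(cube, ndim, nparams):
    # toy Gaussian likelihood centred at 5 in every dimension
    return -0.5 * sum((cube[i] - 5.0) ** 2 for i in range(ndim))

# init_MPI=False asks MultiNest not to call MPI_Init itself, so only
# mpi4py initializes/finalizes MPI; the chains/ directory must exist
pymultinest.run(loglike, prior, 2, outputfiles_basename='chains/toy_',
                resume=False, verbose=True, init_MPI=False)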
Dear tfish, […]

Dear Sir, […]
Please reopen if the problems still exist. You need to install mpi4py to use MPI functionality.
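A quick way to confirm that mpi4py is installed and talking to the same MPI that mpirun uses (the two-rank invocation is just an example):

pip install mpi4py
# each process should print its own rank; if both print rank 0,
# python is linked against a different MPI than the mpirun you invoked
mpirun -np 2 python -c "from mpi4py import MPI; c = MPI.COMM_WORLD; print('rank', c.Get_rank(), 'of', c.Get_size())"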
Dear @JohannesBuchner, I was previously able to run multinest on my Mac for many months. Then I stepped away from the project and came back to re-install it on three new systems. Now, openmpi […] Could we please re-open this issue? Or do you know of a workaround for this? Thank you, Jonathan
Update: I previously installed pymultinest from […]. All small tests ran successfully, and I am currently running a long integration that has not yet crashed. It may be an issue between the version on PyPI and this GitHub repo. I hope that helps anyone else who may be having these problems.
Did you use version 2.6? I don't recall changing anything recently w.r.t. the multinest calls, but maybe it doesn't like wheel packages.
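If a stale wheel is the suspect, forcing a build from source (or installing the current GitHub state directly) should rule it out; these are standard pip invocations, nothing PyMultiNest-specific:

pip uninstall pymultinest
pip install --no-binary :all: pymultinest   # build from the sdist instead of using a wheel
# or install straight from this repository:
pip install git+https://github.com/JohannesBuchner/PyMultiNest.git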
I installed: (1) openmpi version 2.0.4 (manually, via the command line); […]. That seemed to work the best. If I installed PyMultiNest via […]
I have the same issue after installing openmpi, MultiNest and PyMultiNest as per the instructions at http://astrobetter.com/wiki/MultiNest+Installation+Notes.
I was able to get everything to work together, but I had to mix two streams of instruction sets. Here is how my versioning was successful.

Installing MPI with Fortran support:

./configure --prefix=$HOME/openmpi LD_LIBRARY_PATH=$DYLD_FALLBACK_LIBRARY_PATH 2>&1 | tee config.out
make -j 4 2>&1 | tee make.out
make install 2>&1 | tee install.out
export PATH=$HOME/openmpi/bin:$PATH
echo "export PATH=$HOME/openmpi/bin:$PATH" >> ~/.profile
## OR, if you use 'bash_profile'
echo "export PATH=$HOME/openmpi/bin:$PATH" >> ~/.bash_profile

If you get errors telling you it was "unable to run and compile a simple Fortran program", check in the configure output that it is using the Fortran compiler you expect and that you have your associated […]. Check that you successfully built the OpenMPI Fortran compiler with:

mpif90 -v

If there is no error at the end of the output, you should be fine; a minimal compile test is sketched below. I think that "brew install openmpi" and "port install openmpi" are having trouble with MultiNest, but (and I am guessing) a manual (local) install of openmpi may communicate better with MultiNest. Everything else I followed from the AstroBetter website: http://astrobetter.com/wiki/MultiNest+Installation+Notes. The commands after the sketch are copied directly from there.
git clone https://github.com/JohannesBuchner/MultiNest.git
cd MultiNest/build/
cmake ..
make
sudo make install

git clone https://github.com/JohannesBuchner/PyMultiNest.git
cd PyMultiNest
python setup.py install
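Before wiring this into a larger run, it may be worth verifying the whole chain. The library path below assumes MultiNest was cloned into $HOME and built in place; adjust it to your layout, and use DYLD_LIBRARY_PATH instead on macOS:

export LD_LIBRARY_PATH=$HOME/MultiNest/lib:$LD_LIBRARY_PATH
python -c "import pymultinest"   # should exit silently if libmultinest is found
mpirun -np 2 python pymultinest_demo_minimal.py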
I would be glad if someone could help improve the documentation for macOS. Since I don't own one, it is difficult for me to debug issues. Improving the documentation here would be great. I suspect there are several configurations and OS versions that need to be considered?
Indeed, my versions are: Python 3.6.4, […]
Moreover, I did try to install with […]
I am running a PE (parameter estimation) job using pymultinest on a Slurm cluster. For some reason the MPI job is getting killed at the end, after doing all the calculations! This is the error I am getting:

===================================================================================
[…]
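Since the sampling itself apparently completes, one defensive pattern is to barrier all ranks and run any post-processing on rank 0 only, so a shutdown-time abort cannot interleave with the analysis. A self-contained sketch reusing the toy two-parameter Gaussian from earlier in the thread (the basename and n_params are placeholders; Analyzer and get_best_fit are part of pymultinest):

from mpi4py import MPI
import pymultinest

def prior(cube, ndim, nparams):
    for i in range(ndim):
        cube[i] = cube[i] * 10.0  # toy range [0, 10]

def loglike(cube, ndim, nparams):
    return -0.5 * sum((cube[i] - 5.0) ** 2 for i in range(ndim))

# the chains/ directory must exist before the run
pymultinest.run(loglike, prior, 2, outputfiles_basename='chains/toy_',
                resume=False, verbose=False, init_MPI=False)

MPI.COMM_WORLD.Barrier()              # wait until every rank is done writing
if MPI.COMM_WORLD.Get_rank() == 0:
    # only one rank reads the output files back
    a = pymultinest.Analyzer(n_params=2, outputfiles_basename='chains/toy_')
    print(a.get_best_fit())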
Hello!
PyMultinest is running fine on my Mac laptop. However, when I try enabling MPI, I'm getting some problems. Running the pymultinest_demo_minimal.py script results in the following:
bash-3.2$ mpirun -np 2 python pymultinest_demo_minimal.py
...
Acceptance Rate: 0.722698
Replacements: 1350
Total Samples: 1868
Nested Sampling ln(Z): 148.573654
Importance Nested Sampling ln(Z): 235.196277 +/- 0.411084
python(44441,0x7fff73f6a310) malloc: *** error for object 0x10104ea08: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
Acceptance Rate: 0.716113
Replacements: 1400
Total Samples: 1955
Nested Sampling ln(Z): 155.709924
Importance Nested Sampling ln(Z): 235.109719 +/- 0.393148
...
Acceptance Rate: 0.675898
Replacements: 2050
Total Samples: 3033
Nested Sampling ln(Z): 214.719858
Importance Nested Sampling ln(Z): 235.306598 +/- 0.175543
mpirun noticed that process rank 0 with PID 44441 on node my-computer exited on signal 6 (Abort trap: 6).
I've checked previous MPI issues here to see if others have experienced this, but found nothing similar. Additionally, the original MultiNest examples (eggboxC, etc.) run successfully under mpirun.
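To localize aborts like this on macOS, the system malloc debugging variables can help; OpenMPI's -x flag exports an environment variable to all ranks (both are standard tools, not PyMultiNest-specific, and the lldb variant is only a sketch of one way to capture a backtrace):

# fill freed memory with a known pattern so a use-after-free crashes
# closer to the offending write
mpirun -np 2 -x MallocScribble=1 python pymultinest_demo_minimal.py
# or run the ranks under lldb to get a backtrace at the abort
mpirun -np 2 lldb --batch -o run -o bt -- python pymultinest_demo_minimal.py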