GROMACS

Overview

This AMD container is based on the 2022 release of GROMACS as modified by AMD. The container supports configurations of up to 8 GPUs.

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers. For more information about GROMACS, visit gromacs.org.

For more information on the ROCm™ open software platform and access to an active community discussion on installing, configuring, and using ROCm, please visit the ROCm web pages at www.AMD.com/ROCm and the ROCm Community Forum.
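All of the benchmark commands below are intended to be run from a shell inside the container. A minimal launch sketch is shown here, assuming a Docker host with the ROCm device nodes passed through to the container; the image tag amdih/gromacs:2022.3.amd1 is illustrative only and should be replaced with the image you actually pulled.

docker run -it \
    --device=/dev/kfd \
    --device=/dev/dri \
    --security-opt seccomp=unconfined \
    amdih/gromacs:2022.3.amd1 \
    bash    # illustrative tag; substitute the image you pulled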

Notes:

  • This recipe is based on a fork of the GROMACS project written for AMD GPUs - it is not an official release by the GROMACS team
  • The source of the GROMACS fork is publicly available here: https://github.com/ROCmSoftwarePlatform/Gromacs
  • This code base is not maintained or supported by the GROMACS team
  • This code base is not developed by the GROMACS team

Single-Node Server Requirements

System Requirements

Build Recipes

Running GROMACS Benchmarks

Three example benchmarks are provided in this repository:

  • ADH DODEC
  • CELLULOSE NVE
  • STMV

Performance Tuning for Threaded MPI
Optimal performance for each benchmark and GPU/GCD configuration can be found by tuning the following options (a sweep sketch follows this list):

  • MPI ranks/threads (-ntmpi)
  • OpenMP threads (-ntomp)
  • GPUs (-gpu_id)
  • Neighbor list update frequency (-nstlist)
  • More performance options can be found in the GROMACS documentation: "Getting good performance from mdrun"
  • Offloading bonded interactions to the GPU (-bonded gpu) does not always give the best performance.
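
The sketch below shows one way to sweep the thread-MPI rank / OpenMP thread split for a single benchmark case. It is illustrative only: it assumes a 64-core host, a single GCD, and an extracted topol.tpr in the current directory; the (ntmpi, ntomp) pairs and log file names are chosen for illustration.

# Illustrative tuning sweep over thread-MPI rank / OpenMP thread splits
# on one GCD of a 64-core host; adjust the pairs for your hardware.
for pair in "1 64" "2 32" "4 16" "8 8"; do
    read -r ntmpi ntomp <<< "${pair}"

    # PME on a GPU with more than one rank needs a dedicated PME rank;
    # left intentionally unquoted so it expands to two words (or none).
    npme_opt=""
    if [ "${ntmpi}" -gt 1 ]; then
        npme_opt="-npme 1"
    fi

    gmx mdrun -pin on \
        -nsteps 100000 -resetstep 90000 \
        -ntmpi "${ntmpi}" -ntomp "${ntomp}" ${npme_opt} \
        -noconfout -nb gpu -bonded cpu -pme gpu \
        -nstlist 100 \
        -gpu_id 0 \
        -s topol.tpr \
        -g "mdrun_ntmpi${ntmpi}_ntomp${ntomp}.log"    # separate log per run
done

Compare the ns/day reported at the end of each log to choose the best split; -bonded gpu and alternative -nstlist values can be swept in the same way.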

Examples With Threaded MPI


ADH DODEC Benchmark Instructions

ADH DODEC

Extract the binary file containing the system topology, parameters, coordinates, and velocities.

cd .benchmarks/adh_dodec
tar -xvf adh_dodec.tar.gz
1 GPU/GCD
gmx mdrun -pin on \
            -nsteps 100000 \
            -resetstep 90000 \
            -ntmpi 1 \
            -ntomp 64 \
            -noconfout \
            -nb gpu \
            -bonded cpu \
            -pme gpu \
            -v \
            -nstlist 100 \
            -gpu_id 0 \
            -s topol.tpr
2 GPUs/GCDs
gmx mdrun -pin on \
            -nsteps 100000 \
            -resetstep 90000 \
            -ntmpi 2 \
            -ntomp 32 \
            -noconfout \
            -nb gpu \
            -bonded gpu \
            -pme gpu \
            -npme 1 \
            -v \
            -nstlist 200 \
            -gpu_id 01 \
            -s topol.tpr
4 GPUs/GCDs
gmx mdrun -pin on \
            -nsteps 100000 \
            -resetstep 90000 \
            -ntmpi 4 \
            -ntomp 16 \
            -noconfout \
            -nb gpu \
            -bonded gpu \
            -pme gpu \
            -npme 1 \
            -v \
            -nstlist 200 \
            -gpu_id 0123 \
            -s topol.tpr
8 GPUs/GCDs
gmx mdrun -pin on \
            -nsteps 100000 \
            -resetstep 90000 \
            -ntmpi 8 \
            -ntomp 8 \
            -noconfout \
            -nb gpu \
            -bonded gpu \
            -pme gpu \
            -npme 1 \
            -v \
            -nstlist 150 \
            -gpu_id 01234567 \
            -s topol.tpr

CELLULOSE NVE Benchmark Instructions

CELLULOSE NVE

Extract the binary file containing the system topology, parameters, coordinates, and velocities.

cd .benchmarks/cellulose_nve
tar -xvf cellulose_nve.tar.gz
1 GPU/GCD
gmx mdrun -pin on \
            -nsteps 100000 \
            -resetstep 90000 \
            -ntmpi 1 \
            -ntomp 64 \
            -noconfout \
            -nb gpu \
            -bonded cpu \
            -pme gpu \
            -v \
            -nstlist 100 \
            -gpu_id 0 \
            -s topol.tpr
2 GPUs/GCDs
gmx mdrun -pin on \
            -nsteps 100000 \
            -resetstep 90000 \
            -ntmpi 4 \
            -ntomp 16 \
            -noconfout \
            -nb gpu \
            -bonded gpu \
            -pme gpu \
            -npme 1 \
            -v \
            -nstlist 200 \
            -gpu_id 01 \
            -s topol.tpr
4 GPUs/GCDs
gmx mdrun -pin on \
            -nsteps 100000 \
            -resetstep 90000 \
            -ntmpi 4 \
            -ntomp 16 \
            -noconfout \
            -nb gpu \
            -bonded gpu \
            -pme gpu \
            -npme 1 \
            -v \
            -nstlist 200 \
            -gpu_id 0123 \
            -s topol.tpr
8 GPUs/GCDs
gmx mdrun -pin on \
            -nsteps 100000 \
            -resetstep 90000 \
            -ntmpi 8 \
            -ntomp 8 \
            -noconfout \
            -nb gpu \
            -bonded gpu \
            -pme gpu \
            -npme 1 \
            -v \
            -nstlist 200 \
            -gpu_id 01234567 \
            -s topol.tpr

STMV Benchmark Instructions

STMV

Extract the binary file containing the system topology, parameters, coordinates, and velocities.

cd .benchmarks/stmv
tar -xvf stmv.tar.gz
1 GPU/GCD
gmx mdrun -pin on \
            -nsteps 100000 \
            -resetstep 90000 \
            -ntmpi 1 \
            -ntomp 64 \
            -noconfout \
            -nb gpu \
            -bonded cpu \
            -pme gpu \
            -v \
            -nstlist 200 \
            -gpu_id 0 \
            -s topol.tpr
2 GPUs/GCDs
gmx mdrun -pin on \
            -nsteps 100000 \
            -resetstep 90000 \
            -ntmpi 8 \
            -ntomp 8 \
            -noconfout \
            -nb gpu \
            -bonded gpu \
            -pme gpu \
            -npme 1 \
            -v \
            -nstlist 200 \
            -gpu_id 01 \
            -s topol.tpr
4 GPUs/GCDs
gmx mdrun -pin on \
            -nsteps 100000 \
            -resetstep 90000 \
            -ntmpi 8 \
            -ntomp 8 \
            -noconfout \
            -nb gpu \
            -bonded gpu \
            -pme gpu \
            -npme 1 \
            -v \
            -nstlist 400 \
            -gpu_id 0123 \
            -s topol.tpr
8 GPUs/GCDs
gmx mdrun -pin on \
            -nsteps 100000 \
            -resetstep 90000 \
            -ntmpi 8 \
            -ntomp 8 \
            -noconfout \
            -nb gpu \
            -bonded gpu \
            -pme gpu \
            -npme 1 \
            -v \
            -nstlist 400 \
            -gpu_id 01234567 \
            -s topol.tpr

Examples With OpenMPI
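
With the MPI-enabled binary gmx_mpi, the number of ranks is set by mpirun -np rather than by mdrun's -ntmpi; the remaining options match the threaded-MPI examples. The wrapper below is an illustrative sketch, not part of the container's recipe: the NGPU value, the one-rank-per-GCD assumption, the 64-core host, and the derived -ntomp are all assumptions to adapt to your system.

# Illustrative wrapper: run the ADH DODEC case on NGPU GCDs with the MPI build.
# Assumes one rank per GCD and 64 CPU cores in total (NGPU of 2, 4, or 8);
# for a single GCD, follow the 1 GPU/GCD recipe below instead.
NGPU=4
NTOMP=$((64 / NGPU))

# Build the -gpu_id string, e.g. "0123" for NGPU=4.
GPU_IDS=""
for i in $(seq 0 $((NGPU - 1))); do
    GPU_IDS="${GPU_IDS}${i}"
done

mpirun -np "${NGPU}" \
    gmx_mpi mdrun -pin on \
        -nsteps 100000 -resetstep 90000 \
        -ntomp "${NTOMP}" \
        -noconfout -nb gpu -bonded gpu -pme gpu -npme 1 \
        -v -nstlist 200 \
        -gpu_id "${GPU_IDS}" \
        -s topol.tpr    # see the recipes below for per-GPU-count -nstlist values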


ADH DODEC OpenMPI Benchmark Instructions

ADH DODEC OpenMPI

Extract the binary file containing the system topology, parameters, coordinates, and velocities.

cd .benchmarks/adh_dodec
tar -xvf adh_dodec.tar.gz
1 GPU/GCD
mpirun -np 1 \
	gmx_mpi mdrun -pin on \
		-nsteps 100000 \
		-resetstep 90000 \
		-ntomp 64 \
		-noconfout \
		-nb gpu \
		-bonded cpu \
		-pme gpu \
		-v \
		-nstlist 100 \
		-gpu_id 0 \
		-s topol.tpr
2 GPUs/GCDs
mpirun -np 2 \
	gmx_mpi mdrun -pin on \
		-nsteps 100000 \
		-resetstep 90000 \
		-ntomp 32 \
		-noconfout \
		-nb gpu \
		-bonded gpu \
		-pme gpu \
		-npme 1 \
		-v \
		-nstlist 200 \
		-gpu_id 01 \
		-s topol.tpr
4 GPUs/GCDs
mpirun -np 4 \
	gmx_mpi mdrun -pin on \
		-nsteps 100000 \
		-resetstep 90000 \
		-ntomp 16 \
		-noconfout \
		-nb gpu \
		-bonded gpu \
		-pme gpu \
		-npme 1 \
		-v \
		-nstlist 200 \
		-gpu_id 0123 \
		-s topol.tpr
8 GPUs/GCDs
mpirun -np 8 \
	gmx_mpi mdrun -pin on \
		-nsteps 100000 \
		-resetstep 90000 \
		-ntomp 8 \
		-noconfout \
		-nb gpu \
		-bonded gpu \
		-pme gpu \
		-npme 1 \
		-v \
		-nstlist 150 \
		-gpu_id 01234567 \
		-s topol.tpr

CELLULOSE NVE OpenMPI Benchmark Instructions

CELLULOSE NVE OpenMPI

Extract the binary file containing the system topology, parameters, coordinates, and velocities.

cd .benchmarks/cellulose_nve
tar -xvf cellulose_nve.tar.gz
1 GPU/GCD
mpirun -np 1 \
	gmx_mpi mdrun -pin on \
		-nsteps 100000 \
		-resetstep 90000 \
		-ntomp 64 \
		-noconfout \
		-nb gpu \
		-bonded cpu \
		-pme gpu \
		-v \
		-nstlist 100 \
		-gpu_id 0 \
		-s topol.tpr
2 GPUs/GCDs
mpirun -np 2 \
	gmx_mpi mdrun -pin on \
		-nsteps 100000 \
		-resetstep 90000 \
		-ntomp 16 \
		-noconfout \
		-nb gpu \
		-bonded gpu \
		-pme gpu \
		-npme 1 \
		-v \
		-nstlist 200 \
		-gpu_id 01 \
		-s topol.tpr
4 GPUs/GCDs
mpirun -np 4 \
	gmx_mpi mdrun -pin on \
		-nsteps 100000 \
		-resetstep 90000 \
		-ntomp 16 \
		-noconfout \
		-nb gpu \
		-bonded gpu \
		-pme gpu \
		-npme 1 \
		-v \
		-nstlist 200 \
		-gpu_id 0123 \
		-s topol.tpr
8 GPUs/GCDs
mpirun -np 8 \
	gmx_mpi mdrun -pin on \
		-nsteps 100000 \
		-resetstep 90000 \
		-ntomp 8 \
		-noconfout \
		-nb gpu \
		-bonded gpu \
		-pme gpu \
		-npme 1 \
		-v \
		-nstlist 200 \
		-gpu_id 01234567 \
		-s topol.tpr

STMV OpenMPI Benchmark Instructions

STMV OpenMPI

Extract the binary file containing the system topology, parameters, coordinates, and velocities.

cd .benchmarks/stmv
tar -xvf stmv.tar.gz
1 GPU/GCD
mpirun -np 1 \
	gmx_mpi mdrun -pin on \
		-nsteps 100000 \
		-resetstep 90000 \
		-ntomp 64 \
		-noconfout \
		-nb gpu \
		-bonded cpu \
		-pme gpu \
		-v \
		-nstlist 200 \
		-gpu_id 0 \
		-s topol.tpr
2 GPUs/GCDs
mpirun -np 2 \
	gmx_mpi mdrun -pin on \
		-nsteps 100000 \
		-resetstep 90000 \
		-ntomp 8 \
		-noconfout \
		-nb gpu \
		-bonded gpu \
		-pme gpu \
		-npme 1 \
		-v \
		-nstlist 200 \
		-gpu_id 01 \
		-s topol.tpr
4 GPUs/GCDs
mpirun -np 4 \
	gmx_mpi mdrun -pin on \
		-nsteps 100000 \
		-resetstep 90000 \
		-ntomp 8 \
		-noconfout \
		-nb gpu \
		-bonded gpu \
		-pme gpu \
		-npme 1 \
		-v \
		-nstlist 400 \
		-gpu_id 0123 \
		-s topol.tpr
8 GPUs/GCDs
mpirun -np 8 \
	gmx_mpi mdrun -pin on \
		-nsteps 100000 \
		-resetstep 90000 \
		-ntomp 8 \
		-noconfout \
		-nb gpu \
		-bonded gpu \
		-pme gpu \
		-npme 1 \
		-v \
		-nstlist 400 \
		-gpu_id 01234567 \
		-s topol.tpr
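
For any of the runs above, the benchmark figure of merit (ns/day) is reported in the Performance table at the end of the mdrun log. A quick way to read it, assuming the default log name md.log in the working directory (use the name passed to -g if you changed it):

# Print the (ns/day)/(hour/ns) header and the Performance line from the log.
grep -B 1 "^Performance:" md.log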

Licensing Information

Your access and use of this application is subject to the terms of the applicable component-level license identified below. To the extent any subcomponent in this container requires an offer for corresponding source code, AMD hereby makes such an offer for corresponding source code form, which will be made available upon request. By accessing and using this application, you are agreeing to fully comply with the terms of this license. If you do not agree to the terms of this license, do not access or use this application.

The application is provided in a container image format that includes the following separate and independent components:

Package | License                                          | URL
------- | ------------------------------------------------ | ----------------------------------------------
Ubuntu  | Creative Commons CC-BY-SA Version 3.0 UK License | Ubuntu Legal
CMAKE   | OSI-approved BSD-3 clause                        | CMake License
OpenMPI | BSD 3-Clause                                     | OpenMPI License, OpenMPI Dependencies Licenses
OpenUCX | BSD 3-Clause                                     | OpenUCX License
ROCm    | Custom/MIT/Apache V2.0/UIUC OSL                  | ROCm Licensing Terms
Gromacs | LGPL 2.1                                         | Gromacs, Gromacs License

Additional third-party content in this container may be subject to additional licenses and restrictions. The components are licensed to you directly by the party that owns the content pursuant to the license terms included with such content and is not licensed to you by AMD. ALL THIRD-PARTY CONTENT IS MADE AVAILABLE BY AMD “AS IS” WITHOUT A WARRANTY OF ANY KIND. USE OF THE CONTAINER IS DONE AT YOUR SOLE DISCRETION AND UNDER NO CIRCUMSTANCES WILL AMD BE LIABLE TO YOU FOR ANY THIRD-PARTY CONTENT. YOU ASSUME ALL RISK AND ARE SOLELY RESPONSIBLE FOR ANY DAMAGES THAT MAY ARISE FROM YOUR USE OF THE CONTAINER.

The GROMACS source code and selected set of binary packages are available here: www.gromacs.org. GROMACS is Free Software, available under the GNU Lesser General Public License (LGPL), version 2.1. You can redistribute it and/or modify it under the terms of the LGPL as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.

Disclaimer

The information contained herein is for informational purposes only, and is subject to change without notice. While every precaution has been taken in the preparation of this document, it may contain technical inaccuracies, omissions and typographical errors, and AMD is under no obligation to update or otherwise correct this information. Advanced Micro Devices, Inc. makes no representations or warranties with respect to the accuracy or completeness of the contents of this document, and assumes no liability of any kind, including the implied warranties of noninfringement, merchantability or fitness for particular purposes, with respect to the operation or use of AMD hardware, software or other products described herein. No license, including implied or arising by estoppel, to any intellectual property rights is granted by this document. Terms and limitations applicable to the purchase or use of AMD’s products are as set forth in a signed agreement between the parties or in AMD's Standard Terms and Conditions of Sale. AMD, the AMD Arrow logo and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.

Notices and Attribution

© 2021-2024 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, Instinct, Radeon Instinct, ROCm, and combinations thereof are trademarks of Advanced Micro Devices, Inc.

Docker and the Docker logo are trademarks or registered trademarks of Docker, Inc. in the United States and/or other countries. Docker, Inc. and other parties may also have trademark rights in other terms used herein. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.

All other trademarks and copyrights are property of their respective owners and are only mentioned for informative purposes.