Commit 630165c: rename gpu options, update docs

ajaypanyala committed May 3, 2024 (parent: e65360c)

Showing 9 changed files with 93 additions and 72 deletions.
18 changes: 9 additions & 9 deletions .github/workflows/c-cpp.yaml
@@ -18,7 +18,7 @@ jobs:
strategy:
matrix:
os:
- - [self-hosted, ubuntu18]
+ - [self-hosted, ubuntu22]
- [self-hosted, macos]
backend:
- ga
@@ -36,31 +36,31 @@ jobs:
use_scalapack:
- no-scalapack
include:
- - os: [self-hosted, ubuntu18]
+ - os: [self-hosted, ubuntu22]
mpi_impl: openmpi
cxx: clang++
cc: clang
fc: gfortran
backend: ga
use_cuda: no-cuda
use_scalapack: no-scalapack
- - os: [self-hosted, ubuntu18]
+ - os: [self-hosted, ubuntu22]
mpi_impl: openmpi
cxx: g++
cc: gcc
fc: gfortran
backend: ga
use_cuda: cuda
use_scalapack: no-scalapack
- - os: [self-hosted, ubuntu18]
+ - os: [self-hosted, ubuntu22]
mpi_impl: openmpi
cxx: g++
cc: gcc
fc: gfortran
backend: upcxx
use_cuda: cuda
use_scalapack: no-scalapack
- - os: [self-hosted, ubuntu18]
+ - os: [self-hosted, ubuntu22]
mpi_impl: openmpi
cxx: g++
cc: gcc
@@ -110,7 +110,7 @@ jobs:
shell: bash

- name: Set cache path linux
- if: ${{ matrix.os[1] == 'ubuntu18' }}
+ if: ${{ matrix.os[1] == 'ubuntu22' }}
id: set-cache-path-linux
run: |
echo "exachem_cache_path=/hpc/software/CI/cache/exachem_cache" >> $GITHUB_ENV
@@ -139,7 +139,7 @@ jobs:
echo "USE_SCALAPACK=ON" >> $GITHUB_ENV
- name: load llvm
- if: ${{ matrix.cc == 'clang' && matrix.cxx == 'clang++' && matrix.os[1] == 'ubuntu18' }}
+ if: ${{ matrix.cc == 'clang' && matrix.cxx == 'clang++' && matrix.os[1] == 'ubuntu22' }}
shell: bash
run: |
module load llvm/11.1.0
@@ -220,7 +220,7 @@ jobs:
# TAMM build
git clone https://github.com/NWChemEx/TAMM $GITHUB_WORKSPACE/TAMM
cd $GITHUB_WORKSPACE/TAMM
- cmake -H. -Bbuild -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_PATH }} -DLINALG_VENDOR=${{ env.LA_VENDOR }} -DGPU_ARCH=70 -DMODULES="CC" -DUSE_CUDA=${{ env.USE_CUDA }} -DUSE_SCALAPACK=${{ env.USE_SCALAPACK }}
+ cmake -H. -Bbuild -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_PATH }} -DLINALG_VENDOR=${{ env.LA_VENDOR }} -DGPU_ARCH=70 -DMODULES="CC" -DTAMM_ENABLE_CUDA=${{ env.USE_CUDA }} -DUSE_SCALAPACK=${{ env.USE_SCALAPACK }}
cd build
make -j${{ env.EC_NPROC }}
make install
@@ -250,7 +250,7 @@ jobs:
# TAMM build
git clone https://github.com/NWChemEx/TAMM $GITHUB_WORKSPACE/TAMM
cd $GITHUB_WORKSPACE/TAMM
- UPCXX_CODEMODE=O3 CXX=upcxx cmake -H. -Bbuild -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_PATH }} -DGPU_ARCH=70 -DMODULES="CC" -DUSE_UPCXX=ON -DMPIRUN_EXECUTABLE=${{ env.CI_MPIEXEC }}
+ UPCXX_CODEMODE=O3 CXX=upcxx cmake -H. -Bbuild -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_PATH }} -DGPU_ARCH=70 -DMODULES="CC" -DTAMM_ENABLE_CUDA=${{ env.USE_CUDA }} -DUSE_UPCXX=ON -DMPIRUN_EXECUTABLE=${{ env.CI_MPIEXEC }}
cd build
UPCXX_NETWORK=smp UPCXX_CODEMODE=O3 make -j${{ env.EC_NPROC }}
UPCXX_NETWORK=smp UPCXX_CODEMODE=O3 make install
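For a local (non-CI) TAMM build, the only change this commit requires is the renamed CUDA switch. A minimal configure sketch, where the install path is a placeholder and GPU_ARCH=70 mirrors the CI value above:

::

# Sketch of a CUDA-enabled TAMM configure using the renamed option.
# <tamm-install-path> is a placeholder; set GPU_ARCH for your hardware.
git clone https://github.com/NWChemEx/TAMM && cd TAMM
cmake -H. -Bbuild -DCMAKE_INSTALL_PREFIX=<tamm-install-path> -DGPU_ARCH=70 -DMODULES="CC" -DTAMM_ENABLE_CUDA=ON
cd build && make -j4 install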
2 changes: 1 addition & 1 deletion README.md
@@ -18,7 +18,7 @@

## Build

- Build instructions are available [here](docs/install.md)
+ Build instructions are available [here](https://exachem.readthedocs.io/en/latest/install.html)

## ExaChem Citation
#### Please cite the following reference when publishing results obtained with ExaChem.
1 change: 1 addition & 0 deletions docs/index.rst
@@ -6,6 +6,7 @@ ExaChem Documentation
:caption: Contents:

introduction
+ install
user_guide/user
developer_guide/developer

42 changes: 0 additions & 42 deletions docs/install.md

This file was deleted.

62 changes: 62 additions & 0 deletions docs/install.rst
@@ -0,0 +1,62 @@

The installation instructions for this repository are the same as those
for the `TAMM Library <https://github.com/NWChemEx/TAMM>`__.

Installation
============

- `Software
Requirements <https://tamm.readthedocs.io/en/latest/prerequisites.html>`__

- `Build
Instructions <https://tamm.readthedocs.io/en/latest/install.html>`__

Dependencies
------------

In addition to the TAMM `dependencies <https://tamm.readthedocs.io/en/latest/install.html>`__, the following ExaChem dependencies are also automatically built by TAMM.

* Libint
* Libecpint

Build instructions for a quick start
------------------------------------

Step 1

::

git clone https://github.com/NWChemEx/TAMM.git
cd TAMM && mkdir build && cd build

A detailed list of the available CMake build options is given
`here <https://tamm.readthedocs.io/en/latest/install.html>`__.

::

CC=gcc CXX=g++ FC=gfortran cmake -DCMAKE_INSTALL_PREFIX=<exachem-install-path> -DMODULES="CC;DFT" ..
make -j4 install

Step 2

::

git clone https://github.com/ExaChem/exachem.git
cd exachem && mkdir build && cd build
CC=gcc CXX=g++ FC=gfortran cmake -DCMAKE_INSTALL_PREFIX=<exachem-install-path> -DMODULES="CC;DFT" ..
make -j4

``NOTE:`` The cmake configure line in Steps 1 and 2 should be the same.


Running the code
----------------

::

export OMP_NUM_THREADS=1
export INPUT_FILE=$REPO_ROOT_PATH/inputs/ozone.json

mpirun -n 3 $REPO_INSTALL_PATH/bin/ExaChem $INPUT_FILE
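As a concrete smoke test of the lines above, with the checkout and install under hypothetical $HOME paths standing in for REPO_ROOT_PATH and REPO_INSTALL_PATH:

::

# Paths are illustrative stand-ins for REPO_ROOT_PATH / REPO_INSTALL_PATH.
export OMP_NUM_THREADS=1
mpirun -n 3 $HOME/install/exachem/bin/ExaChem $HOME/exachem/inputs/ozone.json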
4 changes: 1 addition & 3 deletions docs/user_guide/cholesky_decomposition.rst
@@ -33,6 +33,4 @@ Options used in the Cholesky decomposition of atomic-orbital based two-electron

The following options are applicable only for calculations involving :math:`\geq` 1000 basis functions. They are used for restarting the cholesky decomposition procedure.

- :write_cv: ``[default=[false,5000]]`` When enabled, it performs parallel IO to write the tensor containing the AO cholesky vectors to disk. Enabling this option implies restart.
- The integer represents a count, indicating that the Cholesky vectors should be written to disk after every *count* vectors are computed.

+ :write_cv: ``[default=[false,5000]]`` When enabled, it performs parallel IO to write the tensor containing the AO cholesky vectors to disk. Enabling this option implies restart. The integer represents a count, indicating that the Cholesky vectors should be written to disk after every *count* vectors are computed.
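For illustration, enabling the option with a shorter write interval might appear in an input file as follows (a sketch; the enclosing input section is not shown in this hunk):

::

"write_cv": [true, 1000]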
6 changes: 3 additions & 3 deletions exachem/cc/ccsd_t/ccsd_t.cmake
@@ -11,20 +11,20 @@ set(CCSD_T_COMMON_SRCS
${CCSD_T_SRCDIR}/fused_common.hpp
)

- if(USE_CUDA)
+ if(TAMM_HAS_CUDA)
set(CCSD_T_SRCS ${CCSD_T_COMMON_SRCS}
${CCSD_T_SRCDIR}/ccsd_t_all_fused.hpp
${CCSD_T_SRCDIR}/ccsd_t_all_fused_gpu.cu
${CCSD_T_SRCDIR}/ccsd_t_all_fused_nontcCuda_Hip_Sycl.cpp)

set_source_files_properties(${CCSD_T_SRCDIR}/ccsd_t_all_fused_nontcCuda_Hip_Sycl.cpp PROPERTIES LANGUAGE CUDA)
- elseif(USE_HIP)
+ elseif(TAMM_HAS_HIP)
set(CCSD_T_SRCS ${CCSD_T_COMMON_SRCS}
${CCSD_T_SRCDIR}/ccsd_t_all_fused.hpp
${CCSD_T_SRCDIR}/ccsd_t_all_fused_nontcCuda_Hip_Sycl.cpp)

set_source_files_properties(${CCSD_T_SRCDIR}/ccsd_t_all_fused_nontcCuda_Hip_Sycl.cpp PROPERTIES LANGUAGE HIP)
- elseif(USE_DPCPP)
+ elseif(TAMM_HAS_DPCPP)
set(CCSD_T_SRCS ${CCSD_T_COMMON_SRCS}
${CCSD_T_SRCDIR}/ccsd_t_all_fused.hpp
${CCSD_T_SRCDIR}/ccsd_t_all_fused_nontcCuda_Hip_Sycl.cpp)
15 changes: 8 additions & 7 deletions support/spack/packages/exachem/package.py
@@ -1,4 +1,4 @@
- # Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
+ # Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -20,22 +20,23 @@ class Exachem(CMakePackage,CudaPackage):
depends_on('mpi')
depends_on('intel-oneapi-mkl +cluster')
depends_on('[email protected]:')
- depends_on('cuda@11.5:', when='+cuda')
+ depends_on('cuda@11.8:', when='+cuda')
depends_on('hdf5 +mpi')
- # Still need to update libint recipe for 2.7.x
- #depends_on('[email protected]:')
+ # Still need to update libint recipe for 2.9.x
+ #depends_on('[email protected]:')
conflicts("+cuda", when="cuda_arch=none")

def cmake_args(self):
args = [
# This was not able to detect presence of libint in first test
#'-DLibInt2_ROOT=%s' % self.spec['libint'].prefix,
- '-DMODULES=CC',
+ '-DMODULES=CC;DFT',
'-DHDF5_ROOT=%s' % self.spec['hdf5'].prefix,
'-DLINALG_VENDOR=IntelMKL',
'-DLINALG_PREFIX=%s' % join_path(self.spec['intel-oneapi-mkl'].prefix, 'mkl', 'latest'),
]
if '+cuda' in self.spec:
- args.extend([ '-DUSE_CUDA=ON',
- ])
+ args.append( "-DTAMM_ENABLE_CUDA=ON" )
+ args.append("-DGPU_ARCH=" + self.spec.variants["cuda_arch"].value)

return args
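With this recipe change, a CUDA build via Spack forwards -DTAMM_ENABLE_CUDA=ON together with -DGPU_ARCH=<cuda_arch>; a typical request (arch value illustrative) would be:

::

spack install exachem +cuda cuda_arch=70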
15 changes: 8 additions & 7 deletions support/spack/packages/tamm/package.py
@@ -1,4 +1,4 @@
- # Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
+ # Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -23,22 +23,23 @@ class Tamm(CMakePackage,CudaPackage):
depends_on('mpi')
depends_on('intel-oneapi-mkl +cluster')
depends_on('[email protected]:')
- depends_on('cuda@11.5:', when='+cuda')
+ depends_on('cuda@11.8:', when='+cuda')
depends_on('hdf5 +mpi')
- # Still need to update libint recipe for 2.7.x
- #depends_on('[email protected]:')
+ # Still need to update libint recipe for 2.9.x
+ #depends_on('[email protected]:')
conflicts("+cuda", when="cuda_arch=none")

def cmake_args(self):
args = [
# This was not able to detect presence of libint in first test
#'-DLibInt2_ROOT=%s' % self.spec['libint'].prefix,
- '-DMODULES=CC',
+ '-DMODULES=CC;DFT',
'-DHDF5_ROOT=%s' % self.spec['hdf5'].prefix,
'-DLINALG_VENDOR=IntelMKL',
'-DLINALG_PREFIX=%s' % join_path(self.spec['intel-oneapi-mkl'].prefix, 'mkl', 'latest'),
]
if '+cuda' in self.spec:
- args.extend([ '-DUSE_CUDA=ON',
- ])
+ args.append( "-DTAMM_ENABLE_CUDA=ON" )
+ args.append("-DGPU_ARCH=" + self.spec.variants["cuda_arch"].value)

return args
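The tamm recipe mirrors the same rename. The concretized spec can be previewed before building, e.g.:

::

spack spec tamm +cuda cuda_arch=70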
