Using the Oak Ridge Leadership Computing Facility
These notes provide some basic information about using the computer called Titan at Oak Ridge National Laboratory. It is assumed you have an account and have logged in for the first time. Here are the basics for compiling and running an FDS job:
- Clone the firemodels/fds-smv repo as you would on any Linux cluster. Use these notes. Follow GitHub's instructions for generating SSH keys.
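  For example, a clone over SSH might look like this (a sketch, assuming your SSH key is already registered with GitHub; the target directory is just the default repo name):

  ```
  $ git clone git@github.com:firemodels/fds-smv.git
  $ cd fds-smv
  ```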
- Add the line

  ```
  module swap PrgEnv-pgi PrgEnv-intel
  ```

  to the `.bashrc` file. This will change the compiling environment from the default (PGI) to Intel. Source your `.bashrc`:

  ```
  $ source .bashrc
  ```
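  To confirm that the swap took effect, you can list the loaded modules (a quick check, nothing more):

  ```
  $ module list    # the loaded modules should now include PrgEnv-intel
  ```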
- Modify the FDS `makefile` entry as follows:

  ```
  mpi_intel_linux_64 : FFLAGS = -m64 -O2 -ipo -traceback $(GITINFO)
  mpi_intel_linux_64 : LFLAGS =
  mpi_intel_linux_64 : FCOMPL = ftn
  mpi_intel_linux_64 : FOPENMPFLAGS =
  mpi_intel_linux_64 : obj = fds_mpi_intel_linux_64
  mpi_intel_linux_64 : setup $(obj_mpi)
  	$(FCOMPL) $(FFLAGS) $(LFLAGS) -o $(obj) $(obj_mpi)
  ```

  Note that `mpifort` is replaced by `ftn` and the OpenMP flags are removed (just to avoid bothering with OpenMP). There is no need to worry about InfiniBand.
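  As a quick sanity check (a sketch, assuming the Cray `ftn` wrapper forwards version flags to the underlying compiler), you can ask `ftn` which compiler it now drives; it should report the Intel Fortran compiler:

  ```
  $ ftn --version
  ```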
- In the directory `mpi_intel_linux_64`, modify the `make_fds.sh` file as follows:

  ```
  #!/bin/bash
  dir=`pwd`
  target=${dir##*/}
  echo Building $target
  make -j4 VPATH="../../FDS_Source" -f ../makefile $target
  ```

  There is no need to set any paths or environment variables pointing to the compiler or the MPI library.
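  To build, run the script from the build directory (the directory and executable names below follow the repo layout referenced elsewhere on this page; adjust if your checkout differs, and run the script with `bash` if it is not executable):

  ```
  $ cd FDS_Compilation/mpi_intel_linux_64
  $ ./make_fds.sh
  $ ls fds_mpi_intel_linux_64    # the resulting executable
  ```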
- To run jobs, prepare a PBS script like this one (in this example, the name of the script is `job_name_script`):

  ```
  #!/bin/bash
  #PBS -A CMB115
  #PBS -N job_name
  #PBS -e /ccs/home/mcgratta/FDS-SMV/.../job_name.err
  #PBS -o /ccs/home/mcgratta/FDS-SMV/.../job_name.log
  #PBS -l nodes=2
  #PBS -l walltime=2:0:0
  cd $MEMBERWORK/cmb115
  aprun -n 32 /ccs/home/mcgratta/FDS-SMV/FDS_Compilation/mpi_intel_linux_64/fds_mpi_intel_linux_64 job_name.fds
  ```

  The `-A` option is your project code, used (I assume) for accounting purposes. The `walltime` is required, and you will be told immediately upon submitting the script if the requested time is not allowed. All jobs must be run from `$MEMBERWORK/cmb115`, which is your assigned work space. Each node on Titan has 16 cores, hence to run a job with 32 MPI processes you need to request 2 nodes.
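  Before submitting, copy the FDS input file into the work space, since the job runs from there (the input file name is a placeholder):

  ```
  $ cp job_name.fds $MEMBERWORK/cmb115/
  # node count: with 16 cores per node, request ceil(MPI processes / 16) nodes,
  # e.g. 32 MPI processes -> nodes=2
  ```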
- Run the job by submitting the script:

  ```
  $ qsub job_name_script
  ```
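  Standard PBS commands can be used to keep an eye on the job (the job ID is whatever `qsub` prints):

  ```
  $ qstat -u $USER    # list your queued and running jobs
  $ qdel <job_id>     # cancel a job if needed
  ```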
- Fetch results using `sftp` to the same address that you logged into using `ssh`. Use `mget` to grab multiple files and bring them back to your computer.
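  A typical session might look like the following (the host and remote path are placeholders, not the actual OLCF addresses):

  ```
  $ sftp username@<login-address>    # same host you ssh into
  sftp> cd <path-to-work-space>      # e.g. the directory the job ran in
  sftp> mget job_name*.csv           # grab multiple output files at once
  sftp> bye
  ```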