Logging into argon to access our server and submit jobs

Introduction to our HPC Argon

"Argon" is our high performance computer (HPC) that allows us to run software using clusters of computers linked together. This allows us to have access to a much more powerful computer than we could afford to have for each individual in the lab. Nevertheless each of us has access to it!

  • Video introduction to HPC concepts and terms

Others use this resource as well, so think of Argon as a community of scientists sharing a community of computers. Learning the expectations for use is part of being a good citizen in the local HPC community. :)

  • How common is it for neuroimaging labs to use HPCs? It's becoming more common. For an overview of different lab computing models for neuroimaging or other computing-intensive work, see here.

Argon resources:

Steps to log in:

  1. If you're off campus, you'll need to be logged into the VPN
  2. It's often helpful at first to also have our LSS server (://itf-rs-store15.hpc.uiowa.edu/vosslabhpc/) mapped to your local drive. This lets you browse the same filesystem you'll be accessing through Argon in the terminal.
  3. On your local computer, open a shell terminal
  4. At the prompt, type ssh [email protected] (replacing hawkid with your own HawkID)
  • two-stage security step
  • information & links about argon usage
  5. Get to know where you are by typing pwd to see your present working directory and ls to list its contents
  6. Access our server by moving into /Shared/vosslabhpc/: type/paste cd /Shared/vosslabhpc/ (see the sketch below)
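
Putting those steps together, a first session might look something like this (hawkid is a placeholder for your own HawkID):

ssh [email protected]     # log in; you'll be prompted for your password and the two-step security check
pwd                                # print your present working directory (your home directory on argon)
ls                                 # list its contents
cd /Shared/vosslabhpc/             # move to our lab's server
ls                                 # you should now see the lab's folders (e.g., Projects, UniversalSoftware)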

You can now run software on data stored on our server, using the computing power of the HPC! But before you can do that, you need to learn a little more about how software is run on a computing cluster. Because the cluster is a shared community resource, it works differently than running software interactively on your local computer. The two basic pieces you need are (1) the concept of a "job" submission, where you write what you want to do in a shell script and submit it to the cluster's scheduler, and (2) the commands inside that shell script, which specify how much computing power you need, who to contact when issues come up, and how to access the software you need from inside the cluster environment.

Jobs:
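
Below are example job scripts from our projects. As a minimal annotated sketch (the email address, log paths, and final command here are placeholders to adapt), the pieces of a job script are:

#!/bin/bash

#$ -pe smp 16                     # request 16 slots in the shared-memory (smp) parallel environment
#$ -q UI                          # submit to the UI queue
#$ -m bea                         # send email when the job begins, ends, or aborts
#$ -M [email protected]            # address for those emails (placeholder)
#$ -o /path/to/job_logs/out/      # directory for the job's standard output log
#$ -e /path/to/job_logs/err/      # directory for the job's standard error log
OMP_NUM_THREADS=8                 # thread setting for OpenMP-aware software (used in the examples below)

# the work itself: typically software run from a Singularity container on our server
singularity run --cleanenv /Shared/vosslabhpc/UniversalSoftware/SingularityContainers/<container>.sif <command>

The first real example runs FSL FEAT on one subject's breath-hold analysis: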

#!/bin/bash

#$ -pe smp 16
#$ -q UI
#$ -m bea
#$ -M [email protected]
#$ -o /Shared/vosslabhpc/Projects/Bike_ATrain/Imaging/BIDS/derivatives/breathold/code/job_logs/out/
#$ -e /Shared/vosslabhpc/Projects/Bike_ATrain/Imaging/BIDS/derivatives/breathold/code/job_logs/err/
OMP_NUM_THREADS=8

singularity run --cleanenv /Shared/vosslabhpc/UniversalSoftware/SingularityContainers/fsl-v6.0.1.sif \
feat /Shared/vosslabhpc/Projects/Bike_ATrain/Imaging/BIDS/derivatives/breathold/sub-TIV157/ses-pre/sub-TIV157.fsf
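
The next example runs MRIQC on a single participant from the CourseData dataset. Note the -B flag, which binds our server to /mnt inside the container so the container can read and write our data there:
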
#!/bin/sh

#$ -pe smp 16
#$ -q UI
#$ -m bea
#$ -M [email protected]
#$ -o /Shared/vosslabhpc/Projects/CourseData/ds003030/derivatives/code/mriqc/out
#$ -e /Shared/vosslabhpc/Projects/CourseData/ds003030/derivatives/code/mriqc/err
OMP_NUM_THREADS=10
singularity run -H ${HOME}/singularity_home -B /Shared/vosslabhpc:/mnt \
/Shared/vosslabhpc/UniversalSoftware/SingularityContainers/mriqc-v0.16.1.sif \
/mnt/Projects/CourseData/ds003030/ /mnt/Projects/CourseData/ds003030/derivatives/mriqc_v0.16.1 \
participant --participant_label 01 \
-w /nfsscratch/Users/mwvoss/work/CourseDataMRIqc \
--n_procs 10 --mem_gb 35 --write-graph \
--fd_thres 0.2 --start-idx 4
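
The third example runs fMRIPrep v20.2.0 on a single participant, using a BIDS filter file (shown after the script) to select which images are processed: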

#!/bin/bash

#$ -pe smp 16
#$ -q UI
#$ -m bea
#$ -M [email protected]
#$ -o /Shared/vosslabhpc/Projects/AMBI/3-Experiment/2-Data/Imaging/BIDS/derivatives/code/fmriprep_v20.2.0/out
#$ -e /Shared/vosslabhpc/Projects/AMBI/3-Experiment/2-Data/Imaging/BIDS/derivatives/code/fmriprep_v20.2.0/err
OMP_NUM_THREADS=10
singularity run --cleanenv -H ${HOME}/singularity_home -B /Shared/vosslabhpc:/mnt \
/Shared/vosslabhpc/UniversalSoftware/SingularityContainers/fmriprep-v20.2.0.sif \
/mnt/Projects/AMBI/3-Experiment/2-Data/Imaging/BIDS/ /mnt/Projects/AMBI/3-Experiment/2-Data/Imaging/BIDS/derivatives/fmriprep_v20.2.0 \
--skip-bids-validation \
participant --participant_label 001 \
--bids-filter-file /Shared/vosslabhpc/Projects/AMBI/3-Experiment/2-Data/Imaging/BIDS/derivatives/code/fmriprep_v20.2.0/job_scripts/TEMPLATE-filter.json \
-w /nfsscratch/Users/mwvoss/work/fmriprep_AMBI_v20.2 \
--write-graph --mem_mb 35000 --omp-nthreads 10 --nthreads 16 --output-spaces {T1w,MNI152NLin2009cAsym,fsaverage5} --cifti-output --use-aroma \
--fs-license-file /mnt/UniversalSoftware/freesurfer_license.txt

Example filter file:

{
    "t1w": {
        "datatype": "anat",
        "acquisition": null,
        "suffix": "T1w"
    },
    "bold": {
        "datatype": "func",
        "suffix": "bold"
    }
}

Submitting jobs

  • submit a job with the command qsub jobfilename.job (see the example below)
  • you will get an email if you specified that option
  • check whether it's running on Argon with the command qstat -u hawkid
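
For example, from the directory that holds the job file (the filename here is hypothetical):

qsub feat_breathhold.job          # prints a job ID once the job is accepted into the queue
qstat -u hawkid                   # list your queued and running jobs; an empty listing means your jobs have finished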

What is Singularity in these job files?

  • Singularity is software for running pre-built software "containers," which provide a stable environment with a piece of software and its dependencies, while still letting you read and write files outside the container (see the sketch after this list)
  • Introductory videos on singularity containers, building them, and interacting with them
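
As a quick sketch of interacting with a container directly (using the FSL container from the job examples above; the flirt -version call is just an illustration), you can open a shell inside it or run a single command:

# open an interactive shell inside the container's environment
singularity shell /Shared/vosslabhpc/UniversalSoftware/SingularityContainers/fsl-v6.0.1.sif

# or run one command from the container without starting an interactive session
singularity exec /Shared/vosslabhpc/UniversalSoftware/SingularityContainers/fsl-v6.0.1.sif flirt -version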