The FORBOW Brain Study Manual is divided into the five major sections below:
- Background info: structure of the brain study.
- Before the scan: outlines how to identify and schedule participants.
- During the scan: steps to be executed at the time of scanning.
- After the scan: practices and procedures to complete when scanning is done.
- Sections 1-4 are covered at this hyperlink (request access if not visible).
- Preprocessing and PBIL: the focus of this section.
(typically 4 hours per subject)
In this step we will run the Human Connectome Project Minimal Preprocessing Pipelines, which are documented in detail here and here (and, naturally, on our computers). They have been modified to work with our scanner. In essence, these pipelines are a series of scripts that use programs such as FreeSurfer, FSL, and other tools to clean, register, segment, and generally process the data in accordance with the best methods currently available.
This step relies on completion of all previous steps in this document.
- Open the terminal.
- Navigate to the `_scripts` folder in /shared/uher/FORBOW_Brain/neuro_data/Biotic3T/
- Drag the `1_run_hcp_all.sh` script into the terminal.
  - Alternatively, `cd` into the `_scripts` folder, type `run` and hit Tab to autocomplete, then type `.` and hit Tab to autocomplete again.
- In the terminal, after the script path, type the subject IDs that need to be preprocessed (space separated), for example `032_A 033_A 031_B`.
- Press Enter to run the script on the specified subjects (a full example follows this list).
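For reference, a complete invocation might look like the following. The subject IDs are the illustrative ones above, and the script path follows the `_scripts` folder from the navigation step (it may differ from the `_1_scripts` path used in the parallel example further down):

/shared/uher/FORBOW_Brain/neuro_data/Biotic3T/_scripts/1_run_hcp_all.sh 032_A 033_A 031_B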
To run the participants in parallel and staggered, you can use a different command (see the example below).
./_1_scripts/parallel_stagger.sh -j 4 -d 600 -s /shared/uher/FORBOW/analysis/_1_scripts/1_run_hcp_all.sh ::: 031_B_NP 031_B_FLAIR 035_A_NP 035_A_FLAIR 016_C_NP 016_C_FLAIR 011_D_NP 011_D_FLAIR
Here the parallel command runs the scripts on separate CPU cores, and `-j` followed by a number specifies the number of simultaneous jobs. This should be proportional to the CPU cores and RAM available, but do not allocate 100% of them this way, as some of the scripts it runs launch their own parallel processing.
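As a rough guide when choosing `-j`, you can check the available cores and memory first (standard Linux utilities, not part of the FORBOW scripts):

nproc
free -h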
There may be times when you want to run the scripts separately rather than relying on the `1_run_hcp_all.sh` script above. To do that:
- `cd` into the `log` folder of the to-be-run participant's directory, so that the output of the scripts gets saved in that participant's log directory.
- Then drag the appropriate script into the terminal, e.g. `HCP_1_PreFreeSurferPipelineBatch.sh`.
- Specify the subject ID; it is better to do one per tab/terminal to keep track.
  - This involves adding `--Subjlist=` followed by the subject ID (see the example after this list).
- Keep track of the scripts run with the track and QC spreadsheet.
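A minimal sketch of such a single-script run (the subject ID is illustrative, and the exact options accepted by each batch script may differ):

./HCP_1_PreFreeSurferPipelineBatch.sh --Subjlist=032_A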
A note about runtimes:

| Script | Approximate runtime |
| --- | --- |
| HCP_1_PreFreeSurferPipelineBatch.sh | ~1 hour |
After the `1_run_hcp_all.sh` script completes, check each individual's log folder for errors.
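One quick, illustrative way to scan a participant's logs for problems (the path and subject ID are examples, not fixed conventions):

grep -ril "error" ./032_A_FLAIR/logs/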
- For simplicity, one main script wraps the subject-level workflow scripts to completely process a new dataset:
./_1_scripts/NIHPD_RUN_Analysis_Pipeline.sh <SSID_SESS>
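For example, assuming `<SSID_SESS>` takes the form subject plus session (matching the subject IDs used earlier):

./_1_scripts/NIHPD_RUN_Analysis_Pipeline.sh 032_A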
If everything ran perfectly, then the following output is created:
./SSID_SESS_FLAIR/
    DWI/
    logs/
    MNI-Nonlinear/
    Myelin/
    T1w/
        SSID_SESS_FLAIR/
            label/
            mri/
            scripts/
            stats/
            surf/
    T2w/
    unprocessed/
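A quick, illustrative sanity check that each session directory contains the expected output (the glob and folder name follow the layout above and the reporting command below; adjust as needed):

for d in ???_?_FLAIR; do [ -d "$d/MNI-Nonlinear" ] || echo "missing MNI-Nonlinear in $d"; done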
- And this dataset can then be included when calling the master group reporting script:
./NIHDP_report_values2csv_HCP_WideFormat.sh $(ls -d ???_?_FLAIR)
Here the shell glob `???_?_FLAIR` expands to every processed session directory (e.g. `032_A_FLAIR`), so the report covers all completed datasets.
Processed data must undergo automated QC with the Qoala-T tool: GitHub Repo, journal article.
- Run the Qoala-T script [to be linked here].
- Upload `aseg_stats.txt` and the left and right `aparc_area_[hemi].txt` and `aparc_thickness_[hemi].txt` to the Qoala-T Shiny app (see the sketch after this list for how such tables are typically generated).
- Click `Execute Qoala-T predictions` and save the results locally before uploading to Sync or GitHub.
- Upload to Sync:
  - Enter the password; an easier-to-remember link is http://bit.ly/VladData.
  - Tabular data can now be uploaded there, either by clicking Upload or by dragging the files over.
  - The contents are not visible, but if the upload shows 100% then the files have been added.
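For reference, the tables uploaded to the Shiny app are the kind produced by FreeSurfer's `asegstats2table` and `aparcstats2table` utilities. The sketch below shows one way to generate them for a single session; the `SUBJECTS_DIR` path and subject ID are assumptions based on the output layout above, and the project's own Qoala-T script (to be linked above) may do this differently:

export SUBJECTS_DIR=/shared/uher/FORBOW/analysis/032_A_FLAIR/T1w
asegstats2table --subjects 032_A_FLAIR --meas volume --tablefile aseg_stats.txt
aparcstats2table --subjects 032_A_FLAIR --hemi lh --meas area --tablefile aparc_area_lh.txt
aparcstats2table --subjects 032_A_FLAIR --hemi rh --meas area --tablefile aparc_area_rh.txt
aparcstats2table --subjects 032_A_FLAIR --hemi lh --meas thickness --tablefile aparc_thickness_lh.txt
aparcstats2table --subjects 032_A_FLAIR --hemi rh --meas thickness --tablefile aparc_thickness_rh.txt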