
Automated Brain Masking of Fetal Functional MRI Data

Preprint: https://www.biorxiv.org/content/early/2019/01/21/525386

Abstract: Fetal resting-state functional magnetic resonance imaging (rs-fMRI) has emerged as a critical new approach for characterizing brain development before birth. Despite rapid and widespread growth of this approach, at present we lack neuroimaging processing pipelines suited to address the unique challenges inherent in this data type. Here, we solve the most challenging processing step, rapid and accurate isolation of the fetal brain from surrounding tissue across thousands of non-stationary 3D brain volumes. Leveraging our library of 1,241 manually traced fetal fMRI images from 207 fetuses, we trained a Convolutional Neural Network (CNN) that achieved excellent performance across two held-out test sets from separate scanners and populations. Furthermore, we unite the auto-masking model with additional fMRI preprocessing steps from existing software and provide insight into our adaptation of each step. This work represents an initial advancement towards a fully comprehensive, open source workflow for fetal functional MRI data preprocessing.

Example of a very successful auto-mask:

Example of a failed auto-mask:

TODO: add a discussion of fetal head motion and how usable low-motion volumes are identified in the pipeline.

Primary data used for pipeline development were acquired at Wayne State University School of Medicine during the course of projects supported by National Institutes of Health (NIH) awards MH110793 and ES026022.

For access to the raw fetal functional time-series data and the training/validation/test masks used in the development of this code, please contact Moriah Thomason [email protected]

Repository organization

ISDP Google Colab link: https://colab.research.google.com/drive/10bBTVpCKQeR207hFhvH6F22h-p1edl5v

fetal_mask_tutorial.ipynb is a Google Colab notebook used during the ISDP 2019 pre-conference workshop. It is an example of running the code using Google's resources. I also introduce/provide links to BioImageSuite web, a useful tool (in your web browser) for viewing data, quality checking, and editing masks. https://bioimagesuiteweb.github.io/webapp/viewer.html#

checkpoints --> contains the saved models. 2018-06-07_14:07 is the model trained using the train/validation/test split (129, 20, 48 subjects; 855, 102, 211 volumes). 2018-06-08_10:47 is the model trained on all labeled WSU data and tested on Yale data.

code --> this directory contains all necessary scripts for running the pretrained model (createMasks.py), or training your own model (buildModel.py and trainModel.py).
code/FullFetalPreprocessPipeline.sh --> example pipeline using the auto-mask and FSL. s01_prep.sh shows how to zero-pad/resample images so they are ready to be auto-masked. s02_automask.sh is an example of how to activate the virtual environment and generate masks using the pre-trained model. s03_postprocessmasks.sh shows how to threshold/binarize the auto-masks, resample them back into native space, and apply the mask to extract the brain. s04_realign_normalize.sh shows the FSL commands used to realign and normalize. These scripts are meant as templates/examples; they have not been fully tested and still contain hard-coded paths.
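For orientation, a hypothetical wrapper chaining the four stage scripts for a single subject/run might look like the sketch below. The subject/run argument convention is invented here for illustration; the actual scripts in this repository use hard-coded paths.

```bash
#!/bin/bash
# Hypothetical driver chaining the four stage scripts for one subject/run.
# The arguments are illustrative placeholders only.
set -euo pipefail

subj=$1   # e.g. Subject001
run=$2    # e.g. run01

./s01_prep.sh "$subj" "$run"               # resample + zero-pad to 96 x 96 x N
./s02_automask.sh "$subj" "$run"           # activate venv, run the pre-trained CNN
./s03_postprocessmasks.sh "$subj" "$run"   # threshold/binarize, resample to native space, extract brain
./s04_realign_normalize.sh "$subj" "$run"  # FSL realignment and normalization
```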

figures --> Jupyter notebook(s) used to make the figures in the manuscript.

summaries --> contains the TensorBoard summaries for both models described above (see the checkpoints directory description). They can be viewed with: tensorboard --logdir=summaries/model_name. 10/17/19 update: TensorBoard is not correctly loading the summaries.

Installation & Requirements

Required libraries:
For running on a Mac CPU --> CPU_Mac_Requirements.txt (note: some of these libraries are probably unnecessary if you do not use Jupyter).
For running on Linux with a GPU --> GPU_Linux_Requirements.txt (note: tensorflow_gpu==1.11 requires CUDA 9.0; see https://www.tensorflow.org/install/gpu for more details).
For running in a Google Colab notebook --> Collab_Requirements.txt.
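A minimal setup sketch, assuming Python 3 and pip are available (note that tensorflow_gpu==1.11 only supports older Python versions, e.g. 3.6). The environment name is arbitrary:

```bash
# Create and activate a virtual environment, then install the platform-appropriate requirements.
python3 -m venv fetal-env                    # "fetal-env" is an arbitrary name
source fetal-env/bin/activate
pip install -r GPU_Linux_Requirements.txt    # use CPU_Mac_Requirements.txt on a Mac CPU
```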

Necessary data prep steps (prior to auto-masking)

Input data must have dimensions 96 x 96 x N. The data used in training were resampled to a voxel size of 3.5 mm isotropic, then zero-padded to 96 x 96 x 37. See s01_prep.sh for an example of preprocessing commands.
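A minimal sketch of one way to do this with FSL and AFNI; s01_prep.sh is the authoritative example, and the exact commands and filenames there may differ:

```bash
# Resample one functional volume to 3.5 mm isotropic voxels (FSL flirt),
# then zero-pad the grid to 96 x 96 x 37 (AFNI 3dZeropad).
# "bold_vol.nii.gz" is a placeholder filename.
flirt -in bold_vol.nii.gz -ref bold_vol.nii.gz -applyisoxfm 3.5 -out r_bold_vol.nii.gz
3dZeropad -RL 96 -AP 96 -IS 37 -prefix zpr_bold_vol.nii.gz r_bold_vol.nii.gz
```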

Options for running the auto-mask code

Use pre-trained model

Images should be in 3D volume format (split the 4D time series into individual volumes), and file naming should be consistent. Currently, the code expects images to be in a folder called "images/" and named "zpr_SubjectID_runID_vol0000.nii". createMasks.py is the script to run to create new masks using the model trained with all hand-drawn brain masks. To use the correct pre-trained model, make sure that line 152 of trainModel.py is set as follows: main(train=False, timeString='2018-06-08_10:47'). You will also need to edit lines 57-62 of createMasks.py to match the path to the directory where your data live. You can also specify (in createMasks.py, lines 64-67) a single file to be masked if you are not masking multiple volumes/subjects.
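For example, the 4D time series could be split into correctly named 3D volumes with FSL and then masked. This is only a sketch: it assumes the installation steps above, that createMasks.py has already been edited to point at your images/ directory, and the SubjectID/runID names are placeholders.

```bash
mkdir -p images
# Split the 4D series into 3D volumes named zpr_<SubjectID>_<runID>_vol0000.nii.gz, ...
fslsplit zpr_Subject001_run01.nii.gz images/zpr_Subject001_run01_vol -t
gunzip images/zpr_Subject001_run01_vol*.nii.gz   # expected filenames end in .nii, so uncompress

source fetal-env/bin/activate                    # environment from the installation sketch above
python createMasks.py
```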

Train model on your data

Instructions for training model on new data coming soon.

Other preprocessing steps

  1. Merge the auto-masks into a 4D file to view as an overlay on the data in order to quality check the masks (FSL commands for several of these steps are sketched after this list).
  2. Cluster and binarize the probability masks.
  3. Quality check the binarized masks.
  4. Resample the masks back into the subject's native space.
  5. Apply the binarized native-space brain mask to the raw data to extract the fetal brain and discard other tissues.
  6. Quality check.
  7. Realign the time series and identify usable low-motion volumes.
  8. Quality check.
  9. Normalize to an age-matched fetal template.
  10. Quality check.
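A hedged sketch of how several of these steps might look with common FSL tools; all filenames, the mask naming convention, the 0.5 threshold, and the template path are illustrative assumptions rather than values taken from the repository scripts:

```bash
# 1. Merge per-volume probability masks into a 4D file for QC overlay
fslmerge -t masks_4d.nii.gz masks/zpr_Subject001_run01_vol*_mask.nii.gz

# 2. Threshold and binarize a probability mask (0.5 is an assumed cutoff)
fslmaths mask_prob.nii.gz -thr 0.5 -bin mask_bin.nii.gz

# 4. Resample the binarized mask back into the subject's native space
flirt -in mask_bin.nii.gz -ref raw_bold_vol.nii.gz -applyxfm -usesqform \
      -interp nearestneighbour -out mask_native.nii.gz

# 5. Apply the native-space mask to the raw data to extract the brain
fslmaths raw_bold_vol.nii.gz -mas mask_native.nii.gz brain_vol.nii.gz

# 7. Realign the masked time series and save motion parameters
mcflirt -in brain_4d.nii.gz -out brain_mc.nii.gz -plots

# 9. Normalize to an age-matched fetal template (template filename is a placeholder)
flirt -in brain_mc_mean.nii.gz -ref fetal_template_32wk.nii.gz -out brain_norm.nii.gz -omat norm.mat
```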
