Home
This repository contains a software framework for reproducible machine learning experiments on automatic classification of Alzheimer's disease (AD) using multimodal MRI and PET data from three publicly available datasets: ADNI, AIBL and OASIS. It is developed by the ARAMIS Lab.
In recent years, the number of papers on Alzheimer's disease classification has increased dramatically, generating interesting methodological ideas on the use of machine learning and feature extraction methods. However, their practical impact is much more limited and, in the end, it is very difficult to tell which of these approaches are the most efficient. While the vast majority of these works make use of ADNI, an objective comparison between approaches is impossible due to variations in the subjects included, image pre-processing, performance metrics and cross-validation procedures. Here, we propose a framework for reproducible classification experiments using multimodal MRI and PET data from ADNI, AIBL and OASIS. The core components are:
- code to automatically convert the full ADNI/AIBL/OASIS databases into BIDS format;
- a modular architecture based on Nipype, making it easy to plug in different classification and feature extraction tools;
- feature extraction pipelines for anatomical MRI, diffusion MRI and FDG-PET data;
- baseline classification approaches for unimodal and multimodal features.
This code relies heavily on the Clinica software platform, which you will need to install.
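As a rough sketch (the authoritative instructions are in the Clinica documentation, and the environment name below is just an example), installing a recent Clinica release typically amounts to:

```bash
# Example only: create an isolated environment and install Clinica from PyPI.
# See the Clinica documentation for the officially supported procedure and the
# Python version required by your Clinica release.
conda create -n clinica_env python=3.9
conda activate clinica_env
pip install clinica
```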
If you use this software, please cite:
J. Samper-Gonzalez, N. Burgos, S. Bottani, S. Fontanella, P. Lu, A. Marcoux, A. Routier, J. Guillon, M. Bacci, J. Wen, A. Bertrand, H. Bertin, M.-O. Habert, S. Durrleman, T. Evgeniou and O. Colliot, Reproducible evaluation of classification methods in Alzheimer's disease: Framework and application to MRI and PET data. NeuroImage, 183:504–521, 2018. doi:10.1016/j.neuroimage.2018.08.042 - Paper in PDF - Supplementary material
In addition, if you use Diffusion MRI data or related code, please cite:
J. Wen, J. Samper-Gonzalez, S. Bottani, A. Routier, N. Burgos, T. Jacquemont, S. Fontanella, S. Durrleman, S. Epelbaum, A. Bertrand, and O. Colliot, Reproducible evaluation of diffusion MRI features for automatic classification of patients with Alzheimer’s disease. Submitted for publication
The step-by-step instructions/scripts are located in the Generic-Version folder.
Since most of the developed code has been integrated into the Clinica software platform, you will find that, once Clinica has been installed, it takes only a few commands to run all the experiments.
Note that this will run the experiments using the latest versions of Clinica and of the datasets, which are thus more advanced than those used in the published papers. If you want to reproduce the experiments exactly as they were in the papers, you will find the instructions/code under the Paper-Specific-Versions folders.
There are four main steps:
1- ADNI/AIBL/OASIS conversion: In a terminal, you need to use the command clinica convert adni-to-bids (or aibl-to-bids / oasis-to-bids). You have to provide as inputs the directory containing the downloaded images, the directory containing all the CSV files provided with the dataset, and the directory that will contain the output dataset in BIDS format. It is important that you do not modify the downloaded data.
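As an illustration, a conversion call might look like the sketch below; the paths are placeholders and the exact positional arguments can be checked with the converter's --help option:

```bash
# Sketch only: convert the raw ADNI download to BIDS.
# Placeholders: <downloaded images> <provided CSV files> <output BIDS directory>
clinica convert adni-to-bids /data/ADNI/images /data/ADNI/clinical_csv /data/ADNI_BIDS

# The AIBL and OASIS converters follow the same pattern, e.g.:
clinica convert oasis-to-bids /data/OASIS/images /data/OASIS/clinical_csv /data/OASIS_BIDS
```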
2- Create subjects lists: Here we provide a Python script that you have to personalize so that the adni_bids and subjects_path variables point to the BIDS directory created at the previous step and to the subject directory that will contain the lists of subjects. You will obtain our default subject lists. The first list will contain all the subjects with a T1 image at the baseline visit, while the second one will contain all the subjects with both T1 and FDG PET scans, also at baseline. In a second step, the lists containing the classes (diagnoses) that will be used for classification are generated.
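For example, assuming the script from the Generic-Version folder is called create_subjects_lists.py (the actual file name may differ), you would edit the two variables and then run it; the paths below are placeholders:

```bash
# Hypothetical paths -- set the variables inside the provided Python script first:
#   adni_bids     -> BIDS dataset produced in step 1, e.g. /data/ADNI_BIDS
#   subjects_path -> directory where the subject lists will be written, e.g. /data/subjects_lists
python create_subjects_lists.py   # script name is illustrative; see the Generic-Version folder
```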
3- Preprocessing pipelines: In this text file you can find the Clinica commands that need to be executed to preprocess the images. For T1 images, it will execute clinica run t1-volume. For that, you need to provide the BIDS directory, the CAPS directory that will contain your output, a name to identify your group of subjects and the TSV file containing the list of subjects with a T1 image at baseline (obtained at the previous step).
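As an illustration, the T1 pipeline call might look like the sketch below; the paths and the group label are placeholders, and the exact options (in particular the one used to pass the subjects TSV file) should be checked against your Clinica version with --help:

```bash
# Sketch only: run the t1-volume pipeline on the baseline T1 subjects.
# Placeholders: <BIDS directory> <output CAPS directory> <group label> and the subjects TSV file.
clinica run t1-volume /data/ADNI_BIDS /data/ADNI_CAPS ADNIbl \
    -tsv /data/subjects_lists/subjects_T1_baseline.tsv
```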