SPD domain-specific batch normalization to crack interpretable unsupervised domain adaptation in EEG
This repository contains code and data accompanying the NeurIPS 2022 publication *SPD domain-specific batch normalization to crack interpretable unsupervised domain adaptation in EEG* ([preprint], [publication]).
All dependencies are managed with the conda package manager. Please follow the conda user guide to install conda.
Once the setup is completed, the dependencies can be installed in a new virtual environment:

```bash
conda env create --file environment.yaml --prefix ./venv
```
Currently 5 public EEG BCI datasets are supported: BNCI2014001, BNCI2015001, Lee2019, Stieger2021 and Hinss2021.
The moabb and mne packages are used to download and preprocess these datasets.
Note: there is no need to manually download and preprocess the datasets. This is done automatically on the fly; datasets will be downloaded into the directory ~/mne_data, unless the environment variable MNE_DATA is set and points to another directory.
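For example, to store the (potentially large) datasets somewhere other than the home directory, the environment variable can be exported before running any experiment; the path below is only a placeholder:

```bash
# Redirect automatic dataset downloads to a custom directory
# (placeholder path; use any writable location).
export MNE_DATA=/data/eeg_datasets
```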
To make sure that the correct conda environment is activated and the working directory is set properly, run these commands:

```bash
conda activate ./venv
cd experiments
```
To train and evaluate the proposed model (i.e., SPD domain-specific momentum batch normalization (SPDDSMBN)) in the inter-session TL scenario with a specific dataset, run this command:
```bash
python main.py dataset=<bnci2014001|bnci2015001|lee2019|stieger2021|hinss2021>
```
For the inter-subject TL scenario, run:

```bash
python main.py evaluation=inter-subject+uda dataset=<bnci2014001|bnci2015001|lee2019|stieger2021|hinss2021>
```
Note that the hydra package is used to manage the configuration files, so hydra's command-line override syntax can be used to modify the configuration.
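For instance, hydra's built-in CLI features can be combined with the commands above; the following is only a sketch, and any additional configuration keys must match the configuration files shipped with this repository:

```bash
# Print the composed configuration for a run without starting it
python main.py dataset=bnci2014001 --cfg job

# Sweep over several datasets in one call using hydra's multirun mode
python main.py --multirun dataset=bnci2014001,lee2019 evaluation=inter-subject+uda
```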
To run all the experiments with the public EEG datasets, run this command:

```bash
./run_experiments.sh
```
This can take quite some time because the script loops over all datasets, models (including SPDDSMBN) and evaluation scenarios. Note that the newly computed results will overwrite the pre-computed results shipped with this repository.
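Conceptually, the script boils down to a nested loop like the sketch below (illustrative only; the shipped run_experiments.sh additionally iterates over the different models/ablation variants and may pass further options):

```bash
#!/bin/bash
# Illustrative sketch only, not the shipped script.
for dataset in bnci2014001 bnci2015001 lee2019 stieger2021 hinss2021; do
  # inter-session TL scenario (default evaluation)
  python main.py dataset=$dataset
  # inter-subject TL scenario
  python main.py dataset=$dataset evaluation=inter-subject+uda
done
```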
To generate the figures and tables of the paper, the pre-computed models/results distributed with this repository can be used. To create the figures, run these scripts:
| Figure | Command |
|---|---|
| Figure 1 | `python figure1.py` |
| Figure 2 | `python figure2.py` |
| Figure 3 | `python figure3.py` |
To list the dataset-specific results and summarize the ablation study (Table 1), run:

```bash
python tables.py
```