Official implementation of "On the Role of Low-Level Visual Features in EEG-Based Image Reconstruction".
conda env create -f environment.yml
conda activate BCI
pip install wandb
pip install einops
pip install open_clip_torch
pip install transformers==4.28.0.dev0
pip install diffusers==0.24.0
pip install braindecode==0.8.1
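After installing, a quick import check can catch version conflicts early. This is a minimal sketch that only assumes the packages above installed cleanly:

```python
# Optional sanity check: confirm the pinned packages import and report versions.
import torch
import einops
import open_clip
import transformers
import diffusers
import braindecode

for mod in (torch, einops, open_clip, transformers, diffusers, braindecode):
    print(mod.__name__, getattr(mod, "__version__", "unknown"))
print("CUDA available:", torch.cuda.is_available())
```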
In this study, we directly used the preprocessed EEG data and the VAE latents provided by Li et al., which can be downloaded from their Hugging Face repository. The raw visual stimuli can be downloaded from OSF.
After downloading, your data directories should look like:
EEG_data/
├── sub-01/
│   ├── preprocessed_eeg_training.npy
│   └── preprocessed_eeg_test.npy
├── sub-02/
│   ├── preprocessed_eeg_training.npy
│   └── preprocessed_eeg_test.npy
└── ...
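Once the files are in place, you can inspect them as below. This is a sketch that assumes the files follow the pickled-dict convention of the THINGS-EEG2 preprocessed data used by Li et al.; verify the keys against your own copy.

```python
import numpy as np

# Loading sketch: assumes each .npy file is a pickled dict
# (THINGS-EEG2 convention); check the keys on your own copy.
data = np.load("EEG_data/sub-01/preprocessed_eeg_training.npy",
               allow_pickle=True).item()
print(data.keys())  # expected (assumed): 'preprocessed_eeg_data', 'ch_names', 'times'
eeg = data["preprocessed_eeg_data"]  # assumed shape: (images, repetitions, channels, time)
print(eeg.shape)
```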
# First, modify the config file to specify the data folders
vi data_config.json
# Train the stage-1 high-level models for the 10 subjects
bash EEG_stage1_highlevel.sh --gpu 0 --data_path [your EEG path]
# Train the stage-2 diffusion models for the 10 subjects
bash EEG_stage2_highlevel.sh --gpu 0 --data_path [your EEG path] --save_model
# Train the stage-1 low-level models for the 10 subjects
bash EEG_stage1_lowlevel.sh --gpu 0 --data_path [your EEG path] --save_model
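If a run fails to find your data, a quick way to debug is to print what data_config.json actually contains. The key names below are hypothetical placeholders; use the real keys from the file itself:

```python
import json

# Illustrative only: the actual keys are defined by data_config.json in this
# repository; "eeg_data_path", "image_data_path", and "output_dir" below are
# hypothetical placeholder names.
with open("data_config.json") as f:
    cfg = json.load(f)

for key in ("eeg_data_path", "image_data_path", "output_dir"):
    print(key, "->", cfg.get(key, "<missing>"))
```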
These scripts create CSV files that store the configuration and the metric values across models and subjects; the first 30 reconstructions are also saved.
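To compare runs, the per-subject CSVs can be aggregated, e.g. with pandas. A sketch under assumed filenames and column names (adjust the glob pattern and the "subject" column to match your output layout):

```python
import glob
import pandas as pd

# Hypothetical sketch: filenames and column names depend on the scripts above.
frames = [pd.read_csv(p) for p in glob.glob("results/*.csv")]
metrics = pd.concat(frames, ignore_index=True)
# Average each numeric metric column per subject ("subject" column assumed).
print(metrics.groupby("subject").mean(numeric_only=True))
```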
# Low-level reconstruction
bash EEG_lowlevel_metrics.sh
# High-level reconstruction
bash EEG_highlevel_metrics.sh
# Two-level reconstruction
bash EEG_final_metrics.sh
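For reference, two low-level metrics commonly reported in this line of work are pixel-wise correlation (PixCorr) and SSIM. Below is a minimal sketch using scikit-image; the repository's scripts may compute a different or larger metric set:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Sketch of two common low-level metrics; not necessarily the exact set
# computed by the scripts above.
def pixcorr(gt: np.ndarray, rec: np.ndarray) -> float:
    """Pearson correlation between flattened pixel values."""
    return float(np.corrcoef(gt.ravel(), rec.ravel())[0, 1])

def ssim_score(gt: np.ndarray, rec: np.ndarray) -> float:
    """SSIM on grayscale copies of HxWxC images scaled to [0, 1]."""
    return float(ssim(gt.mean(-1), rec.mean(-1), data_range=1.0))
```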
This project is licensed under the MIT License - see the LICENSE file for details.
For questions or issues, please:
- Open an issue on GitHub
- Contact: [[email protected]]