wormvae

We provide the official PyTorch implementation of the connectome-constrained latent variable model of whole-brain neural activity.

The details of the algorithm are presented in the paper Connectome-constrained Latent Variable Model of Whole-brain Neural Activity, ICLR 2022.

Prerequisites

  • Linux or macOS
  • Python 3
  • CPU or NVIDIA GPU + CUDA CuDNN

Installation

  • Clone this repo:
git clone https://github.com/TuragaLab/wormvae.git
cd wormvae
  • Install dependencies using conda env create -f environment.yml.
  • Activate the conda environment using conda activate wormvae.
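
To verify the installation, you can check that PyTorch imports inside the activated environment (a quick sanity check, assuming environment.yml installs PyTorch, as implied by the PyTorch implementation above):

python -c "import torch; print(torch.__version__)"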

Dataset Visualization

Worm Connectivity

Worm Activity

Script Usage

Model Training

Run the following script in experiment/ to train a latent variable model of whole-brain neural activity.

python main.py --neuron_holdout list_of_neuron_holdout --train_worm list_of_worm_id --model_type model_type --constraint constraint --random_init_index random_init_index 
  • --neuron_holdout defines the set of neurons held out during training. Specify a list of neuron names (typically a pair of symmetric neurons), e.g. --neuron_holdout ['BAGL','BAGR'].
  • --train_worm defines the set of worms to train on. Specify a list of worm ids, e.g. --train_worm [1].
  • --model_type defines the type of synapse model. Use --model_type conductance for a conductance-based model and --model_type current for a current-based model.
  • --constraint defines the constraint on the synapse weights. --constraint weight uses connectome synapse counts to constrain the weights; --constraint sparsity uses connectome sparsity (a binarized synapse matrix) to constrain which weights are nonzero, while their magnitudes remain trainable; --constraint unconstrained applies no connectome constraint, so the weight matrix is fully connected and its magnitudes are trainable.
  • --random_init_index defines the index of the random initialization for each training trial. In the paper, we use 4 different random initializations to evaluate the models, e.g. --random_init_index 0. A full example invocation is shown after this list.
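
For example, a full training command that holds out the BAG pair, trains on worm 1 with a conductance-based model under the connectome weight constraint, and uses the first random initialization would look like:

python main.py --neuron_holdout ['BAGL','BAGR'] --train_worm [1] --model_type conductance --constraint weight --random_init_index 0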

Results

Checkpoints are saved as experiment/checkpoints/task_name.pt, training logs are saved as experiment/logs/task_name.log, and training loss trajectories are saved as experiment/loss_trajectories/task_name.pickle. task_name is automatically constructed from the argument options (neuron holdout, train worm, model type, constraint, etc.) described above.
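
As a sketch of how these outputs can be inspected afterwards (assuming the pickle stores an ordinary Python object such as a list or dict of loss values; the exact structure is defined by the training script):

import pickle
import torch

task_name = "example_task"  # hypothetical name; use the task_name generated from your argument options

# Load the training loss trajectory (structure depends on the training script).
with open("experiment/loss_trajectories/" + task_name + ".pickle", "rb") as f:
    loss_trajectory = pickle.load(f)

# Load the saved model checkpoint onto the CPU.
checkpoint = torch.load("experiment/checkpoints/" + task_name + ".pt", map_location="cpu")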

Sample Results

  • Neuron holdout results under different synapse model types and levels of connectome constraints.
  • Measured fluorescence traces and inferred voltages for the whole brain (300 neurons), including unrecorded neurons.
