Thank you very much for providing the code for CEBRA! I would like to use CEBRA to examine changes in embeddings as mice (each mouse separately, at first) learn to track a visual object over several recording sessions. I have two-photon calcium imaging data from a varying number of cells over four separate sessions. Each session consists of 50 trials during which I record the continuous position of the visual object as well as continuous body kinematics (whole-body, whisker, and paw motion, which are correlated with and predictive of visual object position). I am wondering whether training a model on data from later sessions produces embeddings that reflect the position of the visual object more accurately than data from earlier sessions. So far, I have set up a model using joint training (in this example, for an early and a late session). The behavioral variables used during fitting include body kinematics and visual object position:

```python
import cebra

multi_cebra_model_temp = cebra.CEBRA(
    model_architecture="offset10-model",
    conditional="time_delta",
    distance="cosine",
    batch_size=1024,
    output_dimension=6,
    max_iterations=10000,
    max_adapt_iterations=100,
    temperature=1,
    temperature_mode="auto",
    min_temperature=0.1,
    learning_rate=0.001,
    time_offsets=500,
    device="cuda_if_available",
    verbose=True,
)

multi_cebra_model_temp.fit(
    [neural_data_session1, neural_data_session4],
    [behavioral_data_session1, behavioral_data_session4],
)
```

Joint training produces very similar embeddings. However, when I train models on early and late sessions separately, the embeddings seem to suggest that late sessions more accurately reflect the position of the visual object. I have not yet decoded the actual position of the visual object. My questions are:
Hi @cgaletzka, thanks a lot for your questions. This sounds like a great application setup for CEBRA. Before going into the different items, one clarification:
This is expected, and it is the purpose of joint training: you explicitly align embeddings across the different sessions/animals passed to the model (see the short sketch below). So individual training is what you need to address your research question here.
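For reference, a two-line sketch using the names from your question: with a jointly trained multi-session model, per-session embeddings are obtained by passing a `session_id` to `transform`, and they are aligned across sessions by construction.

```python
# Per-session embeddings from the jointly trained multi-session model;
# these are explicitly aligned across sessions. session_id follows the
# order of the lists passed to fit() (names from the question above).
emb_s1_joint = multi_cebra_model_temp.transform(neural_data_session1, session_id=0)
emb_s4_joint = multi_cebra_model_temp.transform(neural_data_session4, session_id=1)
```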
To your specific questions:

You would train separate embeddings for each condition, as in the sketch after this paragraph. Make sure to control for important confounding variables, e.g., the number of neurons included (if applicable), and also estimate the differences in the metrics both across animals and across sessions.
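A minimal sketch of this per-session setup; variable names follow the question above, and the hyperparameters are illustrative rather than recommendations:

```python
import cebra

# Train one independent single-session model per session, keeping the same
# hyperparameters so the resulting embeddings remain comparable.
sessions = {
    "session1": (neural_data_session1, behavioral_data_session1),
    "session4": (neural_data_session4, behavioral_data_session4),
}

models, embeddings = {}, {}
for name, (neural, behavior) in sessions.items():
    model = cebra.CEBRA(
        model_architecture="offset10-model",
        conditional="time_delta",
        distance="cosine",
        batch_size=1024,
        output_dimension=6,
        max_iterations=10000,
        temperature_mode="auto",
        min_temperature=0.1,
        time_offsets=500,
        device="cuda_if_available",
    )
    # Single-session fit: one neural array, one behavioral label array.
    model.fit(neural, behavior)
    models[name] = model
    embeddings[name] = model.transform(neural)
```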
Both make sense. To compare embeddings directly, you might want to consider the consistency metric or the InfoNCE loss; comparing decoding accuracy on top is an additional good way to compare (a sketch follows below). As an example (from a plot comparing different brain areas vs. different sessions, where the idea is exactly the same), obtaining numbers for the middle row of the confusion matrix is really critical. In your scenario, you might want to estimate consistency across animals within the same session and across sessions within the same animal, and try to disambiguate these two factors in your statistical analysis.
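A rough sketch of these comparisons, assuming the `models`/`embeddings` dicts from the sketch above and hypothetical 1D object-position arrays `position_session1`/`position_session4`; the metric functions come from CEBRA's sklearn integration, so please check the current API docs for exact signatures:

```python
import cebra.integrations.sklearn.metrics as cebra_metrics
from cebra import KNNDecoder

# Consistency between the per-session embeddings, aligned via the shared
# continuous label (here: visual object position).
scores, pairs, datasets = cebra_metrics.consistency_score(
    embeddings=[embeddings["session1"], embeddings["session4"]],
    labels=[position_session1, position_session4],
    dataset_ids=["session1", "session4"],
    between="datasets",
)

# Goodness of fit: InfoNCE loss of a trained model evaluated on its own
# session (lower is better).
infonce_s4 = cebra_metrics.infonce_loss(
    models["session4"],
    neural_data_session4,
    behavioral_data_session4,
    num_batches=500,
)

# Decoding accuracy on top: k-NN decoding of object position from the
# embedding, with a simple train/test split along time.
emb, pos = embeddings["session4"], position_session4
split = int(0.8 * len(emb))
decoder = KNNDecoder(n_neighbors=36, metric="cosine")
decoder.fit(emb[:split], pos[:split])
test_score = decoder.score(emb[split:], pos[split:])
```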
This might be possible as well, but it depends on the nuances. Could you check whether my replies above already address your use case, and then potentially get back to this question? If this addresses your questions, feel free to mark this as the answer. Otherwise, happy to discuss more!