Questions about implementing behavior decoding based on CEBRA #193
Thanks for using CEBRA! Here are more details outlining the embedding training and plotting details: https://cebra.ai/docs/cebra-figures/figures/Figure3.html#Figure-3h
That said, I agree we could explain the kNN decoding better here and show a code example; cc @jinhl9
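To illustrate the kNN decoding step mentioned above, here is a minimal numpy-only sketch. It assumes the embeddings have already been computed with a trained model; the array names (`train_emb`, `train_pos`, `test_emb`) and the toy data are hypothetical stand-ins, not CEBRA's actual API.

```python
import numpy as np

def knn_decode(train_emb, train_pos, test_emb, k=5):
    """Decode (x, y) positions from embeddings with a simple kNN regressor.

    For each test embedding, find the k nearest training embeddings
    (Euclidean distance) and average their associated positions.
    """
    # Pairwise squared Euclidean distances: shape (n_test, n_train)
    d2 = ((test_emb[:, None, :] - train_emb[None, :, :]) ** 2).sum(-1)
    # Indices of the k nearest training samples for each test sample
    nn = np.argsort(d2, axis=1)[:, :k]
    # Average the neighbours' positions to obtain the decoded position
    return train_pos[nn].mean(axis=1)

# Toy example: a stand-in "embedding" that varies smoothly with position
rng = np.random.default_rng(0)
pos = rng.uniform(0, 1, size=(200, 2))          # ground-truth (x, y)
emb = np.concatenate([pos, pos ** 2], axis=1)   # hypothetical embedding
decoded = knn_decode(emb[:150], pos[:150], emb[150:], k=3)
print(decoded.shape)  # (50, 2)
```

In practice the embeddings would come from `transform` calls on a trained CEBRA model, and a library kNN regressor could replace the hand-rolled distance computation; the averaging-over-neighbours logic stays the same.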
Thank you for your quick response! However, the code you pointed to focuses on visualizing the results. I'm specifically interested in the actual position-decoding process: how to obtain decoded positions from neural data, rather than plotting existing results. Could you share some insights on the decoding implementation, especially for (x, y) decoding? Thank you for your help!
Yep! As I said, the figure notebook answers your first question about how the encoders are trained. We can provide further details on the decoding, but in the meantime it is the same as in the decoding notebook (just with a different demo dataset).
Thank you for your help and clarification! I have one last question about the behavior-decoding approach, as I need to move forward with my experiments soon. Since CEBRA-Behavior in Figure 3h is trained on neural activity together with both active and passive movements, the embeddings inherently contain behavioral information. I'm concerned that decoding (x, y) from these behavior-informed (or (x, y)-informed) embeddings might constitute label leakage in machine-learning terms. However, I may be misinterpreting the implementation details and would appreciate your clarification on this point. Thanks!
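The leakage concern above is usually addressed by splitting the data before any fitting: behavior labels enter only through training the encoder on the train split, and the held-out labels are used solely for evaluation. A numpy-only sketch of that discipline, with a simple least-squares linear map standing in for the (nonlinear) CEBRA encoder and all data hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins: neural activity and (x, y) positions.
neural = rng.normal(size=(300, 40))
pos = neural[:, :2] + 0.1 * rng.normal(size=(300, 2))

# Split FIRST, so test labels never influence any fitting step.
train, test = slice(0, 240), slice(240, 300)

# Stand-in "encoder": a linear map fit on the training split only.
# (A behavior-informed encoder would be fit here, using only the
# train-split neural data and train-split labels.)
W, *_ = np.linalg.lstsq(neural[train], pos[train], rcond=None)
emb_train = neural[train] @ W
emb_test = neural[test] @ W   # encoder never sees test labels

# kNN decoding: fit on train embeddings/labels, evaluate on test.
d2 = ((emb_test[:, None, :] - emb_train[None, :, :]) ** 2).sum(-1)
nn = np.argsort(d2, axis=1)[:, :1]
decoded = pos[train][nn].mean(axis=1)

err = np.abs(decoded - pos[test]).mean()
print(round(float(err), 3))
```

Under this protocol the embeddings are indeed shaped by the training labels, but that is by design; leakage would occur only if test-split labels influenced the encoder or the decoder fit.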
Thank you for developing CEBRA! The framework has been incredibly helpful for neural data analysis. While implementing CEBRA for my research project, I encountered some questions about the position decoding implementation that I hope you could help clarify.
Regarding Figure 3h's CEBRA-Behavior results (trained with x, y position), I'm trying to understand the exact decoding procedure. While exploring the documentation and demo notebooks (e.g., https://cebra.ai/docs/demo_notebooks/Demo_primate_reaching.html), I couldn't find the specific decoding implementation details.
I would be very grateful if you could:
This would help ensure I'm implementing CEBRA correctly in my own analyses and taking full advantage of its capabilities. I believe other researchers in the community would also benefit from this clarification.
Thank you for your time and continued support of the CEBRA project!