
Questions about implementing behavior decoding based on CEBRA #193

Open
zhuyu2025 opened this issue Oct 31, 2024 · 4 comments
@zhuyu2025

Thank you for developing CEBRA! The framework has been incredibly helpful for neural data analysis. While implementing CEBRA for my research project, I encountered some questions about the position decoding implementation that I hope you could help clarify.

Regarding the CEBRA-Behavior results in Figure 3h (trained with x,y position), I'm trying to understand the exact decoding procedure. While exploring the documentation and demo notebooks (e.g., https://cebra.ai/docs/demo_notebooks/Demo_primate_reaching.html), I couldn't find the specific decoding implementation details.

I would be very grateful if you could:

  1. Clarify whether the decoding process uses:
    • Embeddings derived from neural activity alone, or
    • Embeddings that incorporate both neural and behavioral data
  2. If possible, provide the decoding implementation used for Figure 3h

This would help ensure I'm implementing CEBRA correctly in my own analyses and taking full advantage of its capabilities. I believe other researchers in the community would also benefit from this clarification.

Thank you for your time and continued support of the CEBRA project!

@MMathisLab
Member

Thanks for using CEBRA! Here are more details outlining the embedding training and plot details: https://cebra.ai/docs/cebra-figures/figures/Figure3.html#Figure-3h

But I agree we can explain the kNN decoding better here and show a code example; cc @jinhl9
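
In the meantime, here is a rough sketch of the idea (this is not the exact Figure 3h pipeline; the data, dimensions, and hyperparameters below are placeholders): train the CEBRA-Behavior encoder on neural data with position labels, compute the embedding from neural activity alone, then fit a kNN regressor on the embedding.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

import cebra

# Placeholder data: neural activity (time x neurons) and (x, y) position labels.
neural = np.random.randn(1000, 120).astype("float32")
position = np.random.randn(1000, 2).astype("float32")

# Train a CEBRA-Behavior encoder. The position labels shape the embedding
# during training, but the learned encoder itself maps neural activity
# to the latent space.
model = cebra.CEBRA(
    model_architecture="offset10-model",
    batch_size=512,
    output_dimension=8,
    max_iterations=1000,  # kept small for this sketch
)
model.fit(neural, position)

# The embedding is computed from neural data alone.
embedding = model.transform(neural)

# Decode (x, y) with a kNN regressor fit on the embedding.
decoder = KNeighborsRegressor(n_neighbors=25, metric="cosine")
decoder.fit(embedding, position)
predicted_position = decoder.predict(embedding)
```

Note that for a fair evaluation the decoder should be scored on a held-out split rather than the training data.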

@zhuyu2025
Author

Thank you for your quick response! However, the code provided focuses on visualizing the results. I'm specifically interested in the actual position decoding process: how to obtain decoded positions from neural data, rather than how to plot existing results.

Would you be able to share some insights on the decoding implementation, especially for the (x,y) decoding?

Thank you for your help!

@MMathisLab
Member

Yep! As I said, the figure notebook answers your question 1 about how the encoders are trained. We can provide further details on the decoding, but in the meantime, it's the same as in the decoding notebook (just with a different demo dataset).

@zhuyu2025
Author

Thank you for your help and clarification! I have one last question about the behavior decoding approach, as I need to move forward with my experiments soon. Since CEBRA-Behavior in Figure 3h is trained using neural activity together with behavior labels (both active and passive movements), the embeddings inherently contain behavioral information. I'm concerned that decoding (x,y) from these behavior-informed (or (x,y)-informed) embeddings might constitute label leakage in machine learning terms. However, I may be misinterpreting the implementation details and would appreciate your clarification on this point. Thanks!
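
For concreteness, the protocol I had in mind to rule out leakage is a strict train/test split, sketched below (the names and settings are placeholders, not your Figure 3h pipeline): behavior labels touch only the training split, and test embeddings are computed from neural activity alone.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

import cebra

# Placeholder data, as in the sketch above.
neural = np.random.randn(1000, 120).astype("float32")
position = np.random.randn(1000, 2).astype("float32")

# Strict train/test split in time.
split = int(0.8 * len(neural))
neural_train, neural_test = neural[:split], neural[split:]
pos_train, pos_test = position[:split], position[split:]

# Behavior labels are used only on the training split, to train the encoder.
model = cebra.CEBRA(
    model_architecture="offset10-model",
    batch_size=512,
    output_dimension=8,
    max_iterations=1000,  # kept small for this sketch
)
model.fit(neural_train, pos_train)

# At test time the embedding is computed from neural activity alone;
# no (x, y) labels from the test set enter the pipeline.
emb_train = model.transform(neural_train)
emb_test = model.transform(neural_test)

decoder = KNeighborsRegressor(n_neighbors=25, metric="cosine")
decoder.fit(emb_train, pos_train)
pos_pred = decoder.predict(emb_test)

# Report held-out decoding error.
mse = np.mean((pos_pred - pos_test) ** 2)
print(f"held-out position MSE: {mse:.4f}")
```

If this matches your setup, my concern would be resolved, since the labels only shape the encoder during training and no test-set (x,y) information reaches the decoder.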
