Inference of feature importance via time-series perturbation approach? #133
-
Hello, CEBRA team! I'm currently using CEBRA for my MS thesis, and I believe there is significant room for improvement in one area: interpreting which input features (such as neurons or EEG channels) are primarily engaged in defining the latent dimensions.

To address this limitation, I propose leveraging a time-series perturbation approach such as Dynamask. Consider a scenario where I have extracted CEBRA embeddings of shape (t, m) from an input of shape (t, n), where t is the number of timepoints, m is the number of latent dimensions, and n is the number of input features, with n > m. A saliency mask of shape (t, n), learned with a perturbation-based Explainable AI (XAI) approach, would let us approximate which features at timestep T predominantly contributed to the m embedding values at that same timestep.

I'm curious to hear your thoughts on the potential of integrating a perturbation-based XAI approach into CEBRA, and on how the resulting XAI insights would align with our understanding of the data. Looking forward to your feedback!
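For concreteness, here is a minimal sketch of the occlusion-style variant of this idea, which is a simpler stand-in for Dynamask (Dynamask instead learns a smooth differentiable mask by gradient descent): perturb one input feature at a time, re-embed, and measure how far each timestep's embedding moves. The data `X`, the hyperparameters, and the choice of the per-feature temporal mean as the "neutral" baseline are all hypothetical placeholders.

```python
import numpy as np
from cebra import CEBRA

# Placeholder data: (t, n) = (1000 timepoints, 30 input features).
X = np.random.randn(1000, 30).astype(np.float32)

# Fit a CEBRA-Time model and embed the data into m = 3 latent dimensions.
model = CEBRA(model_architecture="offset10-model",
              output_dimension=3,
              batch_size=512,
              max_iterations=500)
model.fit(X)
Z = model.transform(X)  # embeddings of shape (t, m)

# Occlusion-style saliency: replace one feature with a neutral baseline,
# re-embed, and record the per-timestep embedding displacement.
saliency = np.zeros_like(X)   # resulting (t, n) saliency map
baseline = X.mean(axis=0)     # per-feature temporal mean as the neutral value
for j in range(X.shape[1]):
    X_pert = X.copy()
    X_pert[:, j] = baseline[j]                           # occlude feature j
    Z_pert = model.transform(X_pert)
    saliency[:, j] = np.linalg.norm(Z - Z_pert, axis=1)  # impact at each t
```

One caveat under these assumptions: with a receptive-field architecture like `offset10-model`, the embedding at timestep T depends on a window of timesteps around T, so the saliency at T reflects the feature's contribution over that window rather than at T alone.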
Replies: 1 comment
-
Hi @Jinwoo-Yi, thanks for the feedback! We agree there is a ton more to do, and we are actively working in this direction. Please see our latest paper, which we will integrate into the CEBRA package: https://sslneurips23.github.io/paper_pdfs/paper_80.pdf