-
Hi @Jinwoo-Yi, this is due to the low number of bins,
-
I am encountering a similar issue when trying to compute the consistency score across 12 sessions. Each session involves the presentation of two image frames, labeled either 3 or 8. As a result, my labels are structured as a 2D array with 12 rows (one per session) and approximately 1000 columns, where each element is either 3 or 8.
The problem arises when I set num_discretization_bins to 2: I receive a warning, and the consistency score does not produce any meaningful result. For num_discretization_bins values between 3 and 6, all consistency scores are 99%. Could you please help clarify what might be causing these issues and how I could adjust the parameters to compute the consistency score correctly?
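For reference, my call is roughly the sketch below. The data and variable names are illustrative placeholders (not my real embeddings), and it assumes the `cebra.sklearn.metrics.consistency_score` interface with per-session embeddings and matching label arrays:

```python
import numpy as np
import cebra

rng = np.random.default_rng(0)

# Illustrative stand-ins: 12 sessions, ~1000 samples each, 3-D embeddings.
n_sessions, n_samples, n_dims = 12, 1000, 3
embeddings = [rng.standard_normal((n_samples, n_dims)) for _ in range(n_sessions)]

# One label vector per session; each entry is either 3 or 8 (the two image frames).
labels = [rng.choice([3, 8], size=n_samples) for _ in range(n_sessions)]

# Between-dataset consistency with two discretization bins
# (the setting that produces the warning described above).
scores, pairs, ids = cebra.sklearn.metrics.consistency_score(
    embeddings=embeddings,
    labels=labels,
    between="datasets",
    num_discretization_bins=2,
)
print(scores)
```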
-
Dear CEBRA team,
I am currently preparing my thesis using your excellent package, for which I am deeply grateful.
In my work, embeddings were learned by contrasting discrete labels (with four levels). To calculate the consistency score across datasets, I set n_bins = 4, following the recent discussion here. However, I encountered an issue where all pairs' consistency scores were estimated as 1, an extremely high R-squared value.
Additionally, I observed the same result when calculating this metric using (1) pseudo-CEBRA embeddings with permuted discrete labels and (2) UMAP embeddings. I have attached the script used to compute the consistency score for your reference.
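To make the permuted-label control concrete, here is a minimal sketch (separate from the attached script, using placeholder data and assuming the `cebra.sklearn.metrics.consistency_score` interface) of how I shuffle the discrete labels before re-computing the score:

```python
import numpy as np
import cebra

rng = np.random.default_rng(0)

# Placeholder data: a few datasets, 4-level discrete labels, 3-D embeddings.
n_datasets, n_samples, n_dims = 3, 500, 3
embeddings = [rng.standard_normal((n_samples, n_dims)) for _ in range(n_datasets)]
labels = [rng.integers(0, 4, size=n_samples) for _ in range(n_datasets)]

# Control: permute each dataset's labels independently before scoring.
permuted_labels = [rng.permutation(lab) for lab in labels]

scores, pairs, ids = cebra.sklearn.metrics.consistency_score(
    embeddings=embeddings,
    labels=permuted_labels,
    between="datasets",
    num_discretization_bins=4,
)
print(np.mean(scores))
```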
Could you assist me in identifying any potential issues here?
Thank you very much for your continuous support and active interactions with users.
Sincerely,
Jinwoo