
Why are CEBRA embeddings in 3D usually lying on a circle? #47

Answered by stes
timonmerk asked this question in Q&A

@timonmerk, this is a choice you can influence by picking the loss function and model properties.

The offset10-model with the cosine similarity metric will learn an embedding on the hypersphere, while the offset10-model-mse with Euclidean similarity gives you an embedding in Euclidean space (cf. e.g. the behavior embedding in Figure 2 of the paper, which is trained with this loss).
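To make the geometry concrete, here is a minimal NumPy sketch (not CEBRA code) of why a cosine-similarity objective confines 3D embeddings to a sphere: cosine similarity only depends on the direction of the output vectors, so the embedding is effectively L2-normalized onto the unit hypersphere, and a 2D view of such a 3D embedding often looks like a circle.

```python
import numpy as np

# Stand-in for raw 3D network outputs (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
raw = rng.normal(size=(100, 3))

# A cosine-similarity objective is invariant to vector length, so outputs
# can be L2-normalized; every embedded point then lies on the unit sphere.
emb = raw / np.linalg.norm(raw, axis=1, keepdims=True)

# All normalized points have unit norm, i.e. they sit on the hypersphere.
print(np.allclose(np.linalg.norm(emb, axis=1), 1.0))
```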

It depends a bit on which latent space is more suitable for your application, but you are free to use either in CEBRA.
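As a hedged sketch of how the two choices map onto CEBRA's scikit-learn-style API (the `model_architecture` and `distance` parameter names follow the CEBRA documentation; check them against your installed version), the two configurations from the answer above look roughly like this:

```python
# Spherical embedding: cosine similarity + offset10-model.
config_sphere = dict(
    model_architecture="offset10-model",
    distance="cosine",
    output_dimension=3,
)

# Euclidean embedding: MSE/Euclidean similarity + offset10-model-mse.
config_euclidean = dict(
    model_architecture="offset10-model-mse",
    distance="euclidean",
    output_dimension=3,
)

# With CEBRA installed, either config would be passed to the estimator, e.g.:
#   import cebra
#   model = cebra.CEBRA(**config_sphere)
#   model.fit(neural_data)            # neural_data is your own array
#   embedding = model.transform(neural_data)
print(config_sphere["distance"], config_euclidean["distance"])
```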

Answer selected by MMathisLab