Hi, thanks for a great repo!
In the AudioGen paper, the linear interpolation is done on the log probabilities.
In the code, however, it is done on the logits:
`audiocraft/audiocraft/models/lm.py`, line 362 at commit `adf0b04`
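For context, the interpolation at that line follows the usual classifier-free-guidance form, roughly (a sketch rather than a verbatim excerpt; the tensor names here are assumptions):

```python
# Logit-space mix of the conditional and unconditional predictions (CFG-style).
logits = uncond_logits + (cond_logits - uncond_logits) * cfg_coef
```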
If I understand correctly, `logits` and `log_probs` are not equivalent, as `log_probs` must satisfy `torch.exp(log_probs).sum() == 1`, which is equivalent to `torch.logsumexp(log_probs) == 0`. To get the log probabilities from the logits we need to apply `log_softmax`, which ensures this property.

Using either logits or log_probs works quite well.
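As a minimal illustration, here is a sketch (PyTorch; the helper name `cfg_interpolate` and the toy shapes are made up) that applies `log_softmax` before the mix and checks the normalization property described above:

```python
import torch

def cfg_interpolate(cond_logits: torch.Tensor, uncond_logits: torch.Tensor,
                    cfg_coef: float) -> torch.Tensor:
    """Interpolate in log-probability space instead of logit space.

    log_softmax normalizes over the last dim, so each row x of its output
    satisfies torch.exp(x).sum() == 1, i.e. torch.logsumexp(x, dim=-1) == 0.
    """
    cond_logp = torch.log_softmax(cond_logits, dim=-1)
    uncond_logp = torch.log_softmax(uncond_logits, dim=-1)
    return uncond_logp + (cond_logp - uncond_logp) * cfg_coef

# Toy check of the property and of the resulting mix.
cond = torch.randn(1, 8)    # fake conditional logits over a tiny vocabulary
uncond = torch.randn(1, 8)  # fake unconditional logits
mixed = cfg_interpolate(cond, uncond, cfg_coef=3.0)

print(torch.logsumexp(torch.log_softmax(cond, dim=-1), dim=-1))  # ~0, per the property above
# Note: the mix itself is generally unnormalized when cfg_coef != 1,
# so sampling still applies a softmax over it afterwards.
print(torch.softmax(mixed, dim=-1).sum(dim=-1))  # ~1 after renormalization
```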
My tests show some benefits for applying `log_softmax` before the interpolation.

Avihu