SoundStorm is a single model that models all codebooks hierarchically. It is not two models, where the first models only the first codebook and the second models the rest.
Please kindly refer to the AudioLM and SoundStorm papers for their implementations, which as I understand involve more than a single model. Thanks!
In the SoundStorm paper, the semantic tokens are already obtained from AudioLM; those tokens are equivalent to the T2S model output in MaskGCT. Their S2A model, which is SoundStorm itself, is indeed a single model that generates all RVQ layers hierarchically. You may be confusing AudioLM with a model that only generates the first RVQ codebook, which is why you split the S2A stage into two models.
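As a rough illustration of the point (all names, shapes, and the `predict` stand-in are made up for this sketch, not taken from either codebase), a single masked model can fill in every RVQ layer coarse-to-fine, so no second model is needed for the later codebooks:

```python
import random

NUM_LAYERS = 4      # number of RVQ codebooks (hypothetical)
SEQ_LEN = 8         # acoustic token sequence length (hypothetical)
CODEBOOK_SIZE = 16  # entries per codebook (hypothetical)
MASK = -1           # sentinel for a masked position

def predict(semantic, acoustic, layer):
    """Stand-in for one masked transformer pass: fills the masked
    positions of `layer`, conditioned on the semantic tokens and on
    all acoustic layers generated so far."""
    return [random.randrange(CODEBOOK_SIZE) if t == MASK else t
            for t in acoustic[layer]]

def generate(semantic):
    # One model handles every RVQ layer, decoded hierarchically:
    # layer 0 (coarsest) first, then each finer layer in turn.
    acoustic = [[MASK] * SEQ_LEN for _ in range(NUM_LAYERS)]
    for layer in range(NUM_LAYERS):
        acoustic[layer] = predict(semantic, acoustic, layer)
    return acoustic

tokens = generate(semantic=[3, 1, 4, 1, 5, 9, 2, 6])
assert all(MASK not in row for row in tokens)
```

The two-model variant discussed in this thread would simply split the loop: one network for `layer == 0` and a second for the remaining layers. The conditioning and decoding order stay the same either way.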
In fact, the reason we used two models was simply that it was easier to debug in the initial experimental stage (we only needed to generate the acoustic token layer to reconstruct speech). We tried using one model and saw no significant performance drop, so I don't think it makes much of a difference.