Hi,

Thank you so much for this work.

I am running speaker diarization on an English dataset, with a uisrnn model trained on data from a custom set of English speakers. When I run inference multiple times on the same wav file, I get different start_time and end_time values and a different number of chunks per speaker on every run.

I tried to debug the code and found that the speaker embedding (VGG-based) model produces a different feature vector each time it is run on the same audio. Since the speaker embeddings change, the uisrnn model follows suit, and the final diarization result differs from run to run.

If anyone has hit the same kind of problem, please help me with this.

Thanks
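For reference, here is a minimal sketch of the kind of check I mean: pin the random seeds, then extract embeddings from the same file twice and compare. `extract_embedding` is a hypothetical placeholder for whatever function wraps the VGG model in your setup, and the commented seeding lines only apply to whichever framework your embedding model actually uses.

```python
import random

import numpy as np

# Pin the Python-level RNGs first.
random.seed(42)
np.random.seed(42)

# Depending on the framework behind the embedding model, also pin its seeds:
#   PyTorch:     import torch; torch.manual_seed(42)
#                torch.use_deterministic_algorithms(True)
#   TensorFlow:  import tensorflow as tf; tf.random.set_seed(42)


def embeddings_are_stable(extract_embedding, wav_path, atol=1e-6):
    """Run the embedding extractor twice on the same wav and compare.

    If this returns True, any remaining run-to-run variation must come
    from a later stage (e.g. uisrnn), not from the embeddings.
    """
    emb_a = extract_embedding(wav_path)
    emb_b = extract_embedding(wav_path)
    return np.allclose(emb_a, emb_b, atol=atol)
```

If the embeddings still differ after seeding, the usual suspects are dropout left active at inference time (in PyTorch, call `model.eval()` before extraction) or random cropping/segmentation of the audio before it reaches the network.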