I have been trying to run this SDK for multispeaker transcript generation from a WAV file. However, the results are really poor; not even a single word is identified correctly. Am I missing something, or does this SDK not support two speakers?
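In my experience, completely garbled output (rather than merely inaccurate output) usually means the audio parameters declared to the service don't match the file: a wrong `media_sample_rate_hz`, a non-PCM or non-16-bit file, or stereo audio where mono is expected. This is an assumption about your setup, but it's cheap to rule out with a quick header check using only the standard library:

```python
import wave

def check_wav_for_transcribe(path, declared_rate=16000):
    """Compare a WAV file's header against the parameters you intend
    to declare to the streaming API; returns a list of mismatches."""
    problems = []
    with wave.open(path, "rb") as w:
        if w.getframerate() != declared_rate:
            problems.append(
                f"file is {w.getframerate()} Hz but {declared_rate} Hz was declared"
            )
        if w.getsampwidth() != 2:
            problems.append(
                f"file is {8 * w.getsampwidth()}-bit; 16-bit PCM is expected"
            )
        if w.getnchannels() != 1:
            # Assumption: streaming transcription generally expects mono
            # audio unless channel identification is configured.
            problems.append(f"file has {w.getnchannels()} channels, not mono")
    return problems
```

If this reports a sample-rate mismatch, either resample the file or declare the file's real rate; a mismatch there produces exactly the "random, unrelated words" symptom described.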
Same issue here. Passing only `show_speaker_label=True` to `client.start_stream_transcription` does not work. Output is generated: the `transcript` field on the `alternatives[0]` objects is returned, as is the `speaker` field from `alternatives[0].items[0].speaker`. But, as aj7tesh said, there is no match at all with the speech; it reads like a transcription of entirely different audio.
Is there any other parameter configuration that needs to be passed as input?
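For comparison, here is roughly how I am feeding the stream. The key points (assumptions on my part, worth verifying against the SDK version you use) are that the payload must be raw PCM sent in small chunks, and that accidentally streaming the WAV container header as audio can corrupt the transcript. A minimal chunking helper, with the call itself sketched in comments since it needs AWS credentials to run:

```python
def pcm_chunks(pcm: bytes, chunk_size: int = 8 * 1024):
    """Split raw PCM audio into fixed-size chunks for streaming.

    chunk_size here is an illustrative value, not a documented
    requirement of the service.
    """
    for i in range(0, len(pcm), chunk_size):
        yield pcm[i:i + chunk_size]

# Hedged sketch of the call itself (names from the Python
# amazon-transcribe package; verify against your installed version):
#
#   client = TranscribeStreamingClient(region="us-east-1")
#   stream = await client.start_stream_transcription(
#       language_code="en-US",
#       media_sample_rate_hz=16000,   # must equal the WAV's real rate
#       media_encoding="pcm",         # raw PCM samples, not the WAV container
#       show_speaker_label=True,
#   )
#   for chunk in pcm_chunks(raw_pcm):
#       await stream.input_stream.send_audio_event(audio_chunk=chunk)
#   await stream.input_stream.end_stream()
```

Note that `raw_pcm` above should be the sample data only (e.g. `wave.Wave_read.readframes`), not the full file bytes.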