First of all, I'm so glad this project exists! I initially attempted to complete my project directly in Kotlin on the JVM, but the WebRTC support, even when wrapping the Google library, is not great - and using this library has been a joy in comparison.
Now, I'm able to use this library for two-way real-time audio. One end runs as a service, and the other end is made up of regular WebRTC clients (browsers/apps). While this works, I want to make sure that I'm not missing any best practices, especially when it comes to audio.
I am using Opus only and have been using https://docs.rs/opus/latest/opus/ for this. It works, although the library itself (and the libopus version it wraps) is quite old. I've seen the examples in this repo that save audio to disk etc., but these only save the frames as-is; they do not do any decoding. Some specific questions:
Is this the best library for my application in Rust?
Do I need to do anything more than taking the frames as read from read_rtp and passing them to the decoder? Do I need to manually re-order frames or detect packet loss explicitly?
How do I leverage FEC? I see that the example code requests in-band FEC with Opus, but how do I use this when decoding?
@SoftMemes using this package works quite well (an alternative is this library).
To make a voice chat application you need to:
capture the mic (use cpal)
encode (opus .encode; this is where you can enable FEC in the encoder config)
send each encoded packet via write_rtp to the other peer
receive via read_rtp
pass each packet to the decoder (if FEC is enabled and you detect that the previous packet was lost, run decode twice on the next packet: once with fec=true to recover the lost frame, then once with fec=false for the current frame; see the sketch after this list)
add the decoded audio samples into a ring buffer (or a Vec)
play a chunk via cpal into the system audio output
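A minimal sketch of the FEC-related encode/decode steps, assuming the opus crate (docs.rs/opus) mentioned above, 48 kHz mono audio, and fixed 20 ms frames; the frame size and loss percentage here are placeholder values, and the setter names follow that crate's API, so double-check them against the docs:

```rust
// Sketch only: Opus encoder/decoder setup with in-band FEC, using the opus crate.
// Sample rate, channel count, frame size and loss percentage are assumptions.
use opus::{Application, Channels, Decoder, Encoder};

const SAMPLE_RATE: u32 = 48_000;
const FRAME_SIZE: usize = 960; // 20 ms per frame at 48 kHz, mono

fn build_codec() -> Result<(Encoder, Decoder), opus::Error> {
    let mut enc = Encoder::new(SAMPLE_RATE, Channels::Mono, Application::Voip)?;
    enc.set_inband_fec(true)?;     // embed redundant data for the previous frame
    enc.set_packet_loss_perc(20)?; // expected loss rate the encoder should plan for
    let dec = Decoder::new(SAMPLE_RATE, Channels::Mono)?;
    Ok((enc, dec))
}

// Decode one received packet. If the previous packet was lost, first recover it
// from the FEC data carried in this packet (fec = true), then decode this
// packet itself as usual (fec = false).
fn decode_packet(
    dec: &mut Decoder,
    packet: &[u8],
    previous_lost: bool,
    out: &mut Vec<i16>,
) -> Result<(), opus::Error> {
    let mut pcm = [0i16; FRAME_SIZE];
    if previous_lost {
        let n = dec.decode(packet, &mut pcm, true)?; // recovers the *previous* frame
        out.extend_from_slice(&pcm[..n]);
    }
    let n = dec.decode(packet, &mut pcm, false)?; // the current frame
    out.extend_from_slice(&pcm[..n]);
    Ok(())
}
```

For the FEC decode to recover a frame, the output buffer has to cover the duration of the lost frame, which is simplest when you stick to a fixed frame size on the sending side.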
This is a rough overview of what you need. webrtc-rs and str0m both ship with a basic packet re-orderer that might work well enough for you, although you'd likely still need a jitter buffer (or write one yourself).
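For detecting the lost packet in the first place, a simple approach is to watch the RTP sequence numbers coming out of read_rtp. A rough sketch, assuming a webrtc-rs TrackRemote and the hypothetical decode_packet helper from the previous snippet; it only handles a single-packet gap and assumes packets otherwise arrive in order (a real jitter buffer would also handle re-ordering):

```rust
use std::sync::Arc;
use webrtc::track::track_remote::TrackRemote;

// Sketch only: pull RTP packets, flag a one-packet gap so the FEC path above can
// recover the missing frame, and hand the PCM on to a playout buffer.
async fn receive_audio(
    track: Arc<TrackRemote>,
    dec: &mut opus::Decoder,
) -> Result<(), opus::Error> {
    let mut last_seq: Option<u16> = None;
    let mut pcm: Vec<i16> = Vec::new();
    while let Ok((pkt, _attrs)) = track.read_rtp().await {
        let seq = pkt.header.sequence_number;
        // exactly one packet missing between the last one and this one
        let previous_lost = matches!(last_seq, Some(prev) if seq.wrapping_sub(prev) == 2);
        last_seq = Some(seq);

        pcm.clear();
        decode_packet(dec, &pkt.payload, previous_lost, &mut pcm)?;
        // push pcm into your ring/jitter buffer here (see the playback sketch below)
    }
    Ok(())
}
```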
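And for the last two steps, a shared buffer consumed by a cpal output stream. This sketch assumes cpal 0.15 (with the extra timeout argument to build_output_stream), i16 samples, anyhow for brevity, and a plain VecDeque behind a mutex standing in for a proper jitter buffer:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};

// Sketch only: play decoded samples from a shared buffer, zero-filling on underrun.
fn start_playback(buffer: Arc<Mutex<VecDeque<i16>>>) -> anyhow::Result<cpal::Stream> {
    let device = cpal::default_host()
        .default_output_device()
        .ok_or_else(|| anyhow::anyhow!("no output device"))?;
    let config = cpal::StreamConfig {
        channels: 1,
        sample_rate: cpal::SampleRate(48_000),
        buffer_size: cpal::BufferSize::Default,
    };
    let stream = device.build_output_stream(
        &config,
        move |out: &mut [i16], _: &cpal::OutputCallbackInfo| {
            let mut buf = buffer.lock().unwrap();
            for sample in out.iter_mut() {
                *sample = buf.pop_front().unwrap_or(0); // silence on underrun
            }
        },
        |err| eprintln!("output stream error: {err}"),
        None, // no build timeout
    )?;
    stream.play()?;
    Ok(stream)
}
```

The receive loop above would lock the same buffer and extend it with the decoded PCM; in practice you'd bound its size and add a little startup buffering so playback doesn't underrun immediately.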