The initial performance numbers were about 1/5-1/6 of what DotNetty can do, and one of the reasons why I believe this is the case is that gRPC performs a flush on each write to the stream - and you're required to wait for that flush to complete prior to sending the next message. Thus we have a flow control problem that eats up an enormous amount of CPU and destroys throughput.
We need to do what we did in DotNetty a long time ago and add something similar to our https://github.com/akkadotnet/akka.net/blob/dev/src/core/Akka.Remote/Transport/DotNetty/BatchWriter.cs - which is able to group pending writes together into a contiguous (but frame-length encoded) chunk that gets flushed as soon as the transport is ready again.
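The batching semantics described above can be sketched roughly like this - a minimal, language-neutral Python illustration, not the actual BatchWriter API (all names here are hypothetical):

```python
class BatchWriter:
    """Queues frames instead of flushing per write, then flushes the
    whole pending batch in one go when the transport is ready again."""

    def __init__(self, transport_flush):
        # transport_flush: callable that sends a list of frames in a single flush.
        self._transport_flush = transport_flush
        self._pending = []

    def write(self, frame):
        # Just queue the frame; deliberately do NOT flush on every write.
        self._pending.append(frame)

    def on_transport_ready(self):
        # Transport signalled it can accept data: drain everything queued
        # so far as one contiguous batch, amortizing the flush cost.
        if self._pending:
            batch, self._pending = self._pending, []
            self._transport_flush(batch)
```

The key difference from flush-per-write is that N writes arriving while the transport is busy cost one flush, not N.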
I think we can do this by simply modifying our Protobuf message definitions to contain arrays of ByteString rather than a single payload - and we can use semantics similar to what the BatchWriter does.
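The Protobuf change would look something like the following sketch - the message and field names are hypothetical, but the mechanism is the one the issue describes: a `repeated bytes` field (which surfaces as a collection of ByteString in the generated C#) in place of a single payload, so one gRPC write can carry a whole batch:

```protobuf
syntax = "proto3";

// Hypothetical batched envelope: each stream write carries many payloads,
// so the per-write flush cost is amortized across the whole batch.
message BatchedPayload {
  repeated bytes payloads = 1; // multiple serialized messages per write
}
```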