Devilish resize bug #447
I tried to reproduce and localize the issue with the RFX performance degradation, and here are my findings:
General observations
Interesting findings, though it's not clear why this would cause a change in behavior before/after resize. Does the 30ms change before/after? Is FreeRDP's substantially faster?
I think if you look closer you'll find that that's just an ordinary, well-formed mouse move fastpath PDU. If you look at similar FreeRDP packets you'll see that they're slightly different in that they always use the optional
IIRC FreeRDP shows the same "malformed" packet in the same place
One thing that may be worth trying is using another TLS library. We currently support rustls and native-tls, but only rustls gives us
Another thing I vaguely recall, though don't hold me to this, is that at one point IronRDP packet captures were working fine, and only later did they break. It might be worth going back to a version from within the last 2.5 years or so (say, maybe when rustls support was added) and just sanity-checking that this was the case. You could then try to narrow down precisely which commit broke Wireshark TLS parsing to help narrow down the culprit.
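If the point of going down the rustls route is to let Wireshark decrypt the TLS stream, one option is rustls's SSLKEYLOGFILE support via KeyLogFile. This is a sketch only: whether and where IronRDP exposes this hook is an assumption, and the builder API differs a bit across rustls versions.

```rust
// Sketch only: wiring up TLS key logging with rustls so Wireshark can decrypt
// a capture of the RDP session. The function below is illustrative, not an
// IronRDP API; the rustls pieces (ClientConfig::key_log, KeyLogFile) are real.
use std::sync::Arc;

fn tls_config_with_key_log(roots: rustls::RootCertStore) -> rustls::ClientConfig {
    let mut config = rustls::ClientConfig::builder()
        .with_root_certificates(roots)
        .with_no_client_auth();

    // KeyLogFile writes per-session secrets to the file named by the
    // SSLKEYLOGFILE environment variable.
    config.key_log = Arc::new(rustls::KeyLogFile::new());
    config
}
```

With SSLKEYLOGFILE exported before launching the client, and Wireshark's "(Pre)-Master-Secret log filename" setting pointed at the same file, the TLS stream in the capture can be decrypted.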
2 additional observations:
```diff
 dst.write_u8(self.stream_priority.to_u8().unwrap());
 dst.write_u16(cast_length!(
     "uncompressedLength",
     self.share_data_pdu.size()
-        + PDU_TYPE_FIELD_SIZE
-        + COMPRESSION_TYPE_FIELD_SIZE
-        + COMPRESSED_LENGTH_FIELD_SIZE
 )?);
```
```diff
 ironrdp_connector::legacy::encode_share_data(
     self.user_channel_id,
     self.io_channel_id,
-    0,
+    share_id,
     ShareDataPdu::FrameAcknowledge(FrameAcknowledgePdu {
         frame_id: marker.frame_id.unwrap_or(0),
     }),
     output,
 )
```
Oof, I was re-validating my theory today and it looks like that's just not it... It is around 10-30 ms with software cursor processing, yes, but without it, it is actually pretty small and very comparable with FreeRDP, generally taking 1-3 ms, with only occasional spikes around 10 ms (FreeRDP has some spikes too). 😢
You are right, this should indeed be a valid PDU! Looks like the updated course of action is (cc @awakecoding):
Also, tagging @CBenoit in case he has some ideas about this bug too.
Hey @pacmancoder, did we make any progress on some of these bugs we found?
This issue is creeping up again, so I'm trying to get caught up on the previous investigation.
Background
After adding protocol-level support for dynamic resizes via the DisplayControl DVC and the Server Deactivate All PDU in #418, this was all actually hooked up in the ironrdp-client in #430. The initial attempt succeeded in allowing for dynamic resize; however, performance of the client noticeably degraded after a resize or two. This issue does not reproduce with FreeRDP, meaning that it's a bug in IronRDP.
Pointer settings and FreeRDP alignment
We tried many things to resolve the issue, including aligning our pointer settings with FreeRDP's (IronRDP rev f7b4f546650231ce345e9ee67f6ad29b2b93f937 is aligned with FreeRDP ba8cf8cf2158018fb7abbedb51ab245f369be813), all to no avail.
We know the problem is related to the RemoteFX codepath -- when we switched to using bitmaps, it went away.
Profiling
Comparing flamegraphs between an IronRDP session with a resize and one without did not reveal anything of relevance to this issue (flamegraphs attached as a download).
The absolute time of frame processing (basically how long active_stage.process(&mut image, action, &payload)? takes to run) was measured at <1 ms, before and after resize. However, after a resize, the average time between frames (how often frame = framed.read_pdu() => { gets called) goes up 4-5x (30 ms vs 140-150 ms).
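For reference, here is a self-contained sketch of the two measurements described above: time spent processing a frame versus the gap between consecutive frames. The frame source is simulated; in the real client the equivalent instrumentation points would sit around framed.read_pdu() and active_stage.process(...).

```rust
// Minimal, self-contained sketch of the measurement: per-frame processing
// time vs. the inter-arrival gap between frames.
use std::time::{Duration, Instant};

fn main() {
    let mut last_frame_at: Option<Instant> = None;

    for frame in simulated_frames(20) {
        let now = Instant::now();
        if let Some(prev) = last_frame_at {
            // "Time between frames": ~30 ms before resize, 140-150 ms after,
            // according to the observations in this issue.
            println!("inter-frame gap: {:?}", now - prev);
        }
        last_frame_at = Some(now);

        let start = Instant::now();
        process_frame(&frame); // stands in for active_stage.process(&mut image, action, &payload)?
        println!("processing time: {:?}", start.elapsed()); // measured at <1 ms either way
    }
}

fn simulated_frames(n: usize) -> impl Iterator<Item = Vec<u8>> {
    (0..n).map(|_| {
        std::thread::sleep(Duration::from_millis(30)); // pretend the server paces frames
        vec![0u8; 1024]
    })
}

fn process_frame(_frame: &[u8]) {
    // decode/render would happen here
}
```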
Temp Solution
What finally gave us a temporary solution was upping the maxUnacknowledgedFrameCount from 2 to >= 10 (credit to @probakowski for discovering this). We know that this is not any sort of root cause, because FreeRDP uses 2 for this field.
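A tiny sketch of the value change follows (the role this field plays is described in the next section). The struct and field names here are hypothetical illustrations, not IronRDP's actual types.

```rust
// Illustrative only: models the capability value that was changed as the
// temporary workaround. These names are hypothetical, not IronRDP's API.
struct FrameAcknowledgeCapability {
    max_unacknowledged_frame_count: u32,
}

fn main() {
    // What was advertised before (and what FreeRDP advertises as well).
    let before = FrameAcknowledgeCapability { max_unacknowledged_frame_count: 2 };
    // Temporary workaround from this issue: allow 10+ frames in flight so the
    // server rarely stalls waiting for our ACK.
    let after = FrameAcknowledgeCapability { max_unacknowledged_frame_count: 10 };
    println!(
        "maxUnacknowledgedFrameCount: {} -> {}",
        before.max_unacknowledged_frame_count, after.max_unacknowledged_frame_count
    );
}
```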
Implications, what to examine next
The maxUnacknowledgedFrameCount plays a role in when the server sends us frame updates: after sending us maxUnacknowledgedFrameCount frames, it waits for an ACK response before sending the next frame. The fact that bumping this value more or less solves the performance issue suggests that the holdup is related to that mechanism. Logically this seems to imply that the problem is either <1 ms timeframe of active_stage.process(&mut image, action, &payload)? before and after resize, and that timescale is noticeable to the naked eye wrt performance.
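To make the suspected mechanism concrete, here is a small, self-contained model (purely illustrative, not server or IronRDP code) of that pacing: the sender keeps at most maxUnacknowledgedFrameCount frames in flight and stalls until an ACK arrives. If, hypothetically, the client's ACK is delayed after a resize, the average inter-frame gap grows with that delay when the cap is 2, but much less so when the cap is 10, which is consistent with the workaround above.

```rust
// Toy model of the frame pacing described above: the server may have at most
// `max_unacknowledged` frames outstanding; once the window is full it waits
// for the oldest ACK before sending the next frame.
use std::collections::VecDeque;

fn simulate(max_unacknowledged: u32, ack_delay_ms: u64, frames: u32) -> u64 {
    let mut outstanding = 0u32;
    let mut clock_ms = 0u64;
    let mut last_sent_ms = 0u64;
    let mut total_gap_ms = 0u64;
    let mut pending_acks: VecDeque<u64> = VecDeque::new();

    for i in 0..frames {
        // If the in-flight window is full, wait for the oldest ACK.
        if outstanding >= max_unacknowledged {
            let ack_at = pending_acks.pop_front().unwrap();
            clock_ms = clock_ms.max(ack_at);
            outstanding -= 1;
        }
        // "Send" the next frame; assume encoding/transmit costs ~1 ms.
        clock_ms += 1;
        if i > 0 {
            total_gap_ms += clock_ms - last_sent_ms;
        }
        last_sent_ms = clock_ms;
        outstanding += 1;
        // The client ACKs this frame ack_delay_ms after it was sent.
        pending_acks.push_back(clock_ms + ack_delay_ms);
    }

    total_gap_ms / u64::from(frames - 1)
}

fn main() {
    println!("cap=2,  ack=5ms   -> avg gap {} ms", simulate(2, 5, 100));
    println!("cap=2,  ack=140ms -> avg gap {} ms", simulate(2, 140, 100));
    println!("cap=10, ack=140ms -> avg gap {} ms", simulate(10, 140, 100));
}
```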