Remove buggy premature sendPlayerPos optimization #15532

Open · wants to merge 1 commit into master from fix/outdated-controls

Conversation

appgurueu
Contributor

@appgurueu appgurueu commented Dec 8, 2024

Untested attempt at fixing #15528 based on a theory of what could go wrong: Since this particular packet is sent unreliably, it may arrive out of order. Of course the client thinks it has sent the last values (which to the client haven't changed) and thus doesn't resend them. But on the server, an outdated packet ends up overriding the new packet.

This attempted fix is very simple: Just remove the early return. I don't think this is problematic. A server will have no problem processing 10 trivial packets from a client per second, and the network won't care much either. There will be plenty of other packets going around anyways.

Side note: I believe get_player_control() is currently bogus for dead players, but I suppose that doesn't matter much.

@appgurueu appgurueu added the Network, Trivial (a trivial bug fix, documentation or maintenance change, per the Git Guidelines) and Bugfix 🐛 (PRs that fix a bug) labels on Dec 8, 2024
@appgurueu appgurueu force-pushed the fix/outdated-controls branch from bfe13f3 to 26c9660 Compare December 8, 2024 22:45
@sfan5 sfan5 removed the Trivial The change is a trivial bug fix, documentation or maintenance change, as per the Git Guidelines label Dec 8, 2024
@lhofhansl
Contributor

I'm unclear on the implications of removing the early out when the position has not changed.
Are there legitimate reasons to send the player position repeatedly without changes?

@sfan5
Collaborator

sfan5 commented Dec 9, 2024

The problem with the current approach is obvious but I don't think we should hand-wave the implications on server load.
I propose throttling the re-sending to once per second when the data hasn't changed.

(also obligatory mention that many potential problems in the space of "is this value outdated or fresh" are solved by #15014)

Untested attempt at fixing #15528 based on a theory of what could go wrong

Worth noting that if OP has tested in singleplayer then we can exclude this cause and the issue would lie elsewhere.

@appgurueu
Contributor Author

The problem with the current approach is obvious but I don't think we should hand-wave the implications on server load.

I don't think this is handwavey at all. This packet is 47 bytes. That's next to nothing. Add some protocol overhead, let's say it ends up as 64 bytes. Currently there's a timer to send ten of them per second (though I would be inclined to increase that to twenty). That's half a KB (or a KB) per second per player. That's almost nothing in terms of bandwidth. Sending a single tiny packet with vital information (position, controls, look direction etc.) per step to the server must be acceptable for a game engine. 1

Just for fun, I joined CTF to look at my outgoing packets in Wireshark, and sure enough, player pos packets are not the concern here:

[Screenshot from 2024-12-09 13-28-52: Wireshark capture of outgoing packets]

(Not to mention that the "optimization" only saves anything at all if a player is essentially completely idle from the server's point of view, and that there are of course many more packets going the other way.)

Capping this to once per second seems like unnecessary complexity and unnecessary potential jank to me.

also obligatory mention that many potential problems in the space of "is this value outdated or fresh" are solved by #15014

Yep, I'm aware of this, but I don't know when that will happen. (Also related: The proposal of using a reliable packet and #15475.)

Worth noting that if OP has tested in singleplayer then we can exclude this cause

Can we though? Supposedly UDP packet reordering can happen even on loopback.

Not to mention that we would still want to fix it for multiplayer scenarios.

Footnotes

  1. If we're concerned about too many packets and how to best batch them (à la Nagle's algorithm), that should be something for the transport layer to worry about (see Add QUIC support for Minetest #15261).

@sfan5
Collaborator

sfan5 commented Dec 9, 2024

I don't disagree that the server should handle it (and it likely does), but looking for reasons not to reduce network traffic is the wrong approach.
The only downside is that possible bogus states last on average 500ms instead of 50ms.

the "optimization" only saves anything at all if a player is essentially completely idle from the server's point of view

It's not that rare for people to go afk or spend extended time in formspecs.

Not to mention that we would still want to fix it for multiplayer scenarios.

Sure. Just saying this isn't necessarily "the" fix.

If we're concerned about too many packets and how to best batch them (à la Nagle's algorithm), [...]

Since they're latency-critical we won't ever want to apply batching to position updates.

@appgurueu
Contributor Author

The only downside is that possible bogus states last on average 500ms instead of 50ms.

That is a tangible downside to me. A player will notice if a gun continues firing for half a second. It's unnecessary jank that could have been prevented at (as it seems to me so far) negligible cost (and a minuscule code simplification as a bonus).

Since they're latency-critical we won't ever want to apply batching to position updates.

I don't think this follows. As you can see, we already send multiple other (likely not latency-critical) packets per step in a typical setting. The position update could just piggyback off those without any increase in latency if the number of packets sent turned out to be a problem. (Not that it would be necessary; just batching the other packets would likely already suffice to resolve that.)

Of course we don't want to batch separate position updates from different steps or to wait too long for packets after a position update, that would be silly.

@sfan5
Collaborator

sfan5 commented Dec 11, 2024

My opinion on the current state of the PR remains 👎
