
Don't recompress unchanged packets #3240

Status: Draft. Wants to merge 5 commits into base: master.
Conversation

@Janmm14 (Contributor) commented Jan 8, 2022

Based on #3238 (comment)

Old: I created a working example of keeping the compressed data until we change the packet.

However, as of now it does not work, I am unable to find the problem, and I hope it's not because I'm too tired.

It works when I stop letting this if-statement evaluate to true: https://github.com/SpigotMC/BungeeCord/pull/3240/files#diff-c1c0d1f8d0b4bc645d9a23cd0daee5755bd8f3f58af400763d8ccfb0bbe5ac06R70

Edit: I am definitely too tired. I just sliced too late.

Old, original WIP code: https://github.com/Janmm14/BungeeCord/tree/dont-re-compress-unchanged-packets-orig

Plans for this PR:

  • Put the EntityMap newId = oldId checks in their own commit
  • Deduplicate EntityMap code for older mc versions, will stay part of this PR

Changes

  1. This PR first refactors the EntityMaps, as the no-recompress changes touch the EntityMaps and that code was not nice to look at.

  2. Then the main changes happen:

    • PacketWrapper can contain a compressed ByteBuf.
    • The PacketDecompressor, instead of the MinecraftDecoder, creates PacketWrappers on the pipeline for incoming compressed packets, containing both the decompressed and the compressed buffers.
    • The MinecraftDecoder is edited to support both ByteBuf and PacketWrapper as input.
    • The MinecraftDecoder removes the compressed data for fully decoded packets.
    • Writes to the pipeline pass the whole PacketWrapper instead of just its buf.
    • If present, the PacketCompressor can now also handle PacketWrappers and will write the original compressed contents if available.
    • Otherwise, the Varint21LengthFieldPrepender can also handle PacketWrappers and will just write their buf.
  3. At the end, I put in further optimizations of the EntityMaps, including no longer involving the connections lock when a SpawnPlayer packet is sent with an online-mode UUID (the default with ip_forward enabled).
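The pipeline changes above can be sketched roughly as follows. This is a simplified, hypothetical illustration using plain byte[] buffers in place of Netty ByteBufs; the names (PacketWrapper fields, invalidateCompressed, encode) are stand-ins for illustration, not BungeeCord's actual API.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

// Simplified sketch of the no-recompress idea. byte[] stands in for ByteBuf;
// names are hypothetical, not the real BungeeCord classes.
public class NoRecompressSketch
{
    // A wrapper that keeps the original compressed bytes alongside the
    // decompressed packet, so they can be reused if the packet is unchanged.
    public static class PacketWrapper
    {
        public final byte[] buf;  // decompressed packet data
        public byte[] compressed; // original compressed data, or null

        public PacketWrapper(byte[] buf, byte[] compressed)
        {
            this.buf = buf;
            this.compressed = compressed;
        }

        // Must be called whenever the packet contents are modified
        // (e.g. by entity rewriting), so stale compressed data is not sent.
        public void invalidateCompressed()
        {
            this.compressed = null;
        }
    }

    // Compressor stage: reuse the original compressed bytes when present,
    // otherwise pass through (below threshold) or compress.
    public static byte[] encode(PacketWrapper packet, int threshold)
    {
        if ( packet.compressed != null )
        {
            return packet.compressed; // unchanged packet: no recompression
        }
        if ( packet.buf.length < threshold )
        {
            return packet.buf; // below threshold: sent uncompressed
        }
        return deflate( packet.buf );
    }

    static byte[] deflate(byte[] data)
    {
        Deflater deflater = new Deflater();
        deflater.setInput( data );
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[ 1024 ];
        while ( !deflater.finished() )
        {
            out.write( chunk, 0, deflater.deflate( chunk ) );
        }
        deflater.end();
        return out.toByteArray();
    }
}
```

The key invariant is the invalidation call: any stage that mutates buf must drop the cached compressed bytes, otherwise stale data would be forwarded.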

Testing

As these changes are rather large, I am asking for some people to test them in addition to me.
I am fairly confident that the EntityMap changes are bug-free as 99% of the code I deduplicated was exactly identical.

Test build: https://ci.janmm14.de/job/public~mcjava~bungeecord-norecompress-entitymap/

@Janmm14 Janmm14 force-pushed the dont-re-compress-unchanged-packets branch 3 times, most recently from 77eab47 to 272de22 Compare January 9, 2022 20:06
@Janmm14 Janmm14 marked this pull request as ready for review January 9, 2022 20:36
@andreasdc commented:

Is this going to be the next dusty PR that could potentially bring a performance boost without any complications?

@xism4 (Contributor) commented Jan 30, 2022

> Is this going to be the next dusty PR that could potentially bring a performance boost without any complications?

Well, remember that we need to wait for md_5 to see if it will work well; stability is more important.

@Janmm14 (Contributor, Author) commented Jan 30, 2022

Yeah, the problem is that I do not operate any server that I could test this on.
That's why I provide test builds, so someone could test it besides me.

But apart from the entity map rework, the changes should not cause subtle problems: either the packet works or it doesn't.

@andreasdc commented:

> Yeah, the problem is that I do not operate any server that I could test this on. That's why I provide test builds, so someone could test it besides me.
>
> But apart from the entity map rework, the changes should not cause subtle problems: either the packet works or it doesn't.

I can test it if you send it to me as a FlameCord patch.

@xism4 (Contributor) commented Feb 2, 2022

Try his repository and check if it's working.

@andreasdc commented:

:qeq:, just try his repository and check if it's working. Anyway, I tried it and had 0 issues.

I would test this in production :pep:

@andreasdc commented:

> :qeq:, just try his repository and check if it's working. Anyway, I tried it and had 0 issues.
>
> I would test this in production :pep:

> You are a real fan of FlameCord, as I see XD; I would try Waterfall, best for this.

I can't go back from the security fixes etc., so the only way for me to test it in production is by adding it to FlameCord.

@linsaftw commented:

I will implement this into FlameCord if you give me permission to. We can test it on many servers that agree on experimental security/performance fixes.

@Janmm14 (Contributor, Author) commented Feb 16, 2022

> I will implement this into FlameCord if you give me permission to. We can test it on many servers that agree to experimental security/performance fixes.

Permission granted for this patch to be included in FlameCord, Waterfall and IvanCord, if credit is given to me in the patch commit notes and the patch-adding commit.
Forks of the listed projects may inherit these patches or ask for permission separately.

This patch could break things like ViaVersion-on-bungee or other bungee plugins which do not restrict themselves to the bungee API, especially if they interact with the Netty pipeline.

@MrIvanPlays (Contributor) commented:

IvanCord already breaks Via plugins, so I'm going to implement it just like some of the other patches you have :)

@MrIvanPlays (Contributor) commented:

I keep getting timeouts. compression-threshold: 256 both on the server I'm connecting to and on bungee.

@andreasdc commented:

@Janmm14

@Janmm14 (Contributor, Author) commented Mar 2, 2022

Honestly, I have no clue what might be going wrong, @MrIvanPlays.
Someone else might have to add some brain power.

@MrIvanPlays (Contributor) commented:

I will reimplement later and share the patch to make sure I haven't implemented something wrong.

@MrIvanPlays (Contributor) commented:

interesting... now it works fine... ok java ig

@andreasdc commented Mar 3, 2022

> interesting... now it works fine... ok java ig

Your patch looks shorter; is that all that is needed?
MrIvanPlays/IvanCord@dfdc7d7
If yes, I can test it too :)

@Janmm14 (Contributor, Author) commented Mar 3, 2022

@MrIvanPlays
You need to invalidate the compressed buffer when entity rewriting changes the buffer contents.

@andreasdc commented Mar 3, 2022

> @MrIvanPlays You need to invalidate the compressed buffer when entity rewriting changes the buffer contents.

BTW, no recompressing means that if you have a 4 kB chunk packet, you won't compress it to a smaller size?

@Janmm14 (Contributor, Author) commented Mar 3, 2022

> BTW, no recompressing means that if you have a 4 kB chunk packet, you won't compress it to a smaller size?

No. This patch lowers the CPU usage of bungee if your Spigot servers do not have the compression threshold disabled.

For packets bungee does not need to edit (like chunk packets), this patch will keep and send the compressed data. This means bungee does not need to compress data again which it already received compressed.

@andreasdc commented:

> No. This patch lowers the CPU usage of bungee if your Spigot servers do not have the compression threshold disabled.
>
> For packets bungee does not need to edit (like chunk packets), this patch will keep and send the compressed data. This means bungee does not need to compress data again which it already received compressed.

What if I have the Bukkit threshold set to 4000, for example? Will it be sent like this to the client? It will be a pretty big packet.

@Janmm14 (Contributor, Author) commented Mar 3, 2022

> What if I have the Bukkit threshold set to 4000, for example? Will it be sent like this to the client? It will be a pretty big packet.

Bungee's threshold is still respected. If the Bukkit threshold is at 4000 and the bungee threshold at 256, packets between size 256 and 4000, as well as any packets bungee had to change, still get compressed by bungee.

@andreasdc commented Mar 3, 2022

> Bungee's threshold is still respected. If the Bukkit threshold is at 4000 and the bungee threshold at 256, packets between size 256 and 4000, as well as any packets bungee had to change, still get compressed by bungee.

So they will be decompressed and compressed, right?

@Janmm14 (Contributor, Author) commented Mar 3, 2022

> So they will be decompressed and compressed, right?

If Bukkit's threshold is at 4000, packets below 4000 are not compressed by Bukkit.
Packets of size 4000 or greater (which arrive compressed at bungee) that need to be edited by bungee will be decompressed and recompressed (after the changes are made) on bungee.

@andreasdc commented Mar 3, 2022

So without the same threshold on Bukkit it won't bring anything, right?

@Janmm14 (Contributor, Author) commented Mar 3, 2022

> So without the same threshold on Bukkit it won't bring anything, right?

I think you misunderstood.
Using a Bukkit threshold of 4000 and a bungee threshold of 256:

Packets < 256 always stay uncompressed.
Packets >= 256 and < 4000 are transferred uncompressed between Bukkit and bungee and get compressed on bungee.
For packets >= 4000, this change has an impact, provided the specific packet does not need to be edited on bungee (which is the case for chunk packets, for example). These packets are sent compressed by Bukkit to bungee; with this change, bungee keeps the compressed data and sends it to the player as-is, so bungee no longer has to compress the packet.
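These three cases can be summarized as a small decision function. This is a hypothetical sketch (names invented for illustration, not BungeeCord code), assuming the example thresholds of 4000 on Bukkit and 256 on bungee:

```java
// Hypothetical sketch of the three threshold cases discussed above.
// Not BungeeCord's real API; names are illustrative.
public class ThresholdCases
{
    public static String handle(int uncompressedSize, boolean editedByBungee,
                                int bukkitThreshold, int bungeeThreshold)
    {
        // Packets at or above Bukkit's threshold arrive at bungee compressed.
        boolean arrivedCompressed = uncompressedSize >= bukkitThreshold;
        if ( arrivedCompressed && !editedByBungee )
        {
            // The point of this PR: forward the original compressed bytes.
            return "forward-original-compressed";
        }
        if ( uncompressedSize >= bungeeThreshold )
        {
            // Bungee's own threshold is still respected: (re)compress here.
            return "compress-on-bungee";
        }
        return "send-uncompressed";
    }
}
```

For example, handle(5000, false, 4000, 256) takes the forwarding path, while handle(5000, true, 4000, 256) still recompresses because entity rewriting changed the buffer.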

@andreasdc commented:

But you said that bungee's threshold is still respected, so will you send a 4000-byte packet with a 256 threshold?

@MrIvanPlays (Contributor) commented:

@Janmm14 I have entity rewrite disabled.

@Janmm14 (Contributor, Author) commented Mar 3, 2022

> Minimum to start compressing, or how does it work? So with a 256 threshold the packet size won't be 256 (bytes)?

When the threshold is 256, packets with size <= 255 bytes are sent uncompressed and packets with size >= 256 bytes are sent compressed.

@andreasdc commented:

> > But you said that bungee's threshold is still respected, so will you send a 4000-byte packet with a 256 threshold?
>
> The threshold is a minimum. 4000 > 256.

> > Minimum to start compressing, or how does it work? So with a 256 threshold the packet size won't be 256 (bytes)?
>
> When the threshold is 256, packets with size <= 255 bytes are sent uncompressed and packets with size >= 256 bytes are sent compressed.

Exactly, compressed to 256, right? When you don't recompress, you send a 4000-byte packet, yes?

@Janmm14 (Contributor, Author) commented Mar 3, 2022

> Exactly, compressed to 256, right? When you don't recompress, you send a 4000-byte packet, yes?

When I talk about packet size, I mean the size of the uncompressed packet (in bytes).
Compression is similar to putting a file into a zip archive. The size of a packet's compressed data is unrelated to the threshold; it depends on how well the uncompressed data can be compressed. A packet of size 4000 could lead to a compressed data size of, for example, 1913 or 3468 bytes.
AFAIK the Minecraft protocol requires packets with a size greater than the set compression threshold to be compressed.
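The point that compressed size depends on the content, not on the threshold, is easy to demonstrate with java.util.zip.Deflater (the same zlib compression the protocol uses). A small self-contained demo, not BungeeCord code: two 4000-byte payloads deflate to very different sizes.

```java
import java.util.Random;
import java.util.zip.Deflater;

// Demonstration that compressed size depends on the content:
// two 4000-byte payloads deflate to very different sizes.
public class CompressedSizeDemo
{
    public static int deflatedSize(byte[] data)
    {
        Deflater deflater = new Deflater();
        deflater.setInput( data );
        deflater.finish();
        // Output buffer large enough to hold the whole result in one call.
        byte[] out = new byte[ data.length * 2 + 64 ];
        int len = deflater.deflate( out );
        deflater.end();
        return len;
    }

    public static void main(String[] args)
    {
        byte[] repetitive = new byte[ 4000 ]; // all zeroes: compresses very well
        byte[] noisy = new byte[ 4000 ];
        new Random( 42 ).nextBytes( noisy );  // random bytes: barely compress
        System.out.println( deflatedSize( repetitive ) ); // small
        System.out.println( deflatedSize( noisy ) );      // close to (or above) 4000
    }
}
```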

@andreasdc commented:

I thought a 256 threshold would not allow a packet with a compressed size larger than 256 bytes.

@Janmm14 (Contributor, Author) commented Mar 3, 2022

> I thought a 256 threshold would not allow a packet with a compressed size larger than 256 bytes.

Well, that's wrong. A 256 threshold means that only packets whose uncompressed size is at least 256 bytes will be compressed before sending.

Reason:
Attempting to compress small amounts of data is often inefficient, as there are not enough bytes for compression algorithms to find anything that could be compressed.
Trying to compress very small amounts of data can even make the compressed data bigger than the uncompressed data. And even if 128 bytes could be compressed to 100 bytes, the saving is not enough to justify the CPU time of the compression and decompression algorithms.
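That overhead can be observed directly with java.util.zip.Deflater: for a tiny, hard-to-compress payload, the zlib framing (header, checksum, block headers) alone makes the output larger than the input. A small self-contained illustration, not BungeeCord code:

```java
import java.util.Random;
import java.util.zip.Deflater;

// Why small packets are not worth compressing: for a tiny incompressible
// payload, the deflated output is larger than the input.
public class SmallPayloadDemo
{
    public static int deflatedSize(byte[] data)
    {
        Deflater deflater = new Deflater();
        deflater.setInput( data );
        deflater.finish();
        // Buffer with headroom for the zlib framing overhead.
        byte[] out = new byte[ data.length + 128 ];
        int len = deflater.deflate( out );
        deflater.end();
        return len;
    }

    public static void main(String[] args)
    {
        byte[] tiny = new byte[ 64 ];
        new Random( 1 ).nextBytes( tiny ); // 64 random bytes: effectively incompressible
        // The output carries header/checksum overhead, so it exceeds 64 bytes.
        System.out.println( deflatedSize( tiny ) );
    }
}
```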

@andreasdc commented Mar 3, 2022

> Well, that's wrong. A 256 threshold means that only packets whose uncompressed size is at least 256 bytes will be compressed before sending.

Ok, I think I understand now. So the threshold doesn't impact the size; I can't wait for your patches to be implemented in Bungee. If @MrIvanPlays got everything right just with entity rewrite disabled, I'm going to test it too. 🚀 Thanks for the explanation and the work on this patch. ❤️
Edit: After short testing, 0 problems.

@MrIvanPlays (Contributor) commented:

@andreasdc do you have Discord so we can chat?

skipString( buf, Short.MAX_VALUE );
}

public static void skipString(ByteBuf buf, int maxLen)
@Outfluencer (Collaborator) commented Nov 17, 2022

Are two methods necessary for this? As we skip the bytes anyway, I think we don't need an extra limit and can just skip the length.

@caoli5288 (Contributor) commented:

Perhaps a tunneling protocol could also remove unnecessary decompression.

Just like this:

bytes         | varint      | bytes
magic numbers | packet type | compressed packet data

I did this in my private branch, with CPU usage going from 1000% to 200% per thousand players.

@caoli5288 (Contributor) commented:

> @Janmm14 I have entity rewrite disabled.

Doesn't this cause problems? For example, the player's status being incorrect.

@Janmm14 (Contributor, Author) commented Dec 22, 2022

@caoli5288 wrote:

> > @Janmm14 I have entity rewrite disabled.
>
> Doesn't this cause problems? For example, the player's status being incorrect.

Bungee doesn't use entity rewrite for newer MC versions.
Waterfall did the same earlier, and for all MC versions.
The method sends a new JoinGame packet (or similar), so the client's player entity id gets changed inside the client.

@md-5 (Member) commented Dec 22, 2022

It does for offline mode

@Janmm14 (Contributor, Author) commented Dec 22, 2022

> It does for offline mode

Ah yes, but that's limited to a few UUID rewrites.

@Outfluencer (Collaborator) commented:

bump

@andreasdc commented:

1.20.2 update? 🚀 There are changes to PacketWrapper

@Janmm14 Janmm14 force-pushed the dont-re-compress-unchanged-packets branch 2 times, most recently from 9dbb458 to bc8151e Compare October 31, 2023 14:44
@Janmm14 Janmm14 force-pushed the dont-re-compress-unchanged-packets branch from bc8151e to a4449be Compare November 9, 2023 16:31
@Janmm14 Janmm14 marked this pull request as draft April 23, 2024 19:39

8 participants