
Bump VALIDATION_CODE_BOMB_LIMIT #7710

Open · wants to merge 3 commits into base: master

Conversation

@sandreim (Contributor) commented Feb 25, 2025

It used to be 16MiB some time ago. Bumping to 32MiB sounds sensible to me, as we've seen some recent runtimes expanding close to the previous 16MiB.

This is just a short-term fix for #7664 - it needs to be backported and deployed ASAP.

While this unblocks AssetHub, I will create another PR to promote it into a HostConfiguration parameter with an associated runtime API.
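As described, the change presumably amounts to bumping a node-side constant along these lines (a sketch only; the exact definition and location in polkadot-sdk may differ):

```rust
/// Upper bound on the decompressed ("bomb") size of validation code.
/// Sketch only: the proposal bumps this from 16 MiB to 32 MiB.
pub const VALIDATION_CODE_BOMB_LIMIT: usize = 32 * 1024 * 1024;
```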

@sandreim sandreim added T0-node This PR/Issue is related to the topic “node”. T8-polkadot This PR/Issue is related to/affects the Polkadot network. A4-needs-backport Pull request must be backported to all maintained releases. labels Feb 25, 2025
@sandreim (Contributor, Author) commented Feb 25, 2025

/cmd prdoc --bump patch --audience node_dev

Signed-off-by: Andrei Sandu <[email protected]>
@bkchr (Member) left a comment

This can lead to disputes? When 3/4 + 1 have upgraded and someone uploads such a big runtime, other nodes that do not support this will start disputes?

@sandreim (Contributor, Author) commented Feb 26, 2025

> This can lead to disputes? When 3/4 + 1 have upgraded and someone uploads such a big runtime, other nodes that do not support this will start disputes?

The same concern has prevented me from merging it so far.

However, it should not. I have looked at the code, and this error is handled as a deterministic internal error. At worst, the PVF pre-check would fail if not enough validators have upgraded. Otherwise, if it passed pre-check, un-upgraded validators will not issue a dispute and will just log an error.

@s0me0ne-unkn0wn can you confirm the above?

@alexggh (Contributor) commented Feb 26, 2025

> Otherwise, if it passed pre-check, un-upgraded validators will not issue a dispute and will just log an error.

But they would still be no-shows and would increase the finality lag for everyone. With 500 validators and 1/4 of them being no-shows, my back-of-the-envelope math says the finality lag will be around 15 blocks on average.

Rough explanation (a small sketch of the calculation follows the list):

  • Round 1: You need 30 assignments; assuming 1/4 are no-shows at random, you get 7.5 no-shows.
  • Round 2: You need 8 tranches to cover the no-shows: 2.33 assignments per tranche * 8 ≈ 18 assignments; 1/4 no-shows ≈ 4.5 no-shows.
  • Round 3: You need 5 tranches to cover the no-shows: 2.33 * 5 = 11.65 assignments, 2.91 no-shows.
  • Round 4: You need 3 tranches: 2.33 * 3 = 6.99 assignments, 1.74 no-shows.
  • Round 5: You need 2 tranches: 2.33 * 2 = 4.66 assignments, 1.16 no-shows; depending on how lucky we are, we should be able to finalize here or in the next round.
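A minimal sketch of this estimate, under the assumptions above (30 required assignments, ~2.33 assignments triggered per tranche, one extra tranche per no-show, a 1/4 no-show rate). It models expected values only, not the real approval-voting logic, and reproduces the rounds above up to rounding:

```rust
fn main() {
    let per_tranche = 2.33_f64; // expected assignments triggered per tranche
    let no_show_rate = 0.25_f64; // assumed fraction of un-upgraded validators

    // Round 1: 30 assignments are needed; 1/4 of them are expected no-shows.
    let mut no_shows = 30.0 * no_show_rate; // 7.5
    println!("round 1: 30.00 assignments, {no_shows:.2} no-shows");

    for round in 2..=5 {
        // Each expected no-show triggers (roughly) one extra tranche.
        let tranches = no_shows.ceil();
        let assignments = tranches * per_tranche;
        no_shows = assignments * no_show_rate;
        println!("round {round}: {tranches} tranches, {assignments:.2} assignments, {no_shows:.2} no-shows");
    }
    // After round 5 the expected no-show count is close to one, so
    // finalization plausibly happens in that round or the next.
}
```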

@eskimor (Member) commented Feb 26, 2025

Apart from the fact that it should be 2/3rds + 1, this concern sounds valid. I think the best we can do here is to roll this out in two phases to be safe:

  1. Bump the limit only if not pre-checking.
  2. Bump the limit in pre-checking too once enough nodes upgraded.

Assuming validators at least follow minor version bumps, the time between (1) and (2) can hopefully be rather small if we backport to all running releases. I guess validators at least following point releases is just wishful thinking though?

Alternative hack: temporarily increase the pre-checking threshold to something that makes the worst-case finality lag acceptable. This requires a runtime upgrade, so it will also take a while, and the problem stays roughly the same: the bigger limit won't be usable until enough validators have upgraded. But it is a bit safer, as "enough validators upgraded" is then enforced by the protocol.

@s0me0ne-unkn0wn (Contributor)

> @s0me0ne-unkn0wn can you confirm the above?

We never dispute on the basis of pre-checking results. The reasoning is that the pre-checking results are themselves the basis for future disputes. If the pre-checking has passed successfully and a later preparation (with the lenient timeout) fails, we dispute. If the pre-checking fails, we never enact the PVF in the first place, so there's no reason to dispute. So the pre-checking output is strictly binary.

Also, FWIW, the note on rejecting PVFs from the docs:

> It is possible that the candidate validation was not able to check the PVF, e.g. if it timed out. In that case, the PVF pre-checker will vote against it. This is considered safe, as there is no slashing for being on the wrong side of a pre-check vote.

> Rejecting instead of abstaining is better in several ways:
>
> • Conclusion is reached faster - we have actual votes, instead of relying on a timeout.
> • Being strict in pre-checking makes it safer to be more lenient in preparation errors afterwards. Hence we have more leeway in avoiding raising dubious disputes, without making things less secure.
>
> Also, if we only abstain, an attacker can specially craft a PVF wasm blob so that it will fail on e.g. 50% of the validators. In that case a supermajority will never be reached and the vote will repeat multiple times, most likely with the same result (since all votes are cleared on a session change). This is avoided by rejecting failed PVFs, and by only requiring 1/3 of validators to reject a PVF to reach a decision.
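To make the decision rule concrete, here is a hedged sketch of the thresholds described above. The function name and signature are invented for illustration and are not taken from polkadot-sdk:

```rust
/// Outcome of a PVF pre-check vote: Some(true) = accept, Some(false) =
/// reject, None = still pending. Per the description above, a 2/3
/// supermajority accepts, while 1/3 of the validator set suffices to reject.
fn precheck_outcome(ayes: u32, nays: u32, n_validators: u32) -> Option<bool> {
    let accept_threshold = n_validators * 2 / 3 + 1; // 2/3 supermajority
    let reject_threshold = n_validators.div_ceil(3); // 1/3 of the set
    if ayes >= accept_threshold {
        Some(true) // PVF can be enacted
    } else if nays >= reject_threshold {
        Some(false) // never enacted, hence never disputed
    } else {
        None // keep collecting votes
    }
}

fn main() {
    // With 500 validators: 334 ayes accept, 167 nays reject.
    assert_eq!(precheck_outcome(334, 0, 500), Some(true));
    assert_eq!(precheck_outcome(0, 167, 500), Some(false));
    assert_eq!(precheck_outcome(300, 100, 500), None);
}
```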

@bkchr (Member) commented Feb 26, 2025

> I think the best we can do here is to roll this out in two phases to be safe:
>
> 1. Bump the limit only if not pre-checking.
> 2. Bump the limit in pre-checking too once enough nodes upgraded.

Or we directly do it via the HostConfiguration. Then we only need to wait once for enough people to upgrade.

@sandreim (Contributor, Author)

> Apart from the fact that it should be 2/3rds + 1, this concern sounds valid.

If the concern is finality, then yes, I agree: the concern is increased finality lag, which we can mitigate by pinging validators to upgrade.

> 1. Bump the limit only if not pre-checking.
> 2. Bump the limit in pre-checking too once enough nodes upgraded.

This approach is faster than the HostConfiguration approach, since we don't need to wait for runtime upgrades.
However, after step 1 there wouldn't be any improvement over the status quo: parachains will still not be able to upgrade their runtimes until we get to phase 2, which also requires waiting for nodes to be upgraded.

So, the options, ordered by how fast they fix the problem, are:

  • One-shot: this PR.
  • Two-step node-side approach: change the validation limit, then the pre-checking limit.
  • HostConfiguration approach: requires an SDK major release, a fellowship release, and enactment on the relay chain.

What we choose depends on how soon we need this fixed in production, which depends on the impact. Are teams already affected, or do we know that they soon will be?

@bkchr (Member) commented Feb 26, 2025

> This approach is faster than the HostConfiguration approach, since we don't need to wait for runtime upgrades.

The HostConfiguration changes can be prepared and rolled out to validators quite fast. While we need to wait for them to upgrade anyway, the runtime upgrade can happen in parallel.

@sandreim (Contributor, Author)

> > This approach is faster than the HostConfiguration approach, since we don't need to wait for runtime upgrades.
>
> The HostConfiguration changes can be prepared and rolled out to validators quite fast. While we need to wait for them to upgrade anyway, the runtime upgrade can happen in parallel.

I don't think this will happen fast because it requires a stable release, as it is a major bump due to the HostConfiguration struct change.

@bkchr (Member) commented Feb 26, 2025

> I don't think this will happen fast because it requires a stable release, as it is a major bump due to the HostConfiguration struct change.

Then just implement this in a backwards-compatible way.

@sandreim (Contributor, Author) commented Feb 27, 2025

> > I don't think this will happen fast because it requires a stable release, as it is a major bump due to the HostConfiguration struct change.
>
> Then just implement this in a backwards-compatible way.

I cannot find a way to change that struct without adding code bloat: it would require a separate storage value plus active/pending per-session tracking of the parameter change.

So I'm proposing a 4th option:

  • Obsolete the constant VALIDATION_CODE_BOMB_LIMIT in the node primitives.
  • Introduce a new constant in the runtime itself, MAX_VALIDATION_CODE_COMPRESSION_RATIO, that is used to compute the bomb limit based on max_code_size.
  • Add a runtime API to retrieve the code bomb limit.

Doing this results in a limit of 30MB with the current configuration, and it is also future-proof: if we increase the code size, the bomb limit increases as well. It will be a minor semver bump, so we won't break anything and can do a quick release. On the node side the semver could be trickier, but we can manage that, as only our node uses the affected crates.
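A minimal sketch of how the derived limit could look. The ratio value and the 3 MiB max_code_size below are illustrative assumptions, chosen so the numbers match the 30MB figure above; this is not the merged implementation:

```rust
/// Assumed compression ratio bound; the real value would live in the runtime.
const MAX_VALIDATION_CODE_COMPRESSION_RATIO: u32 = 10;

/// Derive the code bomb limit from the configured maximum code size,
/// instead of hard-coding it in the node primitives.
fn validation_code_bomb_limit(max_code_size: u32) -> u32 {
    max_code_size.saturating_mul(MAX_VALIDATION_CODE_COMPRESSION_RATIO)
}

fn main() {
    // With an illustrative 3 MiB max_code_size this yields 30 MiB, and the
    // limit scales automatically if max_code_size is ever raised.
    let max_code_size: u32 = 3 * 1024 * 1024;
    assert_eq!(validation_code_bomb_limit(max_code_size), 30 * 1024 * 1024);
}
```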
