Make approval-distribution aggression a bit more robust and less spammy #6696

Draft · wants to merge 1 commit into base: alexggh/fix_possible_bug

Conversation

@alexggh alexggh commented Nov 28, 2024

After finality started lagging on kusama around 2024-11-25 15:55:40, nodes started being overloaded with messages and some restarted with:

Subsystem approval-distribution-subsystem appears unresponsive when sending a message of type polkadot_node_subsystem_types::messages::ApprovalDistributionMessage. origin=polkadot_service::relay_chain_selection::SelectRelayChainInner<sc_client_db::Backend<sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<u32, sp_runtime::traits::BlakeTwo256>, sp_runtime::OpaqueExtrinsic>>, polkadot_overseer::Handle>

I think this happened because our aggression, in its current form, is way too spammy and creates problems in situations where we have already constructed blocks with a load of candidates to check, which is what happened around #25933682 before and after. However, aggression does help in the nightmare scenario where the network is segmented and sparsely connected, so I tend to think we shouldn't completely remove it.

The current configuration is:

l1_threshold: Some(16),
l2_threshold: Some(28),
resend_unfinalized_period: Some(8),
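
For orientation, a rough sketch of what these fields control (the struct shown here is only an illustration; the actual definition in approval-distribution may differ in naming and types):

```rust
/// Illustrative sketch of the aggression configuration quoted above; not the
/// exact struct from approval-distribution.
struct AggressionConfig {
    /// How many blocks the oldest unfinalized block must lag before L1 kicks in.
    l1_threshold: Option<u32>,
    /// How many blocks the oldest unfinalized block must lag before L2 kicks in.
    l2_threshold: Option<u32>,
    /// Re-send messages for a stuck block every this many blocks.
    resend_unfinalized_period: Option<u32>,
}

/// The configuration listed above.
fn current_config() -> AggressionConfig {
    AggressionConfig {
        l1_threshold: Some(16),
        l2_threshold: Some(28),
        resend_unfinalized_period: Some(8),
    }
}
```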

The way aggression works right now:

  1. After L1 is triggered, all nodes send every message they created to all the other nodes, and re-send the messages they had already sent according to the topology.
  2. Because of resend_unfinalized_period, the messages from step 1 are re-sent for each block every 8 blocks. For example, if blocks 1 to 24 are unfinalized, then at block 25 all messages for blocks 1 and 9 are resent, at block 26 all messages for blocks 2 and 10 are resent, and so on. This gets worse as more blocks are created if backing backpressure has not kicked in yet. In total, this logic means each node receives about 3 * total_number_of_messages_per_block.
  3. L2 aggression is way too spammy: when it is enabled, all nodes send all messages of a block on GridXY, which means every message is sent and received by each node at least 2*sqrt(num_validators) times, so on kusama that is 66 * NUM_MESSAGES_AT_FIRST_UNFINALIZED_BLOCK. Even with a reasonable number of messages like 10k, which you can have if you escalated because of no-shows, you end up sending and receiving ~660k messages at once; I think that's what makes approval-distribution appear unresponsive on some nodes (a back-of-the-envelope check follows this list).
  4. Duplicate messages are received by the nodes, which in turn reduce the sender's reputation and can end up banning it, which may create more no-shows.
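
As a rough back-of-the-envelope check of the L2 numbers in point 3 (this is illustrative arithmetic, not subsystem code; the validator count of 1_000 is an approximation for kusama):

```rust
/// Under L2 aggression every message of the block is pushed along both grid
/// dimensions, so each node sends/receives roughly 2 * sqrt(num_validators)
/// copies of every message of the first unfinalized block.
fn l2_messages_per_node(num_validators: u32, messages_at_first_unfinalized_block: u64) -> u64 {
    let grid_fanout = (2.0 * (num_validators as f64).sqrt()).ceil() as u64;
    grid_fanout * messages_at_first_unfinalized_block
}

fn main() {
    // With ~1_000 validators the grid fan-out is ~64, so 10_000 messages in the
    // first unfinalized block turn into roughly 640_000 sends/receives per node,
    // the same order of magnitude as the ~660k figure above.
    println!("{}", l2_messages_per_node(1_000, 10_000));
}
```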

Proposed improvements:

  1. Make L2 trigger way later, at 64 blocks instead of 28; this should literally be the last resort, and until then we should let the approval-voting escalation mechanism do its thing and cover the no-shows.
  2. On L1 aggression, don't send messages for blocks too far from the first unfinalized one; there is no point in sending the messages for block 20 if block 1 is still unfinalized.
  3. On L1 aggression, send messages, then back off for 3 * resend_unfinalized_period to give everyone time to clear up their queues (a sketch of how 2. and 3. could fit together follows this list).
  4. If aggression is enabled, accept duplicate messages from validators and don't punish them by reducing their reputation, which can end up banning the peer and creating more no-shows.
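
A minimal sketch of how points 2 and 3 could fit together; the constant names and the age window of 2 blocks are invented for illustration and are not necessarily what this PR implements:

```rust
/// From the current configuration quoted above.
const RESEND_UNFINALIZED_PERIOD: u32 = 8;
/// Hypothetical: after a resend, stay quiet for 3 resend periods (point 3).
const RESEND_BACKOFF: u32 = 3 * RESEND_UNFINALIZED_PERIOD;
/// Hypothetical: only resend for blocks within this distance of the oldest
/// unfinalized block (point 2).
const RESEND_AGE_WINDOW: u32 = 2;

/// `block_age` and `max_age` count blocks behind the current head; the oldest
/// unfinalized block has `block_age == max_age`. `blocks_since_last_resend`
/// says how long ago we last resent messages for this block.
fn should_resend_l1(block_age: u32, max_age: u32, blocks_since_last_resend: u32) -> bool {
    let close_to_oldest = max_age - block_age <= RESEND_AGE_WINDOW;
    let backed_off_long_enough = blocks_since_last_resend >= RESEND_BACKOFF;
    close_to_oldest && backed_off_long_enough
}
```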

@alexggh alexggh marked this pull request as draft November 28, 2024 15:03
@paritytech-workflow-stopper

All GitHub workflows were cancelled due to the failure of one of the required jobs.
Failed workflow url: https://github.com/paritytech/polkadot-sdk/actions/runs/12071332389
Failed job name: test-linux-stable

burdges commented Nov 28, 2024

Is this only approval votes? Or also approval assignment announcements? It's maybe both, but you do not notice the announcements much since we merged tranche zero?


I've worried that our topology sucks for ages now: we've two disjoint topologies which do not benefit from each other, a 2d grid and a random graph. I've no idea how one justifies two disjoint topologies with no synergy between them. At 1000 nodes the grid needs 31.6-1 ≈ 30 messages per layer for two layers. And the random graph gets pretty dense too.

Instead I'd propose roughly some unified slimmer topology, like..

A 3d grid, so 10-1 = 9 messages per layer for three layers, but which unifies with log2(1000) ≈ 10 extra random hops, or maybe it's ln(1000) ≈ 7, and which enforces some unified ttl slightly larger than 3.

If Alice -> Bob -> Carol is a grid path, then Carol still tries sending to all 18 of her neighbors who are not neighbors of Bob, as well as 10 random ones. Alice and Bob also send to 10 randoms. If Dave is one of Carol's neighbors, and the ttl > 4, then Dave tries sending to all 18 of his neighbors that are not neighbors of Carol, and another 10 randoms. If Rob is someone's random, then Rob sends to all 27 of his neighbors, and another 10 randoms.

We still have an aggression problem: Are we sending the message directly or asking the other side if they'd like the message? We'd ideally keep all messages small and send them unrequested.

You receive a message whenever someone among your 27 3d grid neighbors receives it, so this does not get worse vs the 2d grid. In expectation, you also receive a message 10 times from random guys too, so that's 37 incoming messages per message on the network, likely much less than currently.
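
To make the fan-out comparison concrete, a small sketch of just the arithmetic from this comment (nothing here exists in the codebase):

```rust
/// Per-layer fan-out in a d-dimensional grid of n validators: each row/column/…
/// has n^(1/d) nodes and we don't send to ourselves.
fn grid_fanout_per_layer(n: f64, dims: u32) -> f64 {
    n.powf(1.0 / dims as f64) - 1.0
}

fn main() {
    let n = 1000.0;
    // Current 2d grid: ~30.6 messages per layer, two layers.
    println!("2d: {:.1}", grid_fanout_per_layer(n, 2));
    // Proposed 3d grid: 9 messages per layer, three layers, i.e. 27 grid neighbors,
    // plus log2(1000) ≈ 10 random hops, for the ~37 expected receipts per message.
    println!("3d: {:.1} + {:.1} random", grid_fanout_per_layer(n, 3), n.log2());
}
```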

We might dampen the grid at exactly layer 3, meaning Carol sends to like 20 randoms or something, which assumes Bob was honest.

We still have a finishing problem caused by our ttl: What happens if I never get a message? We should think about how this impacts everything.

Anyways, I've discussed this some with @chenda-w3f, but it turns out the distributed-systems theory says very little about what the really optimal scheme is here. It's maybe worth discussing at the retreat.

@@ -514,6 +514,8 @@ struct BlockEntry {
vrf_story: RelayVRFStory,
/// The block slot.
slot: Slot,
/// Backing of from re-sending messages to peers.

Suggested change
/// Backing of from re-sending messages to peers.
/// Backing off from re-sending messages to peers.

// multiple times. This means it is safe to accept duplicate messages without punishing the
// peer, since reducing the reputation can end up banning the peer, which in turn will create
// more no-shows.
fn accept_duplicates_from_validators(

It would be safer to accept only a bounded amount of duplicates over time.
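
One way to bound it, sketched roughly (the limit, the tracker, and the PeerId placeholder are all made up for illustration, not taken from this PR):

```rust
use std::collections::HashMap;

/// Placeholder for the real network peer identifier type.
type PeerId = [u8; 32];

/// Hypothetical cap on tolerated duplicates per peer while aggression is enabled.
const MAX_ACCEPTED_DUPLICATES_PER_PEER: u32 = 100;

#[derive(Default)]
struct DuplicateTracker {
    seen: HashMap<PeerId, u32>,
}

impl DuplicateTracker {
    /// Returns true if this duplicate should be accepted without touching the
    /// peer's reputation; false once the peer has exceeded the allowance.
    fn accept_duplicate(&mut self, peer: PeerId) -> bool {
        let count = self.seen.entry(peer).or_insert(0);
        *count += 1;
        *count <= MAX_ACCEPTED_DUPLICATES_PER_PEER
    }
}
```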

@@ -2255,18 +2307,42 @@ impl State {
&self.topologies,
|block_entry| {
let block_age = max_age - block_entry.number;
// We want to resend only for blocks between min_age and min_age +

I believe it should be sufficient that we trigger aggression only for the oldest unfinalized block. From what I've seen in production and testing, it is usually a few unapproved candidates holding finality on a particular block.

Resending for all unfinalized blocks doesn't really make sense; it's better to make small incremental progress than to send large bursts of messages. So I would just remove the resend_unfinalized_period parameter and code.

Am I missing anything?
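
Roughly what that could look like, as a standalone sketch (names are illustrative; the diff's block_age / max_age bookkeeping would map onto the arguments here):

```rust
/// Trigger aggression resends only for the single oldest unfinalized block, and
/// only once it has been stuck past the L1 threshold; no periodic resend at all.
fn should_resend_for_block(
    block_number: u32,
    oldest_unfinalized: u32,
    blocks_behind_tip: u32,
    l1_threshold: Option<u32>,
) -> bool {
    block_number == oldest_unfinalized
        && l1_threshold.map_or(false, |threshold| blocks_behind_tip >= threshold)
}
```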

@alexggh alexggh Nov 29, 2024

Am I missing anything?

I don't think you are, I was thinking about the same thing, but I did not know the initial reason for using resend_unfinalized_period so I was trying to avoid it entirely. So yeah, I'll look into resending just for the last unfinalized block.

modify_reputation(
&mut self.reputation,
network_sender,
if !Self::accept_duplicates_from_validators(

The log above could show if the duplicate is accepted.
