Transition block lookup sync to range sync #6122
Conversation
Force-pushed from 62375ef to 4ec5594
Updated tests to assert that a syncing chain is created. Also removed down-scoring the peers for a long chain (lighthouse/beacon_node/network/src/sync/block_lookups/mod.rs, lines 294 to 299 in 4ec5594).
As a final item, we could check that when a syncing chain reaches its end, the block root matches the root it was syncing towards.
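For illustration, a minimal sketch of that final check, assuming each syncing chain records the root it was created to sync towards; the type and method names below are stand-ins, not Lighthouse's actual sync API.

```rust
// Minimal model of the suggested check: when a syncing chain reaches its end,
// the root of the final block should match the root the chain was created to
// sync towards. `SyncingChain` here is a stand-in, not Lighthouse's real type.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Hash256([u8; 32]);

struct SyncingChain {
    /// The unknown tip this chain was created to reach.
    target_root: Hash256,
    /// Slot at which the chain is considered complete.
    target_slot: u64,
}

impl SyncingChain {
    /// Check a block imported by this chain. If the chain has reached its target
    /// slot but the root differs, the peers served a different fork.
    fn check_imported_block(&self, slot: u64, root: Hash256) -> Result<(), String> {
        if slot >= self.target_slot && root != self.target_root {
            return Err(format!(
                "chain completed with root {:?}, expected {:?}",
                root, self.target_root
            ));
        }
        Ok(())
    }
}

fn main() {
    let chain = SyncingChain { target_root: Hash256([1; 32]), target_slot: 100 };
    // Reaching the end on the expected root passes the check.
    assert!(chain.check_imported_block(100, Hash256([1; 32])).is_ok());
    // Reaching the end on a different root is flagged.
    assert!(chain.check_imported_block(100, Hash256([2; 32])).is_err());
}
```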
Force-pushed from 4ec5594 to 17aadad
}
self.drop_lookup_and_children(*lookup_id);
}
None => (lookup.block_root(), lookup.peek_downloaded_block_slot()),
I ran out of time looking through this to figure out the scenario in which this occurs.
It's when we don't have any single block lookups that match the current parent tip? In this case we start a range sync based on the parent's ancestor, rather than the parent_chain_tip?
I didn't fully parse this.
Overall I like the idea!
This should never happen. I can log an error and return, but I used this fallback value for "simplicity". Happy to change it if this confuses more than it helps.
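For illustration, a sketch of the suggested alternative: logging an error and bailing out instead of falling back to a value for a case that should never happen. The types and function below are hypothetical stand-ins, not the real lookup structs.

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct Hash256([u8; 32]);

struct Lookup {
    block_root: Hash256,
    downloaded_block_slot: Option<u64>,
}

/// Find the lookup matching the tip of the parent chain. If none exists, the sync
/// state is inconsistent, so instead of guessing a (root, slot) fallback we report
/// an error and skip triggering range sync for this chain.
fn range_sync_target(
    lookups: &HashMap<Hash256, Lookup>,
    parent_chain_tip: Hash256,
) -> Option<(Hash256, Option<u64>)> {
    match lookups.get(&parent_chain_tip) {
        Some(lookup) => Some((lookup.block_root, lookup.downloaded_block_slot)),
        None => {
            // "Should never happen": every parent chain tip is expected to have a lookup.
            // Stand-in for an error log.
            eprintln!(
                "error: no lookup matches parent chain tip {:?}; not starting range sync",
                parent_chain_tip
            );
            None
        }
    }
}

fn main() {
    // With no matching lookup we get None plus an error log, rather than a guessed target.
    let lookups: HashMap<Hash256, Lookup> = HashMap::new();
    assert_eq!(range_sync_target(&lookups, Hash256([0; 32])), None);
}
```

Returning `None` here surfaces the inconsistent state instead of silently starting a range sync from a guessed target, which is the trade-off being discussed.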
Oh yeah, I think that might be helpful. This way we can catch any wild states rather than doing a range sync. Also (at least for me) it will make it easier to follow.
Yeah I think we should test this out. Logic seems reasonable to me.
LGTM
@mergify queue
🛑 The pull request has been removed from the queue
@mergify refresh
✅ Pull request refreshed
@mergify queue
✅ The pull request has been merged automatically at 71c5388
Issue Addressed
For context of the issue 👇
Closes #6099
Proposed Changes
This solution links range sync and lookup sync by transitioning a chain of lookups that grows too long into a syncing chain in range sync, downloading blocks_by_range from the lookup peers. In the future we may use BeaconBlocksByRange v3 (ethereum/consensus-specs#3845) to be sure which fork we are syncing from.

I believe this change is the minimal solution to cover the corner case of #6099. It's not maximally efficient, as it will download a section of blocks twice. However, this situation is rare, and we just need some way to recover, even if it's at the expense of some extra bandwidth.
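As a rough model of the transition described above, here is a self-contained sketch assuming the trigger is a chain of lookups exceeding some length threshold; the threshold, type names, and fields are illustrative, not the actual Lighthouse implementation.

```rust
// Illustrative model of the lookup-sync to range-sync transition described above.
// The constant and type names are assumptions for the sketch, not Lighthouse's real API.

const PARENT_CHAIN_LENGTH_THRESHOLD: usize = 32; // hypothetical trigger length

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Hash256([u8; 32]);
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct PeerId(u64);

/// A chain of parent lookups anchored at an unknown tip.
struct ParentChain {
    tip_root: Hash256,
    ancestors: Vec<Hash256>,
    peers: Vec<PeerId>,
}

/// Request handed to range sync: download blocks_by_range from the lookup peers
/// up to the unknown tip.
#[derive(Debug)]
struct RangeSyncRequest {
    target_root: Hash256,
    peers: Vec<PeerId>,
}

/// If a chain of lookups grows too long, stop extending it block by block and
/// hand it over to range sync instead (accepting that a section of blocks may
/// be downloaded twice, as noted in the description).
fn maybe_transition_to_range_sync(chain: &ParentChain) -> Option<RangeSyncRequest> {
    if chain.ancestors.len() >= PARENT_CHAIN_LENGTH_THRESHOLD {
        Some(RangeSyncRequest {
            target_root: chain.tip_root,
            peers: chain.peers.clone(),
        })
    } else {
        None
    }
}

fn main() {
    let chain = ParentChain {
        tip_root: Hash256([7; 32]),
        ancestors: vec![Hash256([0; 32]); 40],
        peers: vec![PeerId(1), PeerId(2)],
    };
    // A 40-block parent chain exceeds the (hypothetical) threshold, so range sync takes over.
    match maybe_transition_to_range_sync(&chain) {
        Some(req) => println!("transition to range sync: {:?}", req),
        None => println!("keep extending the lookup chain"),
    }
}
```

As noted above, handing the whole span to range sync may re-download blocks that lookup sync already fetched, which is accepted as a rare-case cost.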
Testing
There are plans to work with EF devops to test nasty long non-finality situations, but not right now. For now I can try to write representative unit sync tests with some sample scenarios. This change should not impact healthy network code paths, so it should have no downsides. Range sync will only be triggered when we need to discover a long unknown fork, which has only happened to Lighthouse on cooked testnets due to lookup sync bugs.