feat: add chain sync protocol for block history synchronization #84
0u-Y wants to merge 1 commit into StabilityNexus:main from
Conversation
Walkthrough
Implemented a three-message P2P protocol (`status`, `get_blocks`, `blocks`) for syncing block history to newly joining nodes.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant NodeA as Node A<br/>(height=5)
    participant NodeB as Node B<br/>(height=0, new)
    NodeA->>NodeB: status(height=5)
    NodeB->>NodeA: status(height=0)
    Note over NodeB: NodeB detects height gap,<br/>requests blocks 1→5
    NodeB->>NodeA: get_blocks(from=1, to=5)
    NodeA->>NodeA: fetch_blocks_range(1, 5)
    NodeA->>NodeB: blocks([B₁, B₂, B₃, B₄, B₅])
    NodeB->>NodeB: validate_linkage() &<br/>add_blocks_bulk()
    NodeB->>NodeB: height=5
    Note over NodeA,NodeB: Both nodes now<br/>synchronized
```
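The exchange above can be sketched as newline-delimited JSON envelopes. The field names (`height`, `from_height`, `to_height`, `blocks`) match the diff excerpts in this review; the helper functions themselves are illustrative only, not code from the PR:

```python
import json

def status_msg(height: int) -> str:
    # Announce our current chain height to a newly connected peer.
    return json.dumps({"type": "status", "data": {"height": height}}) + "\n"

def get_blocks_msg(from_height: int, to_height: int) -> str:
    # Ask a taller peer for the blocks we are missing.
    return json.dumps({
        "type": "get_blocks",
        "data": {"from_height": from_height, "to_height": to_height},
    }) + "\n"

def blocks_msg(blocks: list) -> str:
    # Reply with the requested serialized blocks.
    return json.dumps({"type": "blocks", "data": {"blocks": blocks}}) + "\n"

# Node B (height 0) hearing status(height=5) requests blocks 1..5:
request = json.loads(get_blocks_msg(1, 5))
print(request["data"])  # {'from_height': 1, 'to_height': 5}
```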
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@main.py`:
- Around line 372-378: Capture the chain tip once before sending any data so the
state snapshot and the status message use the same height: read chain.height
(and the sync snapshot value used earlier) into local variables (e.g.,
snapshot_height and status_height) before calling writer.write or awaiting
writer.drain, build the status_msg using that captured height instead of
accessing chain.height again, and then write/drain and log as before; update
references to status_msg, writer.write, writer.drain, and any earlier sync
variable to use these captured variables so both payloads reflect the same tip.
- Around line 159-201: The issue: importing historical blocks via the "blocks"
message calls chain.add_blocks_bulk(received) which does not replay
transactions, so local state can diverge unless the node's state already matches
the peer tip; update the code and chain logic so bulk imports produce a correct
state: either (A) make the initial sync path (the code path that calls sync and
merges accounts) perform an atomic state replace when accepting a peer snapshot
(ensure sync returns a flag and you call a state_replace() /
chain.replace_state_atomic(...) before applying blocks), or (B) change
add_blocks_bulk (and/or introduce add_blocks_and_replay or
rebuild_state_from_history) to rebuild state by replaying transactions from the
earliest imported block (i.e., derive balances/nonces by applying each block's
transactions rather than only appending block headers). Locate references to
add_blocks_bulk, sync, chain.get_blocks_range, and the "blocks" message handler
and implement one of these fixes so historical imports do not rely on
preexisting local state.
In `@minichain/chain.py`:
- Around line 100-113: The add_blocks_bulk function must be atomic: first
convert each block_dict to Block (using Block.from_dict) and validate linkage
sequentially against a temporary tip via validate_block_link_and_hash without
mutating self.chain or self.last_block; if any validation fails return (False,
0). Only after the entire batch validates, acquire self._lock and append all
validated Block instances to self.chain (updating added appropriately) so the
extension is done under the lock as one operation; reference add_blocks_bulk,
Block.from_dict, validate_block_link_and_hash, self.last_block, self.chain and
self._lock when making the change.
In `@minichain/p2p.py`:
- Around line 231-239: The _validate_blocks_payload currently only checks for
dicts but lets malformed block dicts (e.g. {"foo":1}) through; update
_validate_blocks_payload to iterate payload["blocks"] and for each entry ensure
it's a dict and contains the required keys expected by Block.from_dict (e.g. the
canonical block field names used by Block.from_dict or a Block.REQUIRED_FIELDS
constant if present) and basic types where applicable, returning False if any
block is missing keys or has wrong types; reference _validate_blocks_payload and
Block.from_dict when making the check so the P2P boundary rejects malformed
block entries before attempting to construct Block instances.
📒 Files selected for processing (3)
- main.py
- minichain/chain.py
- minichain/p2p.py
```python
elif msg_type == "status":
    import json as _json
    peer_height = payload["height"]
    my_height = chain.height

    if peer_height > my_height:
        writer = data.get("_writer")
        if writer:
            request = _json.dumps({
                "type": "get_blocks",
                "data": {
                    "from_height": my_height + 1,
                    "to_height": peer_height
                }
            }) + "\n"
            writer.write(request.encode())
            await writer.drain()
            logger.info("📡 Requesting blocks %d~%d from %s",
                        my_height + 1, peer_height, peer_addr)

elif msg_type == "get_blocks":
    import json as _json
    from_h = payload["from_height"]
    to_h = payload["to_height"]
    blocks = chain.get_blocks_range(from_h, to_h)

    writer = data.get("_writer")
    if writer and blocks:
        response = _json.dumps({
            "type": "blocks",
            "data": {"blocks": blocks}
        }) + "\n"
        writer.write(response.encode())
        await writer.drain()
        logger.info("📤 Sent %d blocks to %s", len(blocks), peer_addr)

elif msg_type == "blocks":
    received = payload["blocks"]
    success, count = chain.add_blocks_bulk(received)

    if success:
        logger.info("✅ Chain synced: added %d blocks", count)
    else:
        logger.warning("❌ Chain sync failed after %d blocks", count)
```
History import assumes a state snapshot invariant that sync does not guarantee.
add_blocks_bulk() never replays transactions, so this path is only safe when the local state already matches the peer tip. But the sync branch above can reject the snapshot on Lines 120-122 and, when accepted, only merges missing accounts on Lines 130-136. The default --fund path already creates local state before connect, so a node can append historical blocks and still keep non-canonical balances/nonces. Either make initial sync replace local state atomically, or make bulk history import rebuild state instead of relying on sync.
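A minimal sketch of option (B): derive balances and nonces by replaying each imported block's transactions rather than trusting pre-existing local state. The transaction field names (`sender`, `recipient`, `amount`) are assumptions for illustration, not MiniChain's actual schema:

```python
def rebuild_state_from_history(blocks: list) -> tuple:
    """Recompute balances and nonces from scratch by replaying history."""
    balances: dict = {}
    nonces: dict = {}
    for block in blocks:
        for tx in block.get("transactions", []):
            # Field names are hypothetical; align with MiniChain's tx schema.
            sender, recipient = tx["sender"], tx["recipient"]
            balances[sender] = balances.get(sender, 0) - tx["amount"]
            balances[recipient] = balances.get(recipient, 0) + tx["amount"]
            nonces[sender] = nonces.get(sender, 0) + 1
    return balances, nonces

history = [
    {"index": 1, "transactions": [{"sender": "alice", "recipient": "bob", "amount": 3}]},
    {"index": 2, "transactions": [{"sender": "bob", "recipient": "carol", "amount": 1}]},
]
balances, nonces = rebuild_state_from_history(history)
print(balances)  # {'alice': -3, 'bob': 2, 'carol': 1}
```

Because the state is derived entirely from the imported blocks, the result no longer depends on whatever local state the `--fund` path created before connecting.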
```python
status_msg = _json.dumps({
    "type": "status",
    "data": {"height": chain.height}
}) + "\n"
writer.write(status_msg.encode())
await writer.drain()
logger.info("🔄 Sent state sync and status to new peer")
```
Capture sync and status from the same tip.
This status payload is built after the earlier await writer.drain(). If the local chain advances during that gap, the peer can receive accounts from height H and a status for H+1, then request one more block than the state snapshot represents. Snapshot both values before the first await.
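A sketch of the suggested pattern, reading the tip exactly once before the first await. `FakeChain` and the `write` callback are stand-ins for the objects in `main.py`, not the real implementation:

```python
import asyncio
import json

class FakeChain:
    def __init__(self, height: int):
        self.height = height

async def send_status(chain: FakeChain, write) -> int:
    # Capture the tip once; every payload built after an await reuses
    # the captured value, so a concurrently mined block cannot make the
    # state snapshot and the status message disagree.
    snapshot_height = chain.height
    status_msg = json.dumps({"type": "status",
                             "data": {"height": snapshot_height}}) + "\n"
    write(status_msg.encode())
    await asyncio.sleep(0)  # stands in for writer.drain()
    return snapshot_height

sent = []
height = asyncio.run(send_status(FakeChain(5), sent.append))
print(height)  # 5
```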
```python
def add_blocks_bulk(self, block_dicts: list) -> tuple:
    """Add blocks validating chain linkage only. State relies on sync message."""
    added = 0
    for block_dict in block_dicts:
        block = Block.from_dict(block_dict)
        with self._lock:
            try:
                validate_block_link_and_hash(self.last_block, block)
            except ValueError as exc:
                logger.warning("Block %s rejected: %s", block.index, exc)
                return False, added
            self.chain.append(block)
            added += 1
    return True, added
```
Make add_blocks_bulk() all-or-nothing.
A failure on a later element returns (False, added) after the earlier blocks were already appended. That leaves the node at an intermediate height with no rollback path, and any retry now starts from the wrong tip. Validate the full batch against a temporary tip first, then extend self.chain once.
🔧 Proposed fix

```diff
 def add_blocks_bulk(self, block_dicts: list) -> tuple:
     """Add blocks validating chain linkage only. State relies on sync message."""
-    added = 0
-    for block_dict in block_dicts:
-        block = Block.from_dict(block_dict)
-        with self._lock:
-            try:
-                validate_block_link_and_hash(self.last_block, block)
-            except ValueError as exc:
-                logger.warning("Block %s rejected: %s", block.index, exc)
-                return False, added
-            self.chain.append(block)
-            added += 1
-    return True, added
+    with self._lock:
+        tip = self.chain[-1]
+        new_blocks = []
+
+        for block_dict in block_dicts:
+            block = Block.from_dict(block_dict)
+            try:
+                validate_block_link_and_hash(tip, block)
+            except ValueError as exc:
+                logger.warning("Block %s rejected: %s", block.index, exc)
+                return False, 0
+            new_blocks.append(block)
+            tip = block
+
+        self.chain.extend(new_blocks)
+        return True, len(new_blocks)
```
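The all-or-nothing contract can be exercised with a minimal stand-in class. `MiniChainStub` and its hash-linkage check are illustrative, not minichain's real `Chain` or `validate_block_link_and_hash`:

```python
class MiniChainStub:
    """Stand-in mimicking the all-or-nothing contract of add_blocks_bulk."""
    def __init__(self):
        self.chain = [{"index": 0, "hash": "genesis"}]

    def add_blocks_bulk(self, block_dicts):
        tip = self.chain[-1]
        new_blocks = []
        for bd in block_dicts:
            # Linkage check: each block must reference the running tip.
            if bd.get("previous_hash") != tip["hash"]:
                return False, 0            # reject the whole batch
            new_blocks.append(bd)
            tip = bd
        self.chain.extend(new_blocks)      # extend only after full validation
        return True, len(new_blocks)

c = MiniChainStub()
ok, n = c.add_blocks_bulk([
    {"index": 1, "hash": "h1", "previous_hash": "genesis"},
    {"index": 2, "hash": "h2", "previous_hash": "WRONG"},
])
print(ok, n, len(c.chain))  # False 0 1  (chain untouched on failure)
```

A failed batch leaves the tip where it was, so a retry after the peer resends starts from the correct height.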
```python
def _validate_blocks_payload(self, payload):
    if not isinstance(payload, dict):
        return False
    if set(payload) != {"blocks"}:
        return False
    blocks = payload.get("blocks")
    if not isinstance(blocks, list):
        return False
    return all(isinstance(b, dict) for b in blocks)
```
Validate block entries inside blocks batches.
This currently accepts any list[dict]. A payload like {"blocks": [{"foo": 1}]} passes here and then raises later when Block.from_dict() indexes required keys. Reject malformed block dicts at the P2P boundary instead.
🔧 Proposed fix

```diff
 def _validate_blocks_payload(self, payload):
     if not isinstance(payload, dict):
         return False
     if set(payload) != {"blocks"}:
         return False
     blocks = payload.get("blocks")
     if not isinstance(blocks, list):
         return False
-    return all(isinstance(b, dict) for b in blocks)
+    return all(self._validate_block_payload(block) for block in blocks)
```

🧰 Tools
🪛 Ruff (0.15.7)
[warning] 231-231: Missing return type annotation for private function _validate_blocks_payload (ANN202)
Addressed Issues:
Fixes #83
Screenshots/Recordings:
Before (without Chain Sync):
New node joins after Block #1 is mined — only the genesis block is synced.

After (with Chain Sync):
New node receives full block history on connection:
Log output from new node:
Additional Notes:
What this PR does:
Adds a minimal 3-message handshake protocol for block history synchronization when new nodes join the network. Inspired by Bitcoin's Blocks-First IBD, simplified for MiniChain's minimal philosophy.
- `status` — share current chain height on peer connection
- `get_blocks` — request a range of blocks by height
- `blocks` — respond with the requested serialized blocks

Design decisions:
- Bitcoin uses getblocks → inv → getdata → block (4 messages) for hash-based fork detection and inventory filtering. MiniChain's linear chain makes height-based range requests sufficient, reducing the protocol to 3 messages.
- Bulk import validates chain linkage (validate_block_link_and_hash()) only. Account state relies on the existing sync message to avoid double-applying transactions.

Files changed:
- minichain/chain.py — added height property, get_blocks_range(), add_blocks_bulk()
- minichain/p2p.py — added status, get_blocks, blocks message types with validators
- main.py — added status exchange in on_peer_connected(), added message handling in handler()

Known limitation:
validate_block_link_and_hash().

Tested scenarios:
AI Usage Disclosure:
Check one of the checkboxes below:
I have used the following AI models and tools: Claude (Anthropic) — used for design discussion and code review. All code was reviewed, tested, and verified locally before submission.
Checklist