Deal with case where data becomes available after time out #321
Comments
Yes, but (a) it will retry, and (b) if it is triggered to look for a new height, it will of course revisit past heights that did not pass validation. In the case of DAS, this means the current light client would also recheck availability.
As mentioned above, if the light client is requested to update to a new height, it will check from the last height that passed all validation (incl. availability) up to the latest height. It does not matter whether the heights in between previously failed or not. The current implementation always checks for consensus first, and then for availability. Even if we switched to a pure p2p light client instead of the current RPC / proxy one, this behavior is baked into the verification logic. Does this mean we can close this issue? Or is there more to it?
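The update behavior described above can be sketched roughly as follows. This is an illustrative Go sketch, not the actual lazyledger-core API: the types and the `verifyHeader` / `sampleAvailability` placeholders are hypothetical, and stand in for the real consensus verification and DAS steps. The point is the order (consensus first, then availability) and the fact that the loop restarts from the last height that passed *all* checks, so previously failed heights are simply revisited.

```go
package main

import (
	"fmt"
)

// Header is a minimal stand-in for a block header (hypothetical).
type Header struct {
	Height int64
}

// verifyHeader checks consensus validity of the header (placeholder:
// always succeeds here; the real check verifies commits, validator
// sets, etc.).
func verifyHeader(h Header) error { return nil }

// sampleAvailability performs data availability sampling on the block
// data (placeholder: always succeeds here; the real check can fail on
// a sampling timeout).
func sampleAvailability(h Header) error { return nil }

// updateToHeight re-verifies every height from the last height that
// passed all validation (including availability) up to target. It
// returns the highest height that passed, and an error if any height
// in between fails either check.
func updateToHeight(lastTrusted, target int64) (int64, error) {
	for h := lastTrusted + 1; h <= target; h++ {
		hdr := Header{Height: h}
		// Consensus is always checked first...
		if err := verifyHeader(hdr); err != nil {
			return h - 1, fmt.Errorf("height %d: %w", h, err)
		}
		// ...and availability second.
		if err := sampleAvailability(hdr); err != nil {
			return h - 1, fmt.Errorf("height %d: %w", h, err)
		}
	}
	return target, nil
}

func main() {
	trusted, err := updateToHeight(10, 15)
	fmt.Println(trusted, err)
}
```

A height that previously failed DAS is not remembered as permanently bad: the next `updateToHeight` call simply samples it again, which is why a temporary timeout does not wedge the light client.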
What about Tendermint nodes? Do they also have this behaviour?
Need to double-check. Updated the opening comment with actionable tasks. It's worth noting that we don't yet have integrated sampling in Tendermint full nodes. So I assume the question is: what happens if a full node rejects a block at height h but receives an otherwise valid block at height h+d? It would help if we had clarity on what a lazyledger full node actually is. Currently, in the specs:
Putting my thoughts down:
Yeah, I'm pretty sure that when @musalbas was asking if Tendermint full nodes have the same behaviour as I described for the light clients, he had partial nodes in mind. The wording is a bit confusing here. My understanding of how we want to change Tendermint full nodes is described here: #384 (reply in thread)
I'm closing this issue: for lazyledger-core no changes are necessary. We might have to revisit this in the context of light clients and partial nodes, but to me it seems clear how to proceed from celestiaorg/celestia-specs#179
In the current implementation, the data availability check fails if the data sampling timeout is exceeded.
We need to deal with the case where (i) the block data becomes available after the timeout and (ii) the block is part of the canonical chain that has consensus. In this case, the block should be treated as available, otherwise we will end up with a chain split with newly bootstrapped clients who see the block as available.
Perhaps one way of dealing with this: if a new block is received that is part of the canonical chain that has consensus, but the previous block was seen as unavailable, the node should re-validate the data availability of the previous block as a pre-condition to accepting that new block.
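The proposed handling could be sketched as below. This is a hypothetical Go sketch under the stated assumptions, not the real node implementation: the `node` type, its `available` map, and the `sample` helper are all illustrative. It shows the precondition: before accepting a canonical child block, a parent whose sampling previously timed out is re-sampled, so data that became available after the timeout no longer causes a chain split.

```go
package main

import "fmt"

// Block is a minimal stand-in for a canonical-chain block (hypothetical).
type Block struct {
	Height int64
}

// node tracks per-height DAS results (hypothetical structure).
type node struct {
	available map[int64]bool
}

// sample performs data availability sampling for a height. Here it
// always succeeds, modeling data that has become available after the
// original sampling timeout.
func (n *node) sample(height int64) bool {
	n.available[height] = true
	return true
}

// acceptBlock accepts a block that is part of the canonical chain with
// consensus. If the parent was previously seen as unavailable (e.g. its
// sampling timed out), its availability is re-validated first, as a
// pre-condition to accepting the new block.
func (n *node) acceptBlock(b Block) bool {
	parent := b.Height - 1
	if !n.available[parent] {
		// Parent previously failed DAS: retry before accepting the child.
		if !n.sample(parent) {
			return false
		}
	}
	return n.sample(b.Height)
}

func main() {
	n := &node{available: map[int64]bool{}}
	// Height 41 timed out earlier, so it is not marked available;
	// accepting height 42 forces a re-check of height 41 first.
	fmt.Println(n.acceptBlock(Block{Height: 42}), n.available[41])
}
```

A design note: making the parent re-check a precondition (rather than a background retry) means a newly bootstrapped client and a long-running client converge on the same availability verdict for the disputed height, which is exactly the chain-split scenario the issue is about.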
Action Items