feat: add support for Blobstream API #3470
base: main
Conversation
github.com/celestiaorg/go-fraud v0.2.1
github.com/celestiaorg/go-header v0.6.1
github.com/celestiaorg/go-libp2p-messenger v0.2.0
github.com/celestiaorg/nmt v0.21.0
github.com/celestiaorg/nmt v0.21.1-0.20240602221058-a81b748b6f51 // TODO replace with release when ready
will be replaced once we release nmt
}

// ProveShares generates a share proof for a share range.
// Note: queries the whole EDS to generate the proof.
should I create an issue for this?
nodebuilder/blobstream/service.go
var subtreeRoots [][]byte
var dataCursor int
for _, proof := range nmtProofs {
	// TODO: do we want to use the default subtree root threshold directly, or allow specifying which version to use?
[note to reviewers]
Do we want to use the default subtree root threshold directly, or allow specifying which version to use?
nodebuilder/blobstream/service.go
// Note: this method is not part of the API. It will not be served by any endpoint, however,
// it can be called directly programmatically.
func ProveSubtreeRootToCommitment(subtreeRoots [][]byte, subtreeRootIndex uint64) (*ResultSubtreeRootToCommitmentProof, error) {
	_, proofs := merkle.ProofsFromByteSlices(subtreeRoots)
	return &ResultSubtreeRootToCommitmentProof{
		SubtreeRootToCommitmentProof: SubtreeRootToCommitmentProof{
			Proof: *proofs[subtreeRootIndex],
		},
	}, nil
}

// ProveShareToSubtreeRoot generates a share to subtree root inclusion proof.
// Note: this method is not part of the API. It will not be served by any endpoint, however,
// it can be called directly programmatically.
func ProveShareToSubtreeRoot(shares [][]byte, shareIndex uint64) (*ResultShareToSubtreeRootProof, error) {
	_, proofs := merkle.ProofsFromByteSlices(shares)
	return &ResultShareToSubtreeRootProof{
		ShareToSubtreeRootProof: ShareToSubtreeRootProof{
			Proof: *proofs[shareIndex],
		},
	}, nil
}
[note to reviewers]
These two methods are not used in the API, but I see them as useful for people using the API. Should we keep them here? Move them somewhere special? Create a utils.go or helpers.go and put them there?
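For context on what these helpers compute: merkle.ProofsFromByteSlices builds an RFC 6962-style binary Merkle tree (leaf hashes prefixed with 0x00, inner nodes with 0x01, left subtree sized to the largest power of two below the leaf count) and returns one inclusion proof per leaf. A self-contained sketch of that construction and its verification, with illustrative function names rather than the real merkle package API:

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// leafHash hashes a leaf with the RFC 6962 0x00 domain-separation prefix.
func leafHash(leaf []byte) []byte {
	h := sha256.Sum256(append([]byte{0x00}, leaf...))
	return h[:]
}

// innerHash hashes two children with the 0x01 prefix.
func innerHash(left, right []byte) []byte {
	h := sha256.Sum256(append(append([]byte{0x01}, left...), right...))
	return h[:]
}

// splitPoint returns the largest power of two strictly less than n (n > 1).
func splitPoint(n int) int {
	k := 1
	for k*2 < n {
		k *= 2
	}
	return k
}

// rootAndProof returns the tree root over leaves and the sibling hashes
// ("aunts", ordered leaf-to-root) proving inclusion of leaves[idx].
func rootAndProof(leaves [][]byte, idx int) ([]byte, [][]byte) {
	if len(leaves) == 1 {
		return leafHash(leaves[0]), nil
	}
	k := splitPoint(len(leaves))
	if idx < k {
		left, proof := rootAndProof(leaves[:k], idx)
		right, _ := rootAndProof(leaves[k:], 0)
		return innerHash(left, right), append(proof, right)
	}
	right, proof := rootAndProof(leaves[k:], idx-k)
	left, _ := rootAndProof(leaves[:k], 0)
	return innerHash(left, right), append(proof, left)
}

// verify recomputes the root from one leaf and its aunts and compares.
func verify(leaf []byte, idx, total int, aunts [][]byte, root []byte) bool {
	return bytes.Equal(recompute(idx, total, leafHash(leaf), aunts), root)
}

func recompute(idx, total int, h []byte, aunts [][]byte) []byte {
	if total == 1 {
		return h
	}
	k := splitPoint(total)
	last, rest := aunts[len(aunts)-1], aunts[:len(aunts)-1]
	if idx < k {
		return innerHash(recompute(idx, k, h, rest), last)
	}
	return innerHash(last, recompute(idx-k, total-k, h, rest))
}

func main() {
	shares := [][]byte{[]byte("s0"), []byte("s1"), []byte("s2"), []byte("s3"), []byte("s4")}
	root, aunts := rootAndProof(shares, 3)
	fmt.Println(verify(shares[3], 3, len(shares), aunts, root)) // true
}
```

The real helpers return a merkle.Proof carrying these aunts plus the index and total, which is why indexing proofs[shareIndex] directly works: ProofsFromByteSlices produces one such proof per input slice.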
		})
	}
}
Instead of using the generated mock, I decided to implement a custom mock here so that I can test a lot of combinations without having to worry about the mocked p2p network, etc. If you check the TestProveCommitmentAllCombinations test, you will see that it's kind of fuzzing tons of combinations.
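An in-memory mock plus nested loops over the parameters is a common Go pattern for this kind of exhaustive combination test. A generic sketch of the style (all names hypothetical, not the PR's actual test):

```go
package main

import "fmt"

// blobSharesUsed is a stand-in for a property under test: how many shares a
// blob starting at startIndex occupies, clamped to the square's share count.
func blobSharesUsed(squareSize, blobSize, startIndex int) int {
	total := squareSize * squareSize
	if startIndex+blobSize > total {
		return total - startIndex
	}
	return blobSize
}

func main() {
	// Enumerate every (squareSize, blobSize, startIndex) combination and
	// check an invariant for each one, instead of hand-picking cases.
	failures := 0
	for _, squareSize := range []int{2, 4, 8, 16} {
		total := squareSize * squareSize
		for blobSize := 1; blobSize <= total; blobSize++ {
			for startIndex := 0; startIndex < total; startIndex++ {
				got := blobSharesUsed(squareSize, blobSize, startIndex)
				// invariant: positive, never exceeds the blob or the square
				if got <= 0 || got > blobSize || startIndex+got > total {
					failures++
				}
			}
		}
	}
	fmt.Println(failures) // 0
}
```

The payoff is coverage of edge cases (blobs crossing row boundaries, blobs ending exactly at the square edge) that a handful of fixed test vectors would miss.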
# Conflicts:
#	go.mod
#	go.sum
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@            Coverage Diff             @@
##             main    #3470      +/-   ##
==========================================
+ Coverage   44.83%   45.30%   +0.47%
==========================================
  Files         265      278      +13
  Lines       14620    15851    +1231
==========================================
+ Hits         6555     7182     +627
- Misses       7313     7828     +515
- Partials      752      841      +89

View full report in Codecov by Sentry.
The nodebuilder folder is used only for module creation. Maybe you should add a blobstream package and move all the logic there?
@renaynay raised a good point I agree with: choosing a more generic name for this module and API, something other than "blobstream", might be better. Although it is initially for the purpose of supporting Blobstream, it can and should serve a generic purpose. Any suggestions for better names @renaynay @adlerjohn @liamsi? @vgonkivs also mentioned potentially moving it into the blob module itself, I believe.
Yeah, I also raised this somewhere else. So ideally:
Looks great so far, I really appreciate the extensive debug logging and explicit error handling everywhere.
My comments are all about the representation of the responses, and an alternative, more Go-idiomatic/node-native way to structure the types in the package.
Please also add example types in api/docgen/examples.go
// ResultDataCommitment is the API response containing a data
// commitment, aka data root tuple root.
type ResultDataCommitment struct {
	DataCommitment bytes.HexBytes `json:"data_commitment"`
}

// ResultDataRootInclusionProof is the API response containing the binary merkle
// inclusion proof of a height to a data commitment.
type ResultDataRootInclusionProof struct {
	Proof merkle.Proof `json:"proof"`
}

// ResultShareProof is the API response that contains a ShareProof.
// A share proof is a proof of a set of shares to the data root.
type ResultShareProof struct {
	ShareProof types.ShareProof `json:"share_proof"`
}

// ResultCommitmentProof is an API response that contains a CommitmentProof.
// A commitment proof is a proof of a blob share commitment to the data root.
type ResultCommitmentProof struct {
	CommitmentProof CommitmentProof `json:"commitment_proof"`
}
I don't think we should wrap these types in a Result struct if there are no methods defined on them.
What does make sense is to define type aliases, so the API return types are clear, and docs can be kept. Future methods on these types will also be cleaner if the aliases are already defined.
So instead of ResultDataCommitment, it would be something like
// DataCommitment is the API response containing a data
// commitment, aka data root tuple root.
type DataCommitment bytes.HexBytes
And the response will be read out directly from result instead of by accessing an inner JSON field
I am wrapping the types so that if we decide in the future to add new fields to the struct, we can do it in a backwards-compatible way. Also, this is the way it is done in core: https://github.com/celestiaorg/celestia-core/blob/ff2bff4a29aec6851499e34b7d84349d8e774dd9/rpc/core/types/responses.go#L52-L55
Do you still prefer that I unwrap them?
Hlib and I discussed this shortly, and we decided we would still prefer them to be unwrapped; we think backwards compatibility is likely overkill here. Or do you have a good example of something that may be added?
My thinking here is that these are pretty much low-level types that are processed directly in the smart contracts (as far as I imagine; I haven't read them), and that they won't be doing any parsing of extra additional data.
Agreed, but I guess the same reasoning led to the blob proof being []*nmt.Proof, and it ended up needing a breaking change to make it contain a proof to the data root.
From my experience, it's always good to separate the protocol types from the API ones. But I am fine with either. Let me know if you prefer them unwrapped and I'll do it.
I think the consensus is to unwrap, please @rach-id. @distractedm1nd and @Wondertan, do you agree?
cool, I'll unwrap as soon as I get confirmation.
// DataRootTuple contains the data that will be used to create the QGB commitments.
// The commitments will be signed by orchestrators and submitted to an EVM chain via a relayer.
// For more information: https://github.com/celestiaorg/quantum-gravity-bridge/blob/master/src/DataRootTuple.sol
type DataRootTuple struct {
Related to my other comments, wdyt about returning DataRootTuple directly, and then implementing:
- MarshalBinary
- UnmarshalBinary
- MarshalJSON (just outputs the base64 of the MarshalBinary output, for example)
- UnmarshalJSON

Then you won't need EncodeDataRootTuple, I believe? And marshalling for the RPC output is handled automatically by fulfilling the interface.
Ah I see, I misinterpreted the original godoc of ResultDataCommitment. So GetDataCommitment should only return the datahash part.
Fulfilling the encoding/binary interface is still probably a more idiomatic way to do this, though.
What does this mean? What needs to be changed?
This means implementing MarshalBinary and UnmarshalBinary on DataRootTuple to fulfill the encoding.BinaryMarshaler and encoding.BinaryUnmarshaler interfaces, to replace EncodeDataRootTuple. I can gladly help out here or send a diff if it helps!
But we only need to encode it. Do you mean fulfilling the interface and making the unmarshalling a no-op?
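For illustration, the tuple encoding being discussed packs two 32-byte words ABI-style, matching DataRootTuple.sol: the height big-endian and left-padded to 32 bytes, followed by the 32-byte data root. A minimal sketch of what fulfilling both interfaces could look like (field names and a uint64 height are assumptions made for this example):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// DataRootTuple mirrors DataRootTuple.sol: a block height and that
// block's 32-byte data root. Field names here are assumptions.
type DataRootTuple struct {
	Height   uint64
	DataRoot [32]byte
}

// MarshalBinary packs the tuple ABI-style into two 32-byte words:
// the height big-endian, left-padded to 32 bytes, then the data root.
func (t DataRootTuple) MarshalBinary() ([]byte, error) {
	out := make([]byte, 64)
	binary.BigEndian.PutUint64(out[24:32], t.Height)
	copy(out[32:], t.DataRoot[:])
	return out, nil
}

// UnmarshalBinary reverses MarshalBinary, so round-tripping is possible
// even if the node itself only ever needs the encode direction.
func (t *DataRootTuple) UnmarshalBinary(data []byte) error {
	if len(data) != 64 {
		return fmt.Errorf("expected 64 bytes, got %d", len(data))
	}
	t.Height = binary.BigEndian.Uint64(data[24:32])
	copy(t.DataRoot[:], data[32:])
	return nil
}

func main() {
	tuple := DataRootTuple{Height: 42}
	tuple.DataRoot[31] = 0x01
	enc, _ := tuple.MarshalBinary()
	fmt.Println(len(enc)) // 64

	var back DataRootTuple
	_ = back.UnmarshalBinary(enc)
	fmt.Println(back.Height == 42 && back.DataRoot == tuple.DataRoot) // true
}
```

Since decoding falls out of the same layout for free, implementing UnmarshalBinary as a real inverse (rather than a no-op) costs a few lines and keeps the type symmetric.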
@insiderbyte where do you mean I should put the blobstream package? In the root?
@rach-id so there is a general desire to follow @liamsi 's comment about decomposing this, such that instead of "blobstream" being a whole module, the new API functionality lives in each of its relative existing modules, with some functionality moved to another package (like the "blocks" one that he also suggested). There was a promise that everyone would chime in on reviews and suggestions of where to place things, I'll chase that; but we can work on it together too if you want to suggest a diff layout.
@ramin Got you 👍 I was under the impression that this PR should be merged, and then we do the refactor. However, if we want to do it here, I'm fine with that. The proposed structure would be:
If everyone agrees, I can proceed with the refactor.
I like header better than block.
Agreed with the proposed structure.
Closes #3361