
feat: add support for Blobstream API #3470

Open · wants to merge 26 commits into main
Conversation


@rach-id rach-id commented Jun 5, 2024

Closes #3361

@rach-id rach-id requested a review from evan-forbes June 5, 2024 11:17
@github-actions github-actions bot added the external Issues created by non node team members label Jun 5, 2024
@rach-id rach-id marked this pull request as draft June 5, 2024 11:17
github.com/celestiaorg/go-fraud v0.2.1
github.com/celestiaorg/go-header v0.6.1
github.com/celestiaorg/go-libp2p-messenger v0.2.0
- github.com/celestiaorg/nmt v0.21.0
+ github.com/celestiaorg/nmt v0.21.1-0.20240602221058-a81b748b6f51 // TODO replace with release when ready
rach-id (Member, Author):

will be replaced once we release nmt

}

// ProveShares generates a share proof for a share range.
// Note: queries the whole EDS to generate the proof.
rach-id (Member, Author):

should I create an issue for this?

nodebuilder/blobstream/service.go (outdated; resolved)
var subtreeRoots [][]byte
var dataCursor int
for _, proof := range nmtProofs {
// TODO: do we want to directly use the default subtree root threshold, or allow specifying which version to use?
rach-id (Member, Author):

[note to reviewers]
Do we want to directly use the default subtree root threshold, or allow specifying which version to use?

Comment on lines 513 to 533
// Note: this method is not part of the API. It will not be served by any endpoint, however,
// it can be called directly programmatically.
func ProveSubtreeRootToCommitment(subtreeRoots [][]byte, subtreeRootIndex uint64) (*ResultSubtreeRootToCommitmentProof, error) {
_, proofs := merkle.ProofsFromByteSlices(subtreeRoots)
return &ResultSubtreeRootToCommitmentProof{
SubtreeRootToCommitmentProof: SubtreeRootToCommitmentProof{
Proof: *proofs[subtreeRootIndex],
},
}, nil
}

// ProveShareToSubtreeRoot generates a share to subtree root inclusion proof
// Note: this method is not part of the API. It will not be served by any endpoint, however,
// it can be called directly programmatically.
func ProveShareToSubtreeRoot(shares [][]byte, shareIndex uint64) (*ResultShareToSubtreeRootProof, error) {
_, proofs := merkle.ProofsFromByteSlices(shares)
return &ResultShareToSubtreeRootProof{
ShareToSubtreeRootProof: ShareToSubtreeRootProof{
Proof: *proofs[shareIndex],
},
}, nil
rach-id (Member, Author):

[note to reviewers]
These two methods are not used in the API, but I see them as useful for people using the API. Should we keep them here, move them somewhere special, or create a utils.go or helpers.go and put them there?

})
}
}

rach-id (Member, Author):

Instead of using the generated mock, I decided to implement a custom mock here so that I can test a lot of combinations without having to worry about the mocked p2p network, etc.

If you check the TestProveCommitmentAllCombinations test, you will see that it's kind of fuzzing tons of combinations.

nodebuilder/blobstream/types.go (outdated; resolved)
nodebuilder/blobstream/service.go (outdated; resolved)
nodebuilder/blobstream/service.go (resolved)
@rach-id rach-id self-assigned this Jun 5, 2024
@rach-id rach-id marked this pull request as ready for review June 5, 2024 11:27
codecov-commenter commented Jun 5, 2024

Codecov Report

Attention: Patch coverage is 64.69388% with 173 lines in your changes missing coverage. Please review.

Project coverage is 45.30%. Comparing base (2469e7a) to head (f8a3066).
Report is 106 commits behind head on main.

Files Patch % Lines
nodebuilder/blobstream/service.go 77.87% 42 Missing and 37 partials ⚠️
nodebuilder/blobstream/types.go 37.87% 32 Missing and 9 partials ⚠️
nodebuilder/blobstream/mocks/api.go 9.52% 38 Missing ⚠️
nodebuilder/blobstream/blobstream.go 0.00% 8 Missing ⚠️
nodebuilder/blobstream/module.go 0.00% 6 Missing ⚠️
nodebuilder/rpc/constructors.go 0.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3470      +/-   ##
==========================================
+ Coverage   44.83%   45.30%   +0.47%     
==========================================
  Files         265      278      +13     
  Lines       14620    15851    +1231     
==========================================
+ Hits         6555     7182     +627     
- Misses       7313     7828     +515     
- Partials      752      841      +89     


@renaynay renaynay added v0.15.0 Intended for v0.15.0 release and removed needs:triage labels Jun 13, 2024
@insiderbyte left a comment:

The nodebuilder folder is used only for module creation. Maybe you should add a blobstream package and move all the logic there?

nodebuilder/blobstream/blobstream.go (outdated; resolved)
nodebuilder/blobstream/blobstream.go (outdated; resolved)
nodebuilder/blobstream/types.go (resolved)
@vgonkivs vgonkivs added kind:feat Attached to feature PRs area:blobstream Related to the Blobstream feature and removed external Issues created by non node team members labels Jun 17, 2024

ramin commented Jun 18, 2024

@renaynay raised a good point I agree with: choosing a more generic name for this module and API, something other than "blobstream", might be better. Although it initially exists to support Blobstream, it can and should serve a generic purpose.

Any suggestions for better names @renaynay @adlerjohn @liamsi?

@vgonkivs also mentioned potentially moving it into the blob module itself, I believe.

liamsi commented Jun 18, 2024

Any suggestions for better names @renaynay @adlerjohn @liamsi?

Yeah, I also raised this somewhere else. So ideally:

  • anything shares related would be folded into the shares "module"
  • hence, ProveShares and ProveCommitment could be moved into the shares module
  • even better: shares.GetProof would actually accommodate ProveShares
  • GetDataCommitment and DataRootInclusionProof are really very blobstream specific but they could live in a "blocks" module as they operate over a range of blocks/block heights. The closest we have for this is the header module.

@distractedm1nd (Member) left a comment:

Looks great so far, I really appreciate the extensive debug logging and explicit error handling everywhere.

My comments are all about the representation of the responses, and an alternative, more Go-idiomatic/node-native way to structure the types in the package.

Please also add example types in api/docgen/examples.go

nodebuilder/blobstream/blobstream.go (outdated; resolved)
api/rpc_test.go (outdated; resolved)
api/rpc/client/client.go (outdated; resolved)
nodebuilder/blobstream/types.go (outdated; resolved)
Comment on lines +20 to +42
// ResultDataCommitment is the API response containing a data
// commitment, aka data root tuple root.
type ResultDataCommitment struct {
DataCommitment bytes.HexBytes `json:"data_commitment"`
}

// ResultDataRootInclusionProof is the API response containing the binary merkle
// inclusion proof of a height to a data commitment.
type ResultDataRootInclusionProof struct {
Proof merkle.Proof `json:"proof"`
}

// ResultShareProof is the API response that contains a ShareProof.
// A share proof is a proof of a set of shares to the data root.
type ResultShareProof struct {
ShareProof types.ShareProof `json:"share_proof"`
}

// ResultCommitmentProof is an API response that contains a CommitmentProof.
// A commitment proof is a proof of a blob share commitment to the data root.
type ResultCommitmentProof struct {
CommitmentProof CommitmentProof `json:"commitment_proof"`
}
Member:

I don't think we should wrap these types in a Result struct if there are no methods defined on them.

What does make sense is to define type aliases, so the API return types are clear, and docs can be kept. Future methods on these types will also be cleaner if the aliases are already defined.

So instead of ResultDataCommitment, it would be something like

// DataCommitment is the API response containing a data
// commitment, aka data root tuple root.
type DataCommitment bytes.HexBytes

And the response will be read out directly from the result instead of by accessing an inner JSON field.

rach-id (Member, Author):

I am wrapping the types so that if we decide in the future to add new fields to the struct, we can do it in a backwards-compatible way. Also, this is the way it is done in core: https://github.com/celestiaorg/celestia-core/blob/ff2bff4a29aec6851499e34b7d84349d8e774dd9/rpc/core/types/responses.go#L52-L55

Do you still prefer if I unwrap them?

Member:

Hlib and I discussed this shortly, and we decided we would still prefer them to be unwrapped: we think backwards compatibility is likely overkill here. Or do you have a good example of something that may be added?

My thinking here is that these are actually pretty much low-level types that are processed directly in the smart contracts (as far as I imagine? I haven't read them) and that they won't be doing any parsing out of extra additional data.

rach-id (Member, Author):

Agreed, but I guess the same reasoning led to the blob proof being []*nmt.Proof, and it ended up needing a breaking change to make it contain a proof to the data root.

From my experience, it's always good to separate the internal types from the API ones. But I am fine with either. Let me know if you prefer them unwrapped and I'll do it.

Collaborator:

I think consensus is to unwrap, please @rach-id. @distractedm1nd and @Wondertan, do you agree?

rach-id (Member, Author):

Cool, I'll unwrap as soon as I get confirmation.

nodebuilder/blobstream/service.go (outdated; resolved)
// DataRootTuple contains the data that will be used to create the QGB commitments.
// The commitments will be signed by orchestrators and submitted to an EVM chain via a relayer.
// For more information: https://github.com/celestiaorg/quantum-gravity-bridge/blob/master/src/DataRootTuple.sol
type DataRootTuple struct {
@distractedm1nd (Member) commented Jun 18, 2024:

Related to my other comments, wdyt about returning DataRootTuple directly, and then implementing:

  • MarshalBinary
  • UnmarshalBinary
  • MarshalJSON (just outputs the base64 of the MarshalBinary output, for example)
  • UnmarshalJSON

Then you won't need EncodeDataRootTuple, I believe? And marshalling for the RPC output is handled automatically by fulfilling the interface.

Member:

Ah I see, I misinterpreted the original godoc of ResultDataCommitment. So GetDataCommitment should only return the datahash part.

Fulfilling the encoding/binary interface is still probably a more idiomatic way to do this, though.

rach-id (Member, Author):

What does this mean? What needs to be changed?

Member:

This means implementing MarshalBinary and UnmarshalBinary on DataRootTuple to fulfill the encoding/binary interface, replacing EncodeDataRootTuple. I can gladly help out here or send a diff if it helps!

rach-id (Member, Author):

But we only need to encode it. Do you mean fulfilling the interface and making the unmarshalling a no-op?

nodebuilder/blobstream/service.go (outdated; resolved)
nodebuilder/blobstream/service.go (resolved)

rach-id commented Jun 21, 2024

The nodebuilder folder is used only for module creation. Maybe you should add a blobstream package and move all the logic there?

@insiderbyte where do you mean I should put the blobstream package? In the root?


ramin commented Jul 1, 2024

@rach-id so there is a general desire to follow @liamsi's comment about decomposing this, so that instead of "blobstream" being a whole module, the new API functionality lives in each of the relevant existing modules, with some functionality moved to another package (like the "blocks" one he also suggested). There was a promise that everyone would chime in with reviews and suggestions of where to place things; I'll chase that up, but we can also work on it together if you want to suggest a layout...


rach-id commented Jul 1, 2024

@ramin Got you 👍 I was under the impression that this PR should be merged first, and then we'd do the refactor. However, if we want to do it here, I'm fine with that.

The proposed structure would be:

  • header module:
    • GetDataCommitment
    • GetDataRootInclusionProof which should maybe be renamed to GetDataRootTupleInclusionProof because we're proving the data root tuple, i.e. (height, data_root) and not just the data root
  • blob module:
  • share module:
    • ProveShares

If everyone agrees, I can proceed with the refactor.


liamsi commented Jul 1, 2024

I like header better than block.


distractedm1nd commented Jul 2, 2024

Agreed with the proposed structure.

@renaynay renaynay removed the v0.15.0 Intended for v0.15.0 release label Jul 3, 2024
Labels
area:blobstream Related to the Blobstream feature kind:feat Attached to feature PRs
Development

Successfully merging this pull request may close these issues.

Blobstream API should be moved to node