refactor: set rollup-da to near original repo. go mod tidy #165
base: main
Conversation
Noticed the blobs are not being properly decoded in the integration test. This is most likely the reason why: Nuffle-Labs/data-availability@f771f36. Now the blobs are only data, while the blobs we were using prior to that are
As mentioned above.
3a91f12 to e3b1eff (compare)
feat: commitment computed on indexer side. Should probably be done in contract itself
The issue is fixed by having the commitment computed on the indexer side. I assume the near-da contract will be updated in the future and maybe will include some commitment. We'll see.
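For context, a minimal sketch of what computing the commitment on the indexer side could look like. The Blob/CommittedBlob shapes and the SHA-256-based commitment below are assumptions made for illustration; the real types and commitment scheme live in the repo and may differ (and, as noted, may eventually move into the contract).

// Hedged sketch: the types and the SHA-256 commitment are assumptions,
// not the actual near-da scheme.
use sha2::{Digest, Sha256};

/// Raw blob as submitted to the DA layer (illustrative type).
pub struct Blob {
    pub data: Vec<u8>,
}

/// Blob plus the commitment the indexer derives for it (illustrative type).
pub struct CommittedBlob {
    pub data: Vec<u8>,
    pub commitment: Vec<u8>,
}

impl From<Blob> for CommittedBlob {
    fn from(blob: Blob) -> Self {
        // Assumption: the commitment is simply a SHA-256 hash of the data.
        let commitment = Sha256::digest(&blob.data).to_vec();
        CommittedBlob {
            data: blob.data,
            commitment,
        }
    }
}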
pub(crate) fn try_transform_payload(payload: &[u8]) -> Result<Vec<u8>> {
    let submit_request: SubmitRequest = borsh::de::from_slice(payload)?;
    let commited_blob: CommitedBlob = Blob::new(submit_request.data).into();
commited -> committed
Overall everything seems great. What I don't like too much about this is that it's breaking, and integrating it would require full coordination between nodes and the relayer. Even though we are under development, it would be a lot better if we just try decoding each blob as a legacy blob and, if that doesn't work, decode it as the current blob format, also keeping compatibility in the MQ data.
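A minimal sketch of that fallback, assuming borsh encoding for both layouts; the field sets of the legacy and current structs below are made up for illustration and differ from the real ones.

// Hedged sketch of the suggested compatibility layer: try the legacy blob
// layout first and fall back to the current, data-only layout.
use borsh::BorshDeserialize;

#[derive(BorshDeserialize)]
struct LegacyBlob {
    // Hypothetical extra field the old format carried alongside the data.
    namespace: [u8; 32],
    data: Vec<u8>,
}

#[derive(BorshDeserialize)]
struct SubmitRequest {
    // Current format: the blob is only the data.
    data: Vec<u8>,
}

fn decode_blob_data(payload: &[u8]) -> anyhow::Result<Vec<u8>> {
    // Try the legacy layout first so already-published blobs keep decoding.
    // A real implementation may need a version tag or stricter validation,
    // since an unlucky payload could parse under both layouts.
    if let Ok(legacy) = LegacyBlob::try_from_slice(payload) {
        return Ok(legacy.data);
    }
    // Fall back to the new, data-only format.
    let current = SubmitRequest::try_from_slice(payload)?;
    Ok(current.data)
}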
Sadly, with a running environment we need to worry about compatibility. It's not easy to coordinate updates, so breaking updates should either be scheduled or done in a way that's smooth: e.g. once every operator is running the backwards-compatible version, we can introduce the new version and nothing breaks. For this one I do think we can do it without much of an issue, so I'd prefer the latter approach.
It's not only about this PR but also about other breaking changes like this. If you choose migrations even when they can be avoided, you end up with spaghetti code from one version to the next. Right now there's no reason for that. This PR can be postponed until a scheduled update and then merged.
I agree, I don't think this one is urgent. Though my understanding is that there can be a case-by-case judgment on whether something needs to be breaking and on how big the impact of a compatibility layer would be. If it results in a big rework, it's not worth it; otherwise, being a small adjustment, it's better to advance with the feature and deprecate, then drop the backward compatibility later.