Estimate compression ratio at rollupFee for Feynman #1197
base: l1data-to-rollupfee
Conversation
9d4f5bc to 4c1dc4d (Compare)
rollup/fees/rollup_fee.go (Outdated)
// Default compression ratio is 1.0 (no compression)
compressionRatioInt := big.NewInt(rcfg.Precision.Int64())

if len(data) != 0 {
Is this zero check needed? This should never happen, and this error case is already covered inside CompressScrollBatchBytes.
True, it is already handled in CompressScrollBatchBytes. Removed.
rollup/fees/rollup_fee.go (Outdated)
compressionRatioInt := big.NewInt(rcfg.Precision.Int64())

if len(data) != 0 {
	compressedBytes, err := zstd.CompressScrollBatchBytes(data)
FYI, this will add a 128-byte overhead (compressBufferOverhead).
Should we just subtract these 128 bytes from the result and continue using our own zstd implementation (probably via a wrapper function)?
That could work, though it leaks implementation details of da-codec.
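For illustration, a minimal sketch of that wrapper idea, assuming the 128-byte compressBufferOverhead value and the da-codec import path discussed above; the helper name is hypothetical:

```go
package fees

import (
	"github.com/scroll-tech/da-codec/encoding/zstd"
)

// compressBufferOverhead mirrors the fixed overhead added by
// zstd.CompressScrollBatchBytes (assumed to be 128 bytes per the discussion).
const compressBufferOverhead = 128

// compressedSizeWithoutOverhead is a hypothetical wrapper: it compresses data
// with da-codec's zstd and subtracts the fixed buffer overhead from the
// resulting length, clamping at zero.
func compressedSizeWithoutOverhead(data []byte) (int, error) {
	compressed, err := zstd.CompressScrollBatchBytes(data)
	if err != nil {
		return 0, err
	}
	size := len(compressed) - compressBufferOverhead
	if size < 0 {
		size = 0
	}
	return size, nil
}
```

The trade-off is exactly the one noted above: the 128-byte constant bakes a da-codec implementation detail into this codebase.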
rollup/fees/rollup_fee.go (Outdated)
compressionRatioInt := big.NewInt(rcfg.Precision.Int64())

if len(data) != 0 {
	compressedBytes, err := zstd.CompressScrollBatchBytes(data)
I think we can just use any Go zstd lib; we don't need to use our da-codec. What do you think? @colinlyguo
What are the motivations for not using our own zstd lib? The 128-byte overhead?
Please sync with Colin; this is something we want to do in other codebases too. Circuits can now support the whole zstd spec, so we don't need to hardcode a specific implementation (true with some caveats).
Yeah, we can use any Go zstd lib. A reminder: the spec can also be used in Reth, for constructing historical blocks from DA.
https://github.com/DataDog/zstd and https://github.com/valyala/gozstd look good (both are listed on https://facebook.github.io/zstd/); they wrap the official zstd C implementation.
A historical symbol conflict issue: https://scrollco.slack.com/archives/C02PKAQ2430/p1720694878378479
A pure Go implementation, such as https://github.com/klauspost/compress/tree/master/zstd, won't have any issues with C global symbol conflicts when loading multiple zstd libraries. But I'm afraid Rust cannot have an identical implementation that reproduces the same compression ratio.
Also, please double-check whether the same zstd version running on different operating systems produces identical results.
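As a hedged illustration of the pure-Go option (not part of this PR), estimating the compressed size with the klauspost encoder might look like this; the helper name is made up:

```go
package main

import (
	"fmt"

	"github.com/klauspost/compress/zstd"
)

// compressedSize returns the zstd-compressed length of data using the
// pure-Go klauspost encoder.
func compressedSize(data []byte) (int, error) {
	// A nil writer is fine here because we only use EncodeAll.
	enc, err := zstd.NewWriter(nil)
	if err != nil {
		return 0, err
	}
	defer enc.Close()
	return len(enc.EncodeAll(data, nil)), nil
}

func main() {
	data := []byte("example rollup batch payload, example rollup batch payload")
	n, err := compressedSize(data)
	if err != nil {
		panic(err)
	}
	fmt.Printf("raw=%d compressed=%d\n", len(data), n)
}
```

Whether different encoder versions, or builds on different operating systems, produce byte-identical output is exactly the open question raised above and would need to be verified.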
Got it and thanks!
rollup/fees/rollup_fee.go (Outdated)
compressionRatioInt.Mul(compressedSize, rcfg.Precision)
compressionRatioInt.Div(compressionRatioInt, txSize)
While I believe it's always true that compressedSize <= size, I'm not 100% convinced this is the case. I suggest you add a check to cover this.
Good catch, also not 100% sure. Done.
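For reference, a minimal sketch of the guarded ratio computation under discussion, assuming the da-codec import path; the helper name and the exact fallback behaviour are assumptions rather than the PR's final code:

```go
package fees

import (
	"math/big"

	"github.com/scroll-tech/da-codec/encoding/zstd"
)

// estimateCompressionRatio is a hypothetical helper: it returns the
// compression ratio of data scaled by precision, clamped to at most 1.0,
// and falls back to 1.0 (no compression) on empty input or compression error.
func estimateCompressionRatio(data []byte, precision *big.Int) *big.Int {
	ratio := new(big.Int).Set(precision) // 1.0 by default

	if len(data) == 0 {
		return ratio
	}
	compressed, err := zstd.CompressScrollBatchBytes(data)
	if err != nil {
		return ratio
	}

	compressedSize := big.NewInt(int64(len(compressed)))
	txSize := big.NewInt(int64(len(data)))
	// Guard for the (unlikely) case where compression expands the data,
	// so the ratio never exceeds 1.0.
	if compressedSize.Cmp(txSize) > 0 {
		return ratio
	}
	ratio.Mul(compressedSize, precision)
	ratio.Div(ratio, txSize)
	return ratio
}
```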
rollup/fees/rollup_fee.go (Outdated)
// compression_ratio(tx) = 1 (placeholder, scaled to match scalars precision)
compressionRatio := big.NewInt(rcfg.Precision.Int64())
// Default compression ratio is 1.0 (no compression)
compressionRatioInt := big.NewInt(rcfg.Precision.Int64())
Why rename to compressionRatioInt?
Fair, removed.
Waiting for a decision on which zstd library to use.
d0e730b to ce2498e (Compare)
b83f7dc to e3c5bd7 (Compare)
Following the discussion here, I reverted the use of a standard Go zstd library in favour of the da-codec version.
Please revert all the whitespace changes. Besides being a pain to review, they also increase the diff.
It's good common practice to check files and their content before committing, to avoid whitespace changes like these (e.g. from a linter) and to make sure what you're committing is actually sensible.
compressionRatio := big.NewInt(rcfg.Precision.Int64())

compressedBytes, err := zstd.CompressScrollBatchBytes(data)
if err != nil {
	log.Error("Batch compression failed, using 1.0 compression ratio", "err", err)
Adjust the log message; it would also be good to add the tx hash here to make it more meaningful.
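A possible shape for that log line, assuming a *types.Transaction named tx is in scope at the call site and the scroll-tech/go-ethereum module path; the helper itself is hypothetical:

```go
package fees

import (
	"github.com/scroll-tech/go-ethereum/core/types"
	"github.com/scroll-tech/go-ethereum/log"
)

// logCompressionFallback illustrates the suggested message: name the actual
// failure and include the transaction hash so the fallback is traceable.
func logCompressionFallback(tx *types.Transaction, err error) {
	log.Error("Transaction data compression failed, falling back to compression ratio of 1.0",
		"txHash", tx.Hash().Hex(), "err", err)
}
```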
compressionRatio := big.NewInt(rcfg.Precision.Int64())

compressedBytes, err := zstd.CompressScrollBatchBytes(data)
It might still be good to use the zstd version that comes with da-codec. Maybe we need to introduce a new function there to get rid of the overhead.
1. Purpose or design rationale of this PR
See #1196 for more.
2. PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
3. Deployment tag versioning
Has the version in params/version.go been updated?
4. Breaking change label
Does this PR have the breaking-change label?