Perf: LogUp* optimization for range checks and precomputed tables#1703
Conversation
Pull request overview
This PR introduces a LogUp* (“indexed lookup”) variant of the log-derivative argument to reduce commitment size/constraints for lookups where query values are derivable from a constant table and an index (notably range checks and precomputed byte ops).
Changes:
- Adds countIndexedHint, BuildIndexedConstant, and BuildIndexedPrecomputed to logderivarg.
- Updates the rangecheck commitment flow to use BuildIndexedConstant for identity tables.
- Updates precomputed binary-op lookups to track indices and use BuildIndexedPrecomputed, plus adds/extends tests and constraint-count comparisons.
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| std/rangecheck/rangecheck_commit.go | Switches rangecheck lookup to BuildIndexedConstant (LogUp*) |
| std/internal/logderivarg/logderivarg.go | Adds indexed-hint + new LogUp* builders |
| std/internal/logderivarg/logderivarg_test.go | Adds unit tests + constraint comparison for indexed builders |
| std/internal/logderivprecomp/logderivprecomp.go | Stores packed indices and routes to BuildIndexedPrecomputed |
| std/internal/logderivprecomp/logderivprecomp_test.go | Adds constraint comparison test for precomputed XOR lookups |
I don't think the optimisation is valid:
- either we should need to know the indices ahead of time
- or the queries should be committed to previously
But neither of these conditions holds imo. The values we query are provided by the hint, so they are not static (the approach would work in a fixed-permutation argument though). In the case of range checks we usually want to range check values coming from a hint, and in the case of logderivprecomp the values come from the precomputation hint.
And the values we're querying are not committed to previously either: for example, in the case of range checks they also come from a hint when we split larger limbs into smaller ones (a 64-bit value into four 16-bit limbs). We still add the constraint that the decomposition is correct (v = v0 + v1 << 16 + v2 << 32 + v3 << 48), but that only reduces the search space for finding a valid solution.
See the longer POC in DM. I'm not sure how to fix it here though; we would still have to commit to the query values. The only use case I see is implementing a fixed-permutation argument. That possibly has applications, e.g. scalar multiplication between a constant scalar and a variable point (which we could implement as a mux-lookup). Perhaps also the final-exp computation in pairing, where we have a fixed exponent.
Indeed the paper assumes indices are committed before the challenge, which does not seem to be the case in gnark's BSB22 workflow. A potential fix would be to make the BSB22 challenge derivation incorporate L, R, O (or equivalently, derive it from the Fiat-Shamir transcript after L, R, O are committed). But that would require changing gnark's proving protocol flow, not just the lookup construction.
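A minimal sketch of the suggested ordering fix (function and parameter names are hypothetical, and SHA-256 stands in for gnark's actual Fiat-Shamir transcript): the lookup challenge is squeezed only after the wire commitments are absorbed, so any change to the committed queries changes the challenge.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// deriveChallenge folds the wire commitments commL, commR, commO into
// the transcript *before* squeezing the lookup challenge, so the
// queried values are bound prior to challenge derivation. This is an
// illustration of the ordering, not gnark's API.
func deriveChallenge(commL, commR, commO []byte) [32]byte {
	h := sha256.New()
	h.Write(commL)
	h.Write(commR)
	h.Write(commO)
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	c1 := deriveChallenge([]byte("L1"), []byte("R"), []byte("O"))
	c2 := deriveChallenge([]byte("L2"), []byte("R"), []byte("O"))
	// Changing any committed wire changes the challenge, so a prover
	// cannot pick query values after seeing the challenge.
	fmt.Println(c1 != c2) // prints true
}
```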
Summary
This PR implements the LogUp* optimization from eprint 2025/946 for indexed lookups in gnark. The key insight is that for lookups where
query[i] = table[index[i]], we don't need to commit to the query values since they equal table values at known indices.
This optimization applies to:
- range checks, where table[i] = i (identity table) and limb values are their own indices;
- precomputed byte operations, where table[x | y<<8] = f(x, y).
Constraint Improvements
Direct LogUp* Savings
The constraint savings equal exactly the number of queries (limbs):
EVM Precompiles (SCS/PLONK)
PLONK recursion (SCS/PLONK)
N.B.: the native PLONK recursion creates an emulated field for scalar operations, hence the ~3% saving.
Hash functions (SCS/PLONK)
Benchmarks (64-byte input):
N.B.: The savings are more modest for hash functions (~2-4%) compared to EVM precompiles (~12-14%) because:
Technical Details
LogUp Equation Reformulation
Original LogUp:
LogUp* for indexed lookups:
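The equations themselves appear to have been lost in extraction; the following is a sketch of the two identities, reconstructed from the standard log-derivative lookup formulation (notation assumed, not copied from the PR):

```latex
% Standard LogUp: queries q_i are committed and matched against
% table entries t_j with multiplicities m_j.
\sum_{i=1}^{N} \frac{1}{X - q_i} \;=\; \sum_{j=1}^{M} \frac{m_j}{X - t_j}

% LogUp* for indexed lookups: since q_i = t_{\mathrm{idx}_i}, the
% left-hand side is determined by the indices and the constant table,
% so only the multiplicities m_j need to be committed.
\sum_{i=1}^{N} \frac{1}{X - t_{\mathrm{idx}_i}} \;=\; \sum_{j=1}^{M} \frac{m_j}{X - t_j}
```

Under this reading, the saving of one committed value per query follows directly: the left-hand sum no longer references a committed query column.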
New API
Files Changed
- std/internal/logderivarg/logderivarg.go: adds countIndexedHint, BuildIndexedConstant, BuildIndexedPrecomputed
- std/internal/logderivarg/logderivarg_test.go: adds unit tests for the indexed builders
- std/internal/logderivprecomp/logderivprecomp.go: routes to BuildIndexedPrecomputed
- std/internal/logderivprecomp/logderivprecomp_test.go: adds constraint comparison test
- std/rangecheck/rangecheck_commit.go: switches to BuildIndexedConstant
Testing
- All tests pass (go test ./std/...)
- Unit tests cover BuildIndexedConstant and BuildIndexedPrecomputed
Checklist
- golangci-lint does not output errors locally
Note
Medium Risk
Touches core proving constraint logic (log-derivative lookups, rangecheck commitments, and precomputed table verification), so mistakes could silently weaken soundness or break circuits, though changes are scoped and covered by new tests.
Overview
Implements the LogUp* optimization for indexed lookups by adding
countIndexedHint plus new builders BuildIndexedConstant (identity tables) and BuildIndexedPrecomputed (constant precomputed tables) that commit only multiplicities, not per-query values.
Updates rangecheck commitments and logderivprecomp (byte XOR/AND/OR-style tables) to use the indexed builders by tracking query indices alongside packed query values, and adds targeted tests/bench-style constraint comparisons plus an updated internal/stats/latest_stats.csv reflecting reduced PLONK/SCS constraint counts.
Written by Cursor Bugbot for commit 4e0d1bf. This will update automatically on new commits.