Conversation


@twoeths twoeths commented Jan 15, 2026

Motivation

  • after we migrate to the native state transition, we can no longer query EpochCache methods

Description

  • use our ShufflingCache to serve these queries instead; the affected methods are:
    • getIndexedAttestation()
    • getAttestingIndices()
    • getBeaconCommittee()
    • getBeaconCommittees()

Closes #8655

blocked by #8721
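The four methods listed above all derive from the same underlying data: the committee shuffling of an epoch. A minimal sketch of how such lookups can be served from a per-epoch cache follows; the names, types, and signatures here are illustrative assumptions, not Lodestar's actual API:

```typescript
type ValidatorIndex = number;
type Epoch = number;
type CommitteeIndex = number;

interface ShufflingLike {
  epoch: Epoch;
  // committees[slotInEpoch][committeeIndex] -> validator indices
  committees: ValidatorIndex[][][];
}

// Hypothetical sketch, not the real ShufflingCache class.
class ShufflingCacheSketch {
  private byEpoch = new Map<Epoch, ShufflingLike>();

  set(shuffling: ShufflingLike): void {
    this.byEpoch.set(shuffling.epoch, shuffling);
  }

  getBeaconCommittee(epoch: Epoch, slotInEpoch: number, index: CommitteeIndex): ValidatorIndex[] {
    const shuffling = this.byEpoch.get(epoch);
    if (!shuffling) throw new Error(`no shuffling for epoch ${epoch}`);
    const committees = shuffling.committees[slotInEpoch];
    if (index >= committees.length) throw new Error(`committee index ${index} out of range`);
    return committees[index];
  }

  // getAttestingIndices: filter the committee by the attestation's aggregation bits
  getAttestingIndices(
    epoch: Epoch,
    slotInEpoch: number,
    index: CommitteeIndex,
    bits: boolean[]
  ): ValidatorIndex[] {
    return this.getBeaconCommittee(epoch, slotInEpoch, index).filter((_, i) => bits[i]);
  }
}
```

getIndexedAttestation() and getBeaconCommittees() would be thin wrappers over the same committee lookup, which is why routing all four through the cache is enough to drop the EpochCache dependency.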

@twoeths twoeths changed the title Te/query shuffling cache refactor: query ShufflingCache for shuffling data Jan 15, 2026
@gemini-code-assist

Summary of Changes

Hello @twoeths, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the handling of beacon committee and attestation data queries. It centralizes the underlying logic into a new shufflingUtils module and routes all such operations through the ShufflingCache. This change is crucial for maintaining system integrity and performance after migrating to a native state-transition model, ensuring that shuffling-dependent data is accessed consistently and efficiently.

Highlights

  • Refactored Shuffling Query Logic: The core logic for querying getIndexedAttestation, getAttestingIndices, getBeaconCommittee, and getBeaconCommittees has been extracted from EpochCache into a new utility file, shufflingUtils.ts.
  • Enhanced ShufflingCache: The ShufflingCache now exposes these refactored methods, establishing itself as the primary interface for all shuffling-related data queries, addressing the limitation of querying EpochCache directly after native state-transition migration.
  • Updated Call Sites: All internal components that previously relied on EpochCache for shuffling data, such as block import, verification, and attestation pool operations, have been updated to utilize the new ShufflingCache methods.
  • Error Handling Migration: Error handling for committee index out-of-range scenarios has been migrated from EpochCacheError to a new ShufflingError type, aligning with the new architecture.
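The error migration in the last highlight can be pictured as below; the enum value, class shape, and helper name are assumptions for illustration, not the exact types introduced by this PR:

```typescript
// Hypothetical sketch of a ShufflingError replacing the old EpochCacheError
// for committee-index-out-of-range scenarios.
enum ShufflingErrorCode {
  COMMITTEE_INDEX_OUT_OF_RANGE = "SHUFFLING_ERROR_COMMITTEE_INDEX_OUT_OF_RANGE",
}

class ShufflingError extends Error {
  constructor(
    readonly code: ShufflingErrorCode,
    message: string
  ) {
    super(message);
  }
}

// Illustrative guard a cache query might run before indexing into committees.
function checkCommitteeIndex(index: number, committeesPerSlot: number): void {
  if (index >= committeesPerSlot) {
    throw new ShufflingError(
      ShufflingErrorCode.COMMITTEE_INDEX_OUT_OF_RANGE,
      `committee index ${index} >= committeesPerSlot ${committeesPerSlot}`
    );
  }
}
```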


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request is a significant refactoring to query shuffling data from ShufflingCache instead of EpochCache. The changes are spread across multiple packages and appear to be well-implemented, aligning with the goal of migrating away from EpochCache methods. I've included a couple of suggestions to enhance code quality: one to address code duplication by introducing a helper method, and another to correct a minor typo in an error code for better consistency. Overall, the changes look good.

@github-actions

github-actions bot commented Jan 15, 2026

Performance Report

✔️ no performance regression detected

Full benchmark results
Benchmark suite Current: 067b889 Previous: 3932675 Ratio
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 1.3670 ms/op 1.1555 ms/op 1.18
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 44.853 us/op 35.208 us/op 1.27
BLS verify - blst 956.99 us/op 1.0021 ms/op 0.95
BLS verifyMultipleSignatures 3 - blst 1.3986 ms/op 1.2403 ms/op 1.13
BLS verifyMultipleSignatures 8 - blst 2.5737 ms/op 2.4976 ms/op 1.03
BLS verifyMultipleSignatures 32 - blst 6.3872 ms/op 6.1104 ms/op 1.05
BLS verifyMultipleSignatures 64 - blst 12.174 ms/op 11.224 ms/op 1.08
BLS verifyMultipleSignatures 128 - blst 20.604 ms/op 17.336 ms/op 1.19
BLS deserializing 10000 signatures 761.20 ms/op 682.16 ms/op 1.12
BLS deserializing 100000 signatures 7.7942 s/op 6.8837 s/op 1.13
BLS verifyMultipleSignatures - same message - 3 - blst 924.55 us/op 1.1910 ms/op 0.78
BLS verifyMultipleSignatures - same message - 8 - blst 1.0768 ms/op 1.6046 ms/op 0.67
BLS verifyMultipleSignatures - same message - 32 - blst 1.7820 ms/op 1.6240 ms/op 1.10
BLS verifyMultipleSignatures - same message - 64 - blst 2.6354 ms/op 2.5621 ms/op 1.03
BLS verifyMultipleSignatures - same message - 128 - blst 4.4171 ms/op 4.1723 ms/op 1.06
BLS aggregatePubkeys 32 - blst 22.061 us/op 21.863 us/op 1.01
BLS aggregatePubkeys 128 - blst 78.257 us/op 67.433 us/op 1.16
getSlashingsAndExits - default max 73.600 us/op 72.766 us/op 1.01
getSlashingsAndExits - 2k 344.75 us/op 312.86 us/op 1.10
isKnown best case - 1 super set check 215.00 ns/op 198.00 ns/op 1.09
isKnown normal case - 2 super set checks 211.00 ns/op 193.00 ns/op 1.09
isKnown worse case - 16 super set checks 215.00 ns/op 185.00 ns/op 1.16
InMemoryCheckpointStateCache - add get delete 2.9470 us/op 2.3780 us/op 1.24
validate api signedAggregateAndProof - struct 2.4346 ms/op 1.5397 ms/op 1.58
validate gossip signedAggregateAndProof - struct 1.7332 ms/op 1.7114 ms/op 1.01
batch validate gossip attestation - vc 640000 - chunk 32 118.29 us/op 115.43 us/op 1.02
batch validate gossip attestation - vc 640000 - chunk 64 114.66 us/op 101.61 us/op 1.13
batch validate gossip attestation - vc 640000 - chunk 128 104.42 us/op 115.85 us/op 0.90
batch validate gossip attestation - vc 640000 - chunk 256 101.26 us/op 90.741 us/op 1.12
bytes32 toHexString 410.00 ns/op 363.00 ns/op 1.13
bytes32 Buffer.toString(hex) 280.00 ns/op 236.00 ns/op 1.19
bytes32 Buffer.toString(hex) from Uint8Array 366.00 ns/op 324.00 ns/op 1.13
bytes32 Buffer.toString(hex) + 0x 286.00 ns/op 231.00 ns/op 1.24
Object access 1 prop 0.13400 ns/op 0.14400 ns/op 0.93
Map access 1 prop 0.13900 ns/op 0.11500 ns/op 1.21
Object get x1000 5.8190 ns/op 5.2060 ns/op 1.12
Map get x1000 0.46700 ns/op 0.40100 ns/op 1.16
Object set x1000 32.391 ns/op 28.694 ns/op 1.13
Map set x1000 22.885 ns/op 19.561 ns/op 1.17
Return object 10000 times 0.25080 ns/op 0.22750 ns/op 1.10
Throw Error 10000 times 4.5105 us/op 4.0092 us/op 1.13
toHex 160.32 ns/op 154.33 ns/op 1.04
Buffer.from 143.09 ns/op 122.36 ns/op 1.17
shared Buffer 106.25 ns/op 77.600 ns/op 1.37
fastMsgIdFn sha256 / 200 bytes 2.1550 us/op 1.8090 us/op 1.19
fastMsgIdFn h32 xxhash / 200 bytes 286.00 ns/op 210.00 ns/op 1.36
fastMsgIdFn h64 xxhash / 200 bytes 334.00 ns/op 258.00 ns/op 1.29
fastMsgIdFn sha256 / 1000 bytes 8.2440 us/op 5.8960 us/op 1.40
fastMsgIdFn h32 xxhash / 1000 bytes 298.00 ns/op 288.00 ns/op 1.03
fastMsgIdFn h64 xxhash / 1000 bytes 377.00 ns/op 309.00 ns/op 1.22
fastMsgIdFn sha256 / 10000 bytes 55.534 us/op 50.587 us/op 1.10
fastMsgIdFn h32 xxhash / 10000 bytes 1.4360 us/op 1.6040 us/op 0.90
fastMsgIdFn h64 xxhash / 10000 bytes 1.0620 us/op 884.00 ns/op 1.20
100 bytes - compress - snappyjs 1.6614 us/op 1.2301 us/op 1.35
100 bytes - compress - snappy 1.2957 us/op 1.1236 us/op 1.15
100 bytes - compress - snappy-wasm 667.36 ns/op 757.56 ns/op 0.88
100 bytes - compress - snappy-wasm - prealloc 1.7394 us/op 1.2730 us/op 1.37
200 bytes - compress - snappyjs 1.9095 us/op 1.4327 us/op 1.33
200 bytes - compress - snappy 1.3947 us/op 1.3350 us/op 1.04
200 bytes - compress - snappy-wasm 909.41 ns/op 1.3638 us/op 0.67
200 bytes - compress - snappy-wasm - prealloc 1.2036 us/op 1.5149 us/op 0.79
300 bytes - compress - snappyjs 2.1352 us/op 2.2285 us/op 0.96
300 bytes - compress - snappy 1.5487 us/op 2.7233 us/op 0.57
300 bytes - compress - snappy-wasm 1.1142 us/op 772.97 ns/op 1.44
300 bytes - compress - snappy-wasm - prealloc 1.8437 us/op 1.9126 us/op 0.96
400 bytes - compress - snappyjs 2.6849 us/op 1.9955 us/op 1.35
400 bytes - compress - snappy 1.4544 us/op 1.4148 us/op 1.03
400 bytes - compress - snappy-wasm 1.0272 us/op 832.43 ns/op 1.23
400 bytes - compress - snappy-wasm - prealloc 1.6470 us/op 1.0762 us/op 1.53
500 bytes - compress - snappyjs 2.4756 us/op 2.7663 us/op 0.89
500 bytes - compress - snappy 1.6640 us/op 1.3576 us/op 1.23
500 bytes - compress - snappy-wasm 1.0564 us/op 1.1535 us/op 0.92
500 bytes - compress - snappy-wasm - prealloc 1.6617 us/op 1.1679 us/op 1.42
1000 bytes - compress - snappyjs 4.8587 us/op 4.6148 us/op 1.05
1000 bytes - compress - snappy 1.6715 us/op 1.5787 us/op 1.06
1000 bytes - compress - snappy-wasm 1.9016 us/op 1.9577 us/op 0.97
1000 bytes - compress - snappy-wasm - prealloc 2.0467 us/op 1.9873 us/op 1.03
10000 bytes - compress - snappyjs 27.660 us/op 26.897 us/op 1.03
10000 bytes - compress - snappy 23.652 us/op 34.579 us/op 0.68
10000 bytes - compress - snappy-wasm 19.307 us/op 31.705 us/op 0.61
10000 bytes - compress - snappy-wasm - prealloc 25.190 us/op 28.214 us/op 0.89
100 bytes - uncompress - snappyjs 787.71 ns/op 764.63 ns/op 1.03
100 bytes - uncompress - snappy 1.2920 us/op 1.0810 us/op 1.20
100 bytes - uncompress - snappy-wasm 687.37 ns/op 935.16 ns/op 0.74
100 bytes - uncompress - snappy-wasm - prealloc 908.88 ns/op 1.1398 us/op 0.80
200 bytes - uncompress - snappyjs 1.0450 us/op 1.2325 us/op 0.85
200 bytes - uncompress - snappy 1.4227 us/op 1.1595 us/op 1.23
200 bytes - uncompress - snappy-wasm 886.10 ns/op 711.39 ns/op 1.25
200 bytes - uncompress - snappy-wasm - prealloc 1.0472 us/op 1.6565 us/op 0.63
300 bytes - uncompress - snappyjs 1.2135 us/op 1.4453 us/op 0.84
300 bytes - uncompress - snappy 1.6611 us/op 2.6633 us/op 0.62
300 bytes - uncompress - snappy-wasm 1.0014 us/op 1.1614 us/op 0.86
300 bytes - uncompress - snappy-wasm - prealloc 1.1432 us/op 1.5149 us/op 0.75
400 bytes - uncompress - snappyjs 2.5843 us/op 1.5339 us/op 1.68
400 bytes - uncompress - snappy 2.1119 us/op 1.4614 us/op 1.45
400 bytes - uncompress - snappy-wasm 822.07 ns/op 835.90 ns/op 0.98
400 bytes - uncompress - snappy-wasm - prealloc 1.2860 us/op 1.3849 us/op 0.93
500 bytes - uncompress - snappyjs 1.6722 us/op 2.6247 us/op 0.64
500 bytes - uncompress - snappy 1.4440 us/op 1.7607 us/op 0.82
500 bytes - uncompress - snappy-wasm 1.1309 us/op 842.62 ns/op 1.34
500 bytes - uncompress - snappy-wasm - prealloc 1.5173 us/op 1.3347 us/op 1.14
1000 bytes - uncompress - snappyjs 2.8467 us/op 2.6317 us/op 1.08
1000 bytes - uncompress - snappy 1.8686 us/op 1.5517 us/op 1.20
1000 bytes - uncompress - snappy-wasm 1.3882 us/op 1.3143 us/op 1.06
1000 bytes - uncompress - snappy-wasm - prealloc 1.2564 us/op 1.3680 us/op 0.92
10000 bytes - uncompress - snappyjs 20.829 us/op 21.624 us/op 0.96
10000 bytes - uncompress - snappy 28.804 us/op 23.060 us/op 1.25
10000 bytes - uncompress - snappy-wasm 17.507 us/op 31.355 us/op 0.56
10000 bytes - uncompress - snappy-wasm - prealloc 17.832 us/op 28.445 us/op 0.63
send data - 1000 256B messages 20.223 ms/op 16.290 ms/op 1.24
send data - 1000 512B messages 19.435 ms/op 18.578 ms/op 1.05
send data - 1000 1024B messages 24.643 ms/op 24.298 ms/op 1.01
send data - 1000 1200B messages 27.331 ms/op 29.455 ms/op 0.93
send data - 1000 2048B messages 32.530 ms/op 23.418 ms/op 1.39
send data - 1000 4096B messages 56.659 ms/op 58.794 ms/op 0.96
send data - 1000 16384B messages 98.846 ms/op 115.70 ms/op 0.85
send data - 1000 65536B messages 414.01 ms/op 301.27 ms/op 1.37
enrSubnets - fastDeserialize 64 bits 961.00 ns/op 1.1210 us/op 0.86
enrSubnets - ssz BitVector 64 bits 389.00 ns/op 319.00 ns/op 1.22
enrSubnets - fastDeserialize 4 bits 144.00 ns/op 161.00 ns/op 0.89
enrSubnets - ssz BitVector 4 bits 407.00 ns/op 346.00 ns/op 1.18
prioritizePeers score -10:0 att 32-0.1 sync 2-0 267.02 us/op 257.01 us/op 1.04
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 307.88 us/op 283.34 us/op 1.09
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 447.90 us/op 392.94 us/op 1.14
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 788.89 us/op 726.48 us/op 1.09
prioritizePeers score 0:0 att 64-1 sync 4-1 908.90 us/op 864.40 us/op 1.05
array of 16000 items push then shift 1.7649 us/op 1.5769 us/op 1.12
LinkedList of 16000 items push then shift 8.0530 ns/op 7.1950 ns/op 1.12
array of 16000 items push then pop 88.617 ns/op 75.524 ns/op 1.17
LinkedList of 16000 items push then pop 7.8040 ns/op 7.0090 ns/op 1.11
array of 24000 items push then shift 2.7169 us/op 2.3292 us/op 1.17
LinkedList of 24000 items push then shift 8.8410 ns/op 7.3420 ns/op 1.20
array of 24000 items push then pop 119.34 ns/op 106.20 ns/op 1.12
LinkedList of 24000 items push then pop 7.8590 ns/op 7.0390 ns/op 1.12
intersect bitArray bitLen 8 6.3320 ns/op 5.5850 ns/op 1.13
intersect array and set length 8 36.999 ns/op 32.652 ns/op 1.13
intersect bitArray bitLen 128 31.348 ns/op 27.999 ns/op 1.12
intersect array and set length 128 605.80 ns/op 541.27 ns/op 1.12
bitArray.getTrueBitIndexes() bitLen 128 1.1200 us/op 1.1800 us/op 0.95
bitArray.getTrueBitIndexes() bitLen 248 1.9570 us/op 1.8780 us/op 1.04
bitArray.getTrueBitIndexes() bitLen 512 3.9590 us/op 3.6540 us/op 1.08
Full columns - reconstruct all 6 blobs 377.43 us/op 273.13 us/op 1.38
Full columns - reconstruct half of the blobs out of 6 140.02 us/op 140.47 us/op 1.00
Full columns - reconstruct single blob out of 6 33.345 us/op 32.202 us/op 1.04
Half columns - reconstruct all 6 blobs 291.86 ms/op 258.65 ms/op 1.13
Half columns - reconstruct half of the blobs out of 6 142.36 ms/op 131.45 ms/op 1.08
Half columns - reconstruct single blob out of 6 54.268 ms/op 48.701 ms/op 1.11
Full columns - reconstruct all 10 blobs 304.60 us/op 390.78 us/op 0.78
Full columns - reconstruct half of the blobs out of 10 146.37 us/op 157.93 us/op 0.93
Full columns - reconstruct single blob out of 10 32.851 us/op 30.513 us/op 1.08
Half columns - reconstruct all 10 blobs 486.93 ms/op 433.84 ms/op 1.12
Half columns - reconstruct half of the blobs out of 10 242.56 ms/op 222.29 ms/op 1.09
Half columns - reconstruct single blob out of 10 53.054 ms/op 48.189 ms/op 1.10
Full columns - reconstruct all 20 blobs 769.49 us/op 772.14 us/op 1.00
Full columns - reconstruct half of the blobs out of 20 261.29 us/op 315.09 us/op 0.83
Full columns - reconstruct single blob out of 20 32.934 us/op 31.405 us/op 1.05
Half columns - reconstruct all 20 blobs 946.05 ms/op 859.20 ms/op 1.10
Half columns - reconstruct half of the blobs out of 20 482.88 ms/op 433.03 ms/op 1.12
Half columns - reconstruct single blob out of 20 55.135 ms/op 47.880 ms/op 1.15
Set add up to 64 items then delete first 2.3014 us/op 1.9697 us/op 1.17
OrderedSet add up to 64 items then delete first 3.4094 us/op 2.8740 us/op 1.19
Set add up to 64 items then delete last 2.5802 us/op 2.1716 us/op 1.19
OrderedSet add up to 64 items then delete last 3.3258 us/op 3.1096 us/op 1.07
Set add up to 64 items then delete middle 2.3327 us/op 2.1756 us/op 1.07
OrderedSet add up to 64 items then delete middle 4.9615 us/op 4.7061 us/op 1.05
Set add up to 128 items then delete first 4.8586 us/op 4.6307 us/op 1.05
OrderedSet add up to 128 items then delete first 7.1972 us/op 7.0925 us/op 1.01
Set add up to 128 items then delete last 4.7700 us/op 4.4261 us/op 1.08
OrderedSet add up to 128 items then delete last 7.0929 us/op 6.4433 us/op 1.10
Set add up to 128 items then delete middle 4.8163 us/op 4.4421 us/op 1.08
OrderedSet add up to 128 items then delete middle 14.456 us/op 13.010 us/op 1.11
Set add up to 256 items then delete first 9.9171 us/op 9.9543 us/op 1.00
OrderedSet add up to 256 items then delete first 16.002 us/op 14.830 us/op 1.08
Set add up to 256 items then delete last 9.7051 us/op 9.3177 us/op 1.04
OrderedSet add up to 256 items then delete last 14.364 us/op 13.776 us/op 1.04
Set add up to 256 items then delete middle 9.3530 us/op 9.2519 us/op 1.01
OrderedSet add up to 256 items then delete middle 43.623 us/op 40.143 us/op 1.09
pass gossip attestations to forkchoice per slot 2.7417 ms/op 2.4561 ms/op 1.12
forkChoice updateHead vc 100000 bc 64 eq 0 548.21 us/op 484.70 us/op 1.13
forkChoice updateHead vc 600000 bc 64 eq 0 3.3059 ms/op 2.8920 ms/op 1.14
forkChoice updateHead vc 1000000 bc 64 eq 0 5.2540 ms/op 4.8234 ms/op 1.09
forkChoice updateHead vc 600000 bc 320 eq 0 3.3381 ms/op 2.9115 ms/op 1.15
forkChoice updateHead vc 600000 bc 1200 eq 0 3.4023 ms/op 2.9414 ms/op 1.16
forkChoice updateHead vc 600000 bc 7200 eq 0 3.7499 ms/op 3.1794 ms/op 1.18
forkChoice updateHead vc 600000 bc 64 eq 1000 3.9072 ms/op 3.3137 ms/op 1.18
forkChoice updateHead vc 600000 bc 64 eq 10000 4.0584 ms/op 3.4471 ms/op 1.18
forkChoice updateHead vc 600000 bc 64 eq 300000 10.154 ms/op 9.0563 ms/op 1.12
computeDeltas 1400000 validators 0% inactive 16.250 ms/op 13.628 ms/op 1.19
computeDeltas 1400000 validators 10% inactive 15.220 ms/op 14.029 ms/op 1.08
computeDeltas 1400000 validators 20% inactive 14.069 ms/op 12.160 ms/op 1.16
computeDeltas 1400000 validators 50% inactive 11.248 ms/op 9.3157 ms/op 1.21
computeDeltas 2100000 validators 0% inactive 23.133 ms/op 20.502 ms/op 1.13
computeDeltas 2100000 validators 10% inactive 21.928 ms/op 19.197 ms/op 1.14
computeDeltas 2100000 validators 20% inactive 19.832 ms/op 18.044 ms/op 1.10
computeDeltas 2100000 validators 50% inactive 15.908 ms/op 14.137 ms/op 1.13
altair processAttestation - 250000 vs - 7PWei normalcase 2.0512 ms/op 1.8790 ms/op 1.09
altair processAttestation - 250000 vs - 7PWei worstcase 3.0579 ms/op 2.7465 ms/op 1.11
altair processAttestation - setStatus - 1/6 committees join 129.73 us/op 117.42 us/op 1.10
altair processAttestation - setStatus - 1/3 committees join 243.47 us/op 229.70 us/op 1.06
altair processAttestation - setStatus - 1/2 committees join 368.42 us/op 325.01 us/op 1.13
altair processAttestation - setStatus - 2/3 committees join 467.55 us/op 414.30 us/op 1.13
altair processAttestation - setStatus - 4/5 committees join 642.85 us/op 575.87 us/op 1.12
altair processAttestation - setStatus - 100% committees join 742.84 us/op 680.85 us/op 1.09
altair processBlock - 250000 vs - 7PWei normalcase 4.2892 ms/op 3.7499 ms/op 1.14
altair processBlock - 250000 vs - 7PWei normalcase hashState 19.790 ms/op 21.445 ms/op 0.92
altair processBlock - 250000 vs - 7PWei worstcase 25.599 ms/op 26.222 ms/op 0.98
altair processBlock - 250000 vs - 7PWei worstcase hashState 60.619 ms/op 67.071 ms/op 0.90
phase0 processBlock - 250000 vs - 7PWei normalcase 1.8260 ms/op 1.5753 ms/op 1.16
phase0 processBlock - 250000 vs - 7PWei worstcase 23.798 ms/op 22.026 ms/op 1.08
altair processEth1Data - 250000 vs - 7PWei normalcase 441.34 us/op 366.35 us/op 1.20
getExpectedWithdrawals 250000 eb:1,eth1:1,we:0,wn:0,smpl:16 6.0020 us/op 7.8290 us/op 0.77
getExpectedWithdrawals 250000 eb:0.95,eth1:0.1,we:0.05,wn:0,smpl:220 63.093 us/op 56.641 us/op 1.11
getExpectedWithdrawals 250000 eb:0.95,eth1:0.3,we:0.05,wn:0,smpl:43 9.9460 us/op 9.8620 us/op 1.01
getExpectedWithdrawals 250000 eb:0.95,eth1:0.7,we:0.05,wn:0,smpl:19 7.8140 us/op 10.248 us/op 0.76
getExpectedWithdrawals 250000 eb:0.1,eth1:0.1,we:0,wn:0,smpl:1021 242.72 us/op 245.42 us/op 0.99
getExpectedWithdrawals 250000 eb:0.03,eth1:0.03,we:0,wn:0,smpl:11778 1.9279 ms/op 1.7145 ms/op 1.12
getExpectedWithdrawals 250000 eb:0.01,eth1:0.01,we:0,wn:0,smpl:16384 2.4045 ms/op 2.3081 ms/op 1.04
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,smpl:16384 2.4225 ms/op 2.3494 ms/op 1.03
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,nocache,smpl:16384 4.7242 ms/op 3.9429 ms/op 1.20
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,smpl:16384 2.4546 ms/op 2.3584 ms/op 1.04
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,nocache,smpl:16384 5.7880 ms/op 3.9418 ms/op 1.47
Tree 40 250000 create 393.64 ms/op 373.98 ms/op 1.05
Tree 40 250000 get(125000) 128.91 ns/op 125.56 ns/op 1.03
Tree 40 250000 set(125000) 1.3474 us/op 1.2282 us/op 1.10
Tree 40 250000 toArray() 16.074 ms/op 15.417 ms/op 1.04
Tree 40 250000 iterate all - toArray() + loop 13.791 ms/op 15.131 ms/op 0.91
Tree 40 250000 iterate all - get(i) 46.790 ms/op 45.655 ms/op 1.02
Array 250000 create 2.5993 ms/op 2.4962 ms/op 1.04
Array 250000 clone - spread 848.79 us/op 809.91 us/op 1.05
Array 250000 get(125000) 0.35200 ns/op 0.36100 ns/op 0.98
Array 250000 set(125000) 0.45100 ns/op 0.35600 ns/op 1.27
Array 250000 iterate all - loop 62.122 us/op 58.644 us/op 1.06
phase0 afterProcessEpoch - 250000 vs - 7PWei 42.930 ms/op 41.332 ms/op 1.04
Array.fill - length 1000000 3.8215 ms/op 2.8404 ms/op 1.35
Array push - length 1000000 12.049 ms/op 10.313 ms/op 1.17
Array.get 0.23061 ns/op 0.21772 ns/op 1.06
Uint8Array.get 0.22827 ns/op 0.21929 ns/op 1.04
phase0 beforeProcessEpoch - 250000 vs - 7PWei 15.744 ms/op 13.698 ms/op 1.15
altair processEpoch - mainnet_e81889 254.10 ms/op 226.69 ms/op 1.12
mainnet_e81889 - altair beforeProcessEpoch 17.597 ms/op 15.732 ms/op 1.12
mainnet_e81889 - altair processJustificationAndFinalization 5.7010 us/op 5.4270 us/op 1.05
mainnet_e81889 - altair processInactivityUpdates 3.9611 ms/op 3.6389 ms/op 1.09
mainnet_e81889 - altair processRewardsAndPenalties 19.753 ms/op 19.913 ms/op 0.99
mainnet_e81889 - altair processRegistryUpdates 650.00 ns/op 632.00 ns/op 1.03
mainnet_e81889 - altair processSlashings 218.00 ns/op 164.00 ns/op 1.33
mainnet_e81889 - altair processEth1DataReset 197.00 ns/op 158.00 ns/op 1.25
mainnet_e81889 - altair processEffectiveBalanceUpdates 2.1217 ms/op 1.8079 ms/op 1.17
mainnet_e81889 - altair processSlashingsReset 829.00 ns/op 810.00 ns/op 1.02
mainnet_e81889 - altair processRandaoMixesReset 1.1430 us/op 1.0120 us/op 1.13
mainnet_e81889 - altair processHistoricalRootsUpdate 196.00 ns/op 158.00 ns/op 1.24
mainnet_e81889 - altair processParticipationFlagUpdates 556.00 ns/op 507.00 ns/op 1.10
mainnet_e81889 - altair processSyncCommitteeUpdates 134.00 ns/op 130.00 ns/op 1.03
mainnet_e81889 - altair afterProcessEpoch 44.370 ms/op 41.817 ms/op 1.06
capella processEpoch - mainnet_e217614 893.35 ms/op 800.14 ms/op 1.12
mainnet_e217614 - capella beforeProcessEpoch 77.382 ms/op 55.620 ms/op 1.39
mainnet_e217614 - capella processJustificationAndFinalization 6.1830 us/op 5.2450 us/op 1.18
mainnet_e217614 - capella processInactivityUpdates 20.477 ms/op 15.147 ms/op 1.35
mainnet_e217614 - capella processRewardsAndPenalties 105.70 ms/op 99.233 ms/op 1.07
mainnet_e217614 - capella processRegistryUpdates 6.1610 us/op 5.8310 us/op 1.06
mainnet_e217614 - capella processSlashings 226.00 ns/op 172.00 ns/op 1.31
mainnet_e217614 - capella processEth1DataReset 182.00 ns/op 160.00 ns/op 1.14
mainnet_e217614 - capella processEffectiveBalanceUpdates 19.137 ms/op 9.5744 ms/op 2.00
mainnet_e217614 - capella processSlashingsReset 837.00 ns/op 764.00 ns/op 1.10
mainnet_e217614 - capella processRandaoMixesReset 1.3280 us/op 1.3720 us/op 0.97
mainnet_e217614 - capella processHistoricalRootsUpdate 223.00 ns/op 238.00 ns/op 0.94
mainnet_e217614 - capella processParticipationFlagUpdates 682.00 ns/op 688.00 ns/op 0.99
mainnet_e217614 - capella afterProcessEpoch 117.64 ms/op 113.67 ms/op 1.03
phase0 processEpoch - mainnet_e58758 257.50 ms/op 255.06 ms/op 1.01
mainnet_e58758 - phase0 beforeProcessEpoch 48.479 ms/op 53.744 ms/op 0.90
mainnet_e58758 - phase0 processJustificationAndFinalization 5.5120 us/op 5.8770 us/op 0.94
mainnet_e58758 - phase0 processRewardsAndPenalties 17.989 ms/op 17.222 ms/op 1.04
mainnet_e58758 - phase0 processRegistryUpdates 2.9230 us/op 2.7330 us/op 1.07
mainnet_e58758 - phase0 processSlashings 227.00 ns/op 194.00 ns/op 1.17
mainnet_e58758 - phase0 processEth1DataReset 175.00 ns/op 219.00 ns/op 0.80
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 1.0021 ms/op 1.0867 ms/op 0.92
mainnet_e58758 - phase0 processSlashingsReset 935.00 ns/op 877.00 ns/op 1.07
mainnet_e58758 - phase0 processRandaoMixesReset 1.2370 us/op 1.1050 us/op 1.12
mainnet_e58758 - phase0 processHistoricalRootsUpdate 189.00 ns/op 220.00 ns/op 0.86
mainnet_e58758 - phase0 processParticipationRecordUpdates 923.00 ns/op 897.00 ns/op 1.03
mainnet_e58758 - phase0 afterProcessEpoch 37.397 ms/op 35.812 ms/op 1.04
phase0 processEffectiveBalanceUpdates - 250000 normalcase 1.8801 ms/op 2.5886 ms/op 0.73
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 2.1818 ms/op 1.6162 ms/op 1.35
altair processInactivityUpdates - 250000 normalcase 13.624 ms/op 13.438 ms/op 1.01
altair processInactivityUpdates - 250000 worstcase 14.746 ms/op 16.183 ms/op 0.91
phase0 processRegistryUpdates - 250000 normalcase 4.5390 us/op 5.9520 us/op 0.76
phase0 processRegistryUpdates - 250000 badcase_full_deposits 364.70 us/op 329.40 us/op 1.11
phase0 processRegistryUpdates - 250000 worstcase 0.5 72.078 ms/op 64.873 ms/op 1.11
altair processRewardsAndPenalties - 250000 normalcase 21.741 ms/op 16.894 ms/op 1.29
altair processRewardsAndPenalties - 250000 worstcase 20.965 ms/op 16.014 ms/op 1.31
phase0 getAttestationDeltas - 250000 normalcase 6.9420 ms/op 5.5204 ms/op 1.26
phase0 getAttestationDeltas - 250000 worstcase 6.1178 ms/op 5.9099 ms/op 1.04
phase0 processSlashings - 250000 worstcase 120.71 us/op 105.15 us/op 1.15
altair processSyncCommitteeUpdates - 250000 11.642 ms/op 10.951 ms/op 1.06
BeaconState.hashTreeRoot - No change 202.00 ns/op 259.00 ns/op 0.78
BeaconState.hashTreeRoot - 1 full validator 78.248 us/op 81.431 us/op 0.96
BeaconState.hashTreeRoot - 32 full validator 829.23 us/op 1.1442 ms/op 0.72
BeaconState.hashTreeRoot - 512 full validator 7.7677 ms/op 8.2971 ms/op 0.94
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 105.19 us/op 107.75 us/op 0.98
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 1.5254 ms/op 1.5399 ms/op 0.99
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 19.970 ms/op 18.230 ms/op 1.10
BeaconState.hashTreeRoot - 1 balances 94.652 us/op 69.706 us/op 1.36
BeaconState.hashTreeRoot - 32 balances 742.33 us/op 835.78 us/op 0.89
BeaconState.hashTreeRoot - 512 balances 7.7970 ms/op 6.1257 ms/op 1.27
BeaconState.hashTreeRoot - 250000 balances 127.34 ms/op 149.54 ms/op 0.85
aggregationBits - 2048 els - zipIndexesInBitList 21.530 us/op 20.607 us/op 1.04
regular array get 100000 times 26.580 us/op 24.400 us/op 1.09
wrappedArray get 100000 times 25.775 us/op 24.476 us/op 1.05
arrayWithProxy get 100000 times 18.381 ms/op 16.744 ms/op 1.10
ssz.Root.equals 24.973 ns/op 23.975 ns/op 1.04
byteArrayEquals 24.421 ns/op 23.286 ns/op 1.05
Buffer.compare 10.527 ns/op 10.019 ns/op 1.05
processSlot - 1 slots 10.025 us/op 9.9250 us/op 1.01
processSlot - 32 slots 2.0780 ms/op 2.5192 ms/op 0.82
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 4.3014 ms/op 4.1825 ms/op 1.03
getCommitteeAssignments - req 1 vs - 250000 vc 1.9699 ms/op 1.9065 ms/op 1.03
getCommitteeAssignments - req 100 vs - 250000 vc 3.8362 ms/op 3.7068 ms/op 1.03
getCommitteeAssignments - req 1000 vs - 250000 vc 4.2608 ms/op 3.9556 ms/op 1.08
findModifiedValidators - 10000 modified validators 654.21 ms/op 617.87 ms/op 1.06
findModifiedValidators - 1000 modified validators 581.44 ms/op 599.46 ms/op 0.97
findModifiedValidators - 100 modified validators 325.68 ms/op 502.90 ms/op 0.65
findModifiedValidators - 10 modified validators 238.59 ms/op 372.36 ms/op 0.64
findModifiedValidators - 1 modified validators 200.73 ms/op 330.17 ms/op 0.61
findModifiedValidators - no difference 222.82 ms/op 360.73 ms/op 0.62
migrate state 1500000 validators, 3400 modified, 2000 new 1.0300 s/op 1.4814 s/op 0.70
RootCache.getBlockRootAtSlot - 250000 vs - 7PWei 6.7100 ns/op 7.3000 ns/op 0.92
state getBlockRootAtSlot - 250000 vs - 7PWei 519.16 ns/op 1.1800 us/op 0.44
computeProposerIndex 100000 validators 1.6034 ms/op 1.7455 ms/op 0.92
getNextSyncCommitteeIndices 1000 validators 120.81 ms/op 173.30 ms/op 0.70
getNextSyncCommitteeIndices 10000 validators 127.87 ms/op 177.22 ms/op 0.72
getNextSyncCommitteeIndices 100000 validators 132.04 ms/op 197.07 ms/op 0.67
computeProposers - vc 250000 716.76 us/op 649.60 us/op 1.10
computeEpochShuffling - vc 250000 42.676 ms/op 44.000 ms/op 0.97
getNextSyncCommittee - vc 250000 10.889 ms/op 10.899 ms/op 1.00
nodejs block root to RootHex using toHex 144.88 ns/op 151.15 ns/op 0.96
nodejs block root to RootHex using toRootHex 82.021 ns/op 108.69 ns/op 0.75
nodejs fromHex(blob) 178.67 us/op 241.32 us/op 0.74
nodejs fromHexInto(blob) 754.29 us/op 811.12 us/op 0.93
nodejs block root to RootHex using the deprecated toHexString 569.13 ns/op 555.89 ns/op 1.02
browser block root to RootHex using toHex 376.21 ns/op 359.65 ns/op 1.05
browser block root to RootHex using toRootHex 161.33 ns/op 164.36 ns/op 0.98
browser fromHex(blob) 1.0648 ms/op 1.0854 ms/op 0.98
browser fromHexInto(blob) 787.36 us/op 720.01 us/op 1.09
browser block root to RootHex using the deprecated toHexString 557.44 ns/op 539.11 ns/op 1.03

by benchmarkbot/action

@twoeths twoeths force-pushed the te/query_shuffling_cache branch from 0704cbd to ceaf192 on January 23, 2026 07:09
@twoeths twoeths force-pushed the te/query_shuffling_cache branch from 336e6b9 to 903153a on January 23, 2026 09:53
@twoeths

twoeths commented Jan 23, 2026

found a good fix to populate ShufflingCache after a forky condition in the fork_choice spec tests: 903153a
leaving it here for later reference

@twoeths twoeths marked this pull request as ready for review January 23, 2026 13:09
@twoeths twoeths requested a review from a team as a code owner January 23, 2026 13:09

@nflaig nflaig left a comment


looks good overall, just a few nits

// in forky condition, make sure to populate ShufflingCache with regened state
// otherwise it's failed to get indexed attestations from shuffling cache later

did this always happen or just in certain edge cases?

Suggested change
// otherwise it's failed to get indexed attestations from shuffling cache later
// otherwise it may fail to get indexed attestations from shuffling cache later


it happens in a super forky condition, specifically in the fork_choice spec tests
ShufflingCache is designed to store only 4 epochs, say epochs 4, 5, 6, 7. If we then reorg to epoch 3, we cannot use ShufflingCache to get indexed attestations in the call below (because the shufflings of epoch 3 were already pruned)
to make it safe we can just process regened states right after the getPreState call (there are 2 of them)
that was the scenario in #8743 (comment)
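The eviction behavior described above can be sketched as a fixed-size epoch window; the pruning policy here is an illustrative assumption, not ShufflingCache's exact implementation:

```typescript
// Hypothetical sketch: a cache that keeps only the most recent N epochs.
// A reorg back past the window therefore misses unless the cache is
// repopulated from the regened state.
class EpochWindowCache<T> {
  private items = new Map<number, T>();

  constructor(private readonly maxEpochs: number) {}

  add(epoch: number, value: T): void {
    this.items.set(epoch, value);
    // prune the oldest epochs once the window is exceeded
    const epochs = [...this.items.keys()].sort((a, b) => a - b);
    while (epochs.length > this.maxEpochs) {
      this.items.delete(epochs.shift() as number);
    }
  }

  get(epoch: number): T | undefined {
    return this.items.get(epoch);
  }
}
```

With a window of 4, caching epochs 3 through 7 evicts epoch 3, which is exactly the miss a reorg back to epoch 3 would hit.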

@nflaig

nflaig commented Jan 23, 2026

/gemini review

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the codebase to replace direct calls to EpochCache methods with calls to the ShufflingCache for retrieving shuffling data. This change is motivated by the migration to native state-transition, which eliminates the ability to query EpochCache methods directly. The changes involve modifying several files to use the ShufflingCache for methods like getIndexedAttestation(), getAttestingIndices(), getBeaconCommittee(), and getBeaconCommittees().

@twoeths twoeths merged commit 1067fed into unstable Jan 26, 2026
38 of 44 checks passed
@twoeths twoeths deleted the te/query_shuffling_cache branch January 26, 2026 03:55
codecov bot commented Jan 26, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 52.13%. Comparing base (3932675) to head (5f6af63).
⚠️ Report is 5 commits behind head on unstable.

Additional details and impacted files
@@             Coverage Diff              @@
##           unstable    #8743      +/-   ##
============================================
+ Coverage     52.11%   52.13%   +0.01%     
============================================
  Files           848      848              
  Lines         64252    64178      -74     
  Branches       4735     4730       -5     
============================================
- Hits          33488    33457      -31     
+ Misses        30695    30652      -43     
  Partials         69       69              


Development

Successfully merging this pull request may close these issues.

Use ShufflingCache instead of EpochCache for shuffling data
