
[CHIA-1608] Use a cache eviction logic that matches our BlsCache usage pattern #758

Open · wants to merge 1 commit into main
Conversation

AmineKhaldi
Contributor

No description provided.


Pull Request Test Coverage Report for Build 11387266782

Details

  • 43 of 43 (100.0%) changed or added relevant lines in 1 file are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage increased (+0.03%) to 83.768%

Totals Coverage Status
  • Change from base Build 11295205774: +0.03%
  • Covered Lines: 12835
  • Relevant Lines: 15322

💛 - Coveralls

@AmineKhaldi AmineKhaldi marked this pull request as ready for review October 17, 2024 17:31
@AmineKhaldi AmineKhaldi changed the title CHIA-1608 Use a cache eviction logic that matches our BlsCache usage pattern [CHIA-1608] Use a cache eviction logic that matches our BlsCache usage pattern Oct 21, 2024
Contributor

@arvidn arvidn left a comment


did you run the benchmark to compare against main? I think that would be an interesting comparison.

Could you also make sure that it's OK to evict on a cache hit? i.e. that we never rely on two or more cache hits?

If there's a reorg we might get more cache hits, but we should probably not optimize for that case. Competing unfinished blocks are another example where we'll have multiple cache hits.

It might make sense to make the eviction explicit. When a block is fully validated and added to the chain it should be safe to evict its keys from the cache.

```rust
let mut c = self.cache.lock().expect("cache");
for hash in &hashes_to_remove {
    c.items.remove(hash);
    c.insertions_order.retain(|h| h != hash);
}
```
Contributor


this is pretty inefficient. If you remove 2,000 keys from a cache with 50,000 entries, you're performing 2,000 linear scans. Each one will visit, on average, 25,000 entries. So this is O(n * m), I think.

if you're going to do a linear scan, at least just run a single pass over the cache.
It's a bit scary to do even a single pass as this will slow down as the cache grows in size.

It would probably be good to add a benchmark to ensure this operation is fast, even with a large cache and high cache hit ratio.

```diff
-aggregate_verify_gt(sig, iter)
+let is_valid = aggregate_verify_gt(sig, iter);
+if is_valid {
+    // Evict cache hit entries on successful validation.
```
Contributor


I think it would be good to explain why we want to do this; the fact that we're doing it is already clear from the source code.
