[CHIA-1608] Use a cache eviction logic that matches our BlsCache usage pattern #758
Conversation
Force-pushed from 74b8c95 to 537630f
Pull Request Test Coverage Report for Build 11387266782 (Coveralls)
Did you run the benchmark to compare against main? I think that would be an interesting comparison.
Could you also make sure that it's OK to evict on a cache hit? i.e. that we never rely on two or more cache hits?
If there's a reorg we might get more cache hits, but we should probably not optimize for that case. Competing unfinished blocks are another example where we'll have multiple cache hits.
It might make sense to make the eviction explicit. When a block is fully validated and added to the chain it should be safe to evict its keys from the cache.
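A minimal sketch of what that explicit eviction could look like. The names here (`PairingCache`, `evict_block_keys`) are hypothetical, not the real `BlsCache` API; the point is that eviction is a deliberate call made once a block lands on the chain, rather than a side effect of a cache hit:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Hypothetical cache sketch: hash -> cached pairing/validation result,
// with a FIFO insertion order used for capacity-based eviction.
struct PairingCache {
    items: HashMap<u64, bool>,
    insertions_order: VecDeque<u64>,
}

impl PairingCache {
    fn new() -> Self {
        Self { items: HashMap::new(), insertions_order: VecDeque::new() }
    }

    fn insert(&mut self, hash: u64, value: bool) {
        // Only track insertion order for genuinely new keys.
        if self.items.insert(hash, value).is_none() {
            self.insertions_order.push_back(hash);
        }
    }

    // Called explicitly once a block is fully validated and added to
    // the chain: its keys will not be looked up again, so drop them.
    fn evict_block_keys(&mut self, hashes: &[u64]) {
        let doomed: HashSet<&u64> = hashes.iter().collect();
        for h in hashes {
            self.items.remove(h);
        }
        // One pass over the order list, not one scan per evicted key.
        self.insertions_order.retain(|h| !doomed.contains(h));
    }
}
```

This keeps the cache-hit path read-only and moves the eviction decision to the caller, which knows when a block's signatures are truly done with.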
let mut c = self.cache.lock().expect("cache");
for hash in &hashes_to_remove {
    c.items.remove(hash);
    c.insertions_order.retain(|h| h != hash);
this is pretty inefficient. If you remove 2000 keys from a cache with 50'000 entries, you're performing 2000 linear scans. Each one will visit, on average, 25000 entries. So, this is O(n * m) I think.
if you're going to do a linear scan, at least just run a single pass over the cache.
It's a bit scary to do even a single pass as this will slow down as the cache grows in size.
It would probably be good to add a benchmark to ensure this operation is fast, even with a large cache and high cache hit ratio.
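A sketch of the single-pass alternative suggested above (the function name and free-standing signature are illustrative, not the PR's actual code): collect the doomed hashes into a set first, then call `retain` once over the whole insertion-order list, turning O(n * m) into roughly O(n + m):

```rust
use std::collections::HashSet;

// Single pass over the insertion-order list with O(1) set lookups,
// instead of one linear scan per removed key.
fn evict_single_pass(insertions_order: &mut Vec<u64>, hashes_to_remove: &[u64]) {
    let doomed: HashSet<&u64> = hashes_to_remove.iter().collect();
    insertions_order.retain(|h| !doomed.contains(h));
}
```

Building the set costs O(m), and the single `retain` pass visits each of the n entries exactly once, so the cost no longer multiplies with the number of keys being removed.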
aggregate_verify_gt(sig, iter)
let is_valid = aggregate_verify_gt(sig, iter);
if is_valid {
    // Evict cache hit entries on successful validation.
I think it would be good to explain why we want to do this. The fact that we're doing it is already clear from the source code.