Simple LRU garbage collection for interned values #839
Conversation
CodSpeed Performance Report: merging #839 will improve performance by 4.01%.
Force-pushed from b500f7e to b54e365.
Interesting. I suspect this is new, but I'm investigating a similar issue related to interned values in cycles.
That makes me wonder if your PR might help us come up with simpler repros for the fixpoint issues that we're seeing. I should give this a try tomorrow. On the perf side: you could try using the …
I quickly tried the branch in Red Knot, but I also got new panics that we didn't see before. That makes me suspect that some revision tracking might be off in this PR? What I like to do to debug those issues is to compare the tracing output and see where things diverge.
I think the issue is unrelated to durabilities and is instead about interned query arguments. If a query takes an interned struct as an argument, then an older memo may be incorrectly verified after the slot has been reused, because the IDs still match. I think we have to check that the memo and first_interned_at revisions match to avoid this, but I'm not exactly sure where that would be.
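To make that failure mode concrete, here is a minimal sketch of the kind of check being described, assuming a memo records the revision at which it was last verified; Memo, ValueShared, and memo_is_still_valid are hypothetical stand-ins, not salsa's actual types:

// Hypothetical sketch: a memo is only trusted if it was verified after
// the interned slot was last (re)allocated.
#[derive(PartialEq, PartialOrd)]
struct Revision(u64);

struct Memo {
    verified_at: Revision,
}

struct ValueShared {
    first_interned_at: Revision,
}

fn memo_is_still_valid(memo: &Memo, slot: &ValueShared) -> bool {
    // If the slot was re-interned after the memo was last verified, the
    // Id now refers to a different value and the memo must be discarded.
    memo.verified_at >= slot.first_interned_at
}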
I think I understand. What you're saying is that we will need something similar to what we have in tracked structs, where salsa informs other ingredients that the memo was collected so that they can free resources as well if necessary. The second part in tracked structs that protects against this is that we separately track …
Force-pushed from 073a988 to 7f89c28.
Force-pushed from 3ac564d to a232751.
I updated the PR to add read dependencies and clear memos on reuse (I believe we will need to defer reclamation of the memos), but there are still some ty tests failing that I'm unsure about.
Force-pushed from 39c219b to bcb023c.
Force-pushed from e366c0e to 5e693d9.
Hmm, I'm not sure we can remove the read dependency on creation with generational IDs; we still need …
No, I don't think we can remove it from the writing query. We need the dependency to ensure the interned value gets re-created when it was claimed previously.
Force-pushed from 8fc9557 to 54b6bba.
I updated the PR description with the new changes; this should be ready for review.
I believe you mean high durability here? It would be helpful to add some details on why …
I believe we originally discussed a GC where interned values would move from green, to yellow, to red generations. I assume what you implemented now isn't that. Can you give some context as to why?
Is this after removing the …
I left a few questions, but this overall looks great!
src/interned.rs (outdated)
if let Some((_, stamp)) = zalsa_local.active_query() {
    value_shared.durability = std::cmp::max(value_shared.durability, stamp.durability);
}
Given that we can't LRU interned values with a durability other than LOW, do you see a way we could avoid the entire LRU bookkeeping for MEDIUM and HIGH durability interned values?
This is possible, but I was imagining that when we add a way to sweep higher-durability values, we would still use the LRU to look at only the relevant stale slots. That said, traversing a linked list is not very efficient, so maybe the LRU should only store low-durability values.
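For illustration, a minimal sketch of that alternative with assumed types standing in for salsa's internals (the real list is intrusive, not a Vec): only LOW-durability values ever enter the LRU, so eviction never has to skip over MEDIUM or HIGH entries.

#[derive(PartialEq)]
enum Durability { Low, Medium, High }

struct Lru { order: Vec<u32> } // stand-in for the intrusive linked list

impl Lru {
    fn record_read(&mut self, slot: u32, durability: Durability) {
        if durability != Durability::Low {
            // Higher-durability values are never reclaimed, so they are
            // simply not tracked at all.
            return;
        }
        self.order.retain(|&s| s != slot);
        self.order.push(slot); // most recently used at the back
    }
}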
if value_shared.durability != Durability::LOW {
    cursor.move_prev();
    continue;
}
Should we avoid recording medium or high durability values in the LRU in the first place? Or is there no risk that, after a while, all the high durability values accumulate at the end of the LRU, requiring us to skip many values here?
src/interned.rs (outdated)
let value_shared = unsafe { &mut *value.shared.get() };

// The slot was reused.
if value_shared.first_interned_at > revision {
Do we still need this, now that we have generational IDs?
I think we still need it to record the dependency on creation, which uses first_interned_at. Do you mean this if statement specifically?
I think you're right, we don't actually need this anymore.
The reason is that it doesn't really make sense to have a specific capacity for the LRU. Instead, we care more about reusing values that are unlikely to be accessed again. Bounding the LRU means we might end up evicting values that were read in a recent revision. It also makes it a lot more expensive to grow the LRU and shuffle the values around based on their color, especially for a cold run. The LRU implemented in this PR does have some notion of coloring based on revision: values that have already been read in the current revision are not moved in the LRU, which as I understand it is the main benefit of the coloring idea.
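As an illustration of that revision-based "coloring", here is a small sketch with stand-in types (salsa's actual bookkeeping lives in the intrusive list itself): a value is only moved to the front of the LRU if it has not already been read in the current revision.

#[derive(Clone, Copy, PartialEq)]
struct Revision(u64);

struct LruEntry {
    last_read_at: Revision,
}

impl LruEntry {
    fn on_read(&mut self, current: Revision, move_to_front: impl FnOnce()) {
        if self.last_read_at == current {
            // Already touched in this revision: skip the relatively
            // expensive list manipulation entirely.
            return;
        }
        self.last_read_at = current;
        move_to_front();
    }
}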
Yes, sorry. I believe the issue was that we need to make sure that …
I removed the …
Basic implementation of #597.
The garbage collection strategy used here is very simple.
… &mut db. This could be supported with the above.

This is by no means a perfect strategy, but we can always tune it as needed once the infrastructure is in place. We may also want to expose the N active revisions constant as a configuration option in the #[interned] macro (it seems somewhat related to durability).

The LRU list is sharded by hash, very similar to dashmap. There seems to be no performance difference from the previous implementation in multi-threaded ty benchmarks (though there was a significant regression without sharding). There is also no significant performance difference in single-threaded benchmarks thanks to #864, which lets us avoid additional read dependencies.

My main concern is that this change adds 3 word-sized fields to interned values: an Id and two pointers for the intrusive LRU list. This seems unavoidable for garbage collection, but it is still something to note.
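For illustration, a rough sketch of that layout with hypothetical types (not salsa's actual definitions): each value carries its Id plus the two intrusive list pointers, and the lists are sharded by hash much like dashmap shards its map entries.

use std::ptr::NonNull;

struct Id(u32); // stand-in for salsa's Id

struct Value<T> {
    id: Id,
    prev: Option<NonNull<Value<T>>>, // intrusive LRU links
    next: Option<NonNull<Value<T>>>,
    data: T,
}

struct LruShard<T> {
    head: Option<NonNull<Value<T>>>,
    tail: Option<NonNull<Value<T>>>,
}

struct ShardedLru<T> {
    // A value's shard is chosen from its hash, so concurrent reads of
    // unrelated values touch different lists (and their locks).
    shards: Box<[LruShard<T>]>,
}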