perf(query-core): Improve performance of mutationCache #8450
Conversation
Thanks for this PR 🙏. I think it adds a bit of complexity, keeping scoped & unscoped mutations separately. It also means `getAll()` will now return them in a different order, which should not matter, but maybe we can avoid this by keeping the mutations in the same Map, but have the key be `string | undefined`? This would put all the unscoped mutations in the `undefined` part, which we would then have to consider in `runNext` and `canRun`.
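To make the suggestion concrete, here is a minimal sketch of that shape: one Map whose key is `string | undefined`, with all unscoped mutations sharing the `undefined` slot. The names (`ScopedMap`, `add`, `canRun`, `getAll`) and the `Mutation` shape are illustrative assumptions, not the actual query-core internals.

```typescript
// Hypothetical sketch of a single Map keyed by `string | undefined`.
// Unscoped mutations all land under the `undefined` key.
type Mutation = { id: number; scope?: string }

class ScopedMap {
  #mutations = new Map<string | undefined, Mutation[]>()

  add(mutation: Mutation): void {
    const key = mutation.scope // undefined for unscoped mutations
    const list = this.#mutations.get(key) ?? []
    list.push(mutation)
    this.#mutations.set(key, list)
  }

  // Grouping by scope is why getAll() ordering would differ from a flat list,
  // even though insertion order within each scope is preserved.
  getAll(): Mutation[] {
    return [...this.#mutations.values()].flat()
  }

  // The `undefined` key needs special-casing: every unscoped mutation can run
  // independently, while scoped mutations queue behind the front of their scope.
  canRun(mutation: Mutation): boolean {
    if (mutation.scope === undefined) return true
    return this.#mutations.get(mutation.scope)?.[0] === mutation
  }
}
```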
Alternatively, what do you think of this PR that just switches to using the `performance` API? https://github.com/TanStack/query/pull/8315/files

```diff
- this.#mutationId = Date.now()
+ this.#mutationId = Math.round(performance.timeOrigin + performance.now())
```
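For context on that swap: `performance.timeOrigin + performance.now()` should track the same Unix-epoch wall clock as `Date.now()`, so the replacement ought to be value-compatible. A quick check (assuming Node 16+; in the browser `performance` is already global):

```typescript
// Compare the two seeding strategies from the diff above.
import { performance } from 'node:perf_hooks'

const viaDate = Date.now()
const viaPerf = Math.round(performance.timeOrigin + performance.now())

// Computed back to back, the two values should agree to within a few ms.
console.log(viaDate, viaPerf)
```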
Would this also fix the Next.js 15 dynamicIO issue?
I stacked another change on this one here: #8451, which technically still returns the mutations in a different order for `getAll`, but it respects insertion order, so later mutations come after earlier ones. While the dynamicIO issue motivated the original investigation, I think the extra internal complexity is warranted for the performance benefits it should provide. I'm not sure how many mutations a typical app would have, but in the proposed implementation you save one array allocation per unscoped mutation. It also eliminates the `flat` call in `getAll`, which has to flatten this unnecessary array for all unscoped mutations. It may be that in a normal program this overhead is negligible, but since you can't control the environment this library runs in, my personal opinion is that the private complexity required to run as lean as possible is worth it.

Using `performance.now` will work around the Next.js issue; another option is to use zero on the server and `Date.now` in the browser. I don't think there is any hydration on the server, so the constraints there are different, and dynamicIO isn't concerned with the browser, where hydration happens. But that's only if I haven't convinced you the perf is worth the complexity added in this and the follow-up PR.
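The "zero on the server, `Date.now` in the browser" alternative could look roughly like this (an illustrative sketch, not the library's code; `isServer` and `initialMutationId` are hypothetical names):

```typescript
// Prerendering on the server never touches the clock (avoiding the dynamicIO
// sync-IO bailout), while the browser still gets a unique wall-clock seed.
const isServer = typeof (globalThis as any).window === 'undefined'

function initialMutationId(): number {
  return isServer ? 0 : Date.now()
}
```

The caveat raised below applies: if mutations can be serialized on the server and sent to the client, a constant server-side seed reintroduces collision risk, which is presumably why this option was not pursued.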
I think #8451 looks good 👍.
You can technically serialize mutations on the server and send them to the client as well, so I'd rather not do that.
Hey friends. I was looking at the implementation of mutationCache while investigating an issue in Next.js related to our new `dynamicIO` experimental flag. With this flag on, during prerendering, sync IO like reading the system clock will bail out of prerendering unless you explicitly cache it. It turns out the mutationCache uses `Date.now()` to seed an ID that avoids scope collisions when hydrating mutations, which is what initiated the original issue in Next.js. The mutationCache implementation isn't wrong, but since it can be made more efficient by not using a derived scope, I put together this PR to implement a perf improvement that also happens to avoid deopting out of prerendering in this case.
Implementation Notes
The mutationCache implementation models all mutations as scoped even when the scope is implicit. With implicit scopes there can only ever be one mutation per scope so the overhead of tracking the scope is unnecessary. Additionally because scopes need a unique value the current implementation bootstraps this value using a random source (Date.now()). While this is practically likely to be fine it is not impossible that there is a scope collision with mutation hydration. Modeling the internal state of the cache without deriving virtual scopes simplifies the implementation in memory and code and avoids the possibility of scope collision by design. This should provide a modest perf bump for mutations that do not use scopes and be neutral for mutations that do use scopes.
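A minimal sketch of the modeling described above, under the assumption that the cache can keep unscoped mutations in a plain Set and reserve the Map-of-queues machinery for explicit scopes (names like `SketchMutationCache` are illustrative, not the actual query-core internals):

```typescript
// Unscoped mutations: no virtual scope, no seeded id, no one-element array.
// Explicitly scoped mutations: keep queue semantics keyed by the user's scope id.
type Mutation = { scope?: string }

class SketchMutationCache {
  #unscoped = new Set<Mutation>()
  #scoped = new Map<string, Mutation[]>()

  add(mutation: Mutation): void {
    if (mutation.scope === undefined) {
      // Implicit scope: exactly one mutation, so no array allocation needed.
      this.#unscoped.add(mutation)
    } else {
      const queue = this.#scoped.get(mutation.scope) ?? []
      queue.push(mutation)
      this.#scoped.set(mutation.scope, queue)
    }
  }

  // An unscoped mutation can always run; a scoped one only when it sits at
  // the front of its scope's queue.
  canRun(mutation: Mutation): boolean {
    if (mutation.scope === undefined) return true
    return this.#scoped.get(mutation.scope)?.[0] === mutation
  }

  // No flatten step over single-element arrays for the unscoped mutations.
  getAll(): Mutation[] {
    return [...this.#unscoped, ...[...this.#scoped.values()].flat()]
  }
}
```

This is where the claimed savings come from: one avoided array allocation per unscoped mutation, plus the removal of the flatten work in `getAll`, while scoped mutations behave exactly as before.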