Improve performance of large updates #106
We've been seeing some slower updates recently. These are updates for some pretty substantial application file systems (approximately 1.8 GB), but we would still expect Dateilager (DL) to be able to update a filesystem of that size in under 30 seconds (if we could get that closer to 10 seconds, or even less, that would be amazing!). It looks kind of like this:
So what can we do in DL? From the above, this is the slow codepath that made up the slow span:
dateilager/pkg/api/fs.go, lines 627 to 647 at 140e3dc
The first line that jumps out to me is:
Are there a lot of objects? I think so, but we should investigate that. If there are, is there any way to batch the DB queries we make, so there's just one big `SELECT` and one bulk `INSERT` instead of one query per object? Given the trace above, I think that's what's making the `Update` RPC so slow. If there aren't a lot of objects, we'll need to investigate a bit more to understand why the update is still so slow.
In terms of reproduction, I'd recommend setting up a JS project with a lot of large dependencies. Perhaps try something like this:
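(The exact packages below are only illustrative; any set of dependencies with a large install footprint will do.)

```sh
# Hypothetical reproduction project: a handful of packages that pull in a
# large node_modules tree.
mkdir dl-update-repro && cd dl-update-repro
npm init -y
npm install electron playwright typescript webpack next aws-sdk
```

The goal is just a node_modules tree with a very large number of files, so that an update pushes a lot of objects through the same codepath.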