With a remote binary cache, the current implementations can spend a notable amount of time on each individual package operation. For example, in my experience the AWS S3 backend typically requires approximately 2-5 seconds per package. A solid chunk of this time is network latency and other delays that could be effectively hidden by performing multiple operations in parallel (tasks/threads).
This would be a fairly simple transformation (or at least as simple as anything involving threading gets): replacing the current for loops with parallel_for_each or parallel_transform in the caching tools/operations that support parallelisation.
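As a rough illustration of the shape such a change could take, here is a minimal sketch of a `parallel_for_each` built on standard threads. This is not vcpkg's actual helper; the function name, thread count, and the `upload_package`-style callback are assumptions, shown only to make the "overlap the network latency" idea concrete: a handful of worker threads pull package indices from a shared atomic counter, so several slow cache operations are in flight at once instead of running serially.

```cpp
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical sketch: apply `action` to every element of `inputs` using a
// small pool of worker threads. Each worker claims the next unprocessed index
// via an atomic counter, so network-bound cache uploads/downloads overlap
// rather than paying their 2-5 second latency one package at a time.
template <typename T, typename F>
void parallel_for_each(std::vector<T>& inputs, F action, std::size_t num_threads = 8)
{
    num_threads = std::min(num_threads, inputs.size());
    if (num_threads <= 1)
    {
        // Small workloads: no benefit from spawning threads.
        for (auto& input : inputs) action(input);
        return;
    }

    std::atomic<std::size_t> next{0};
    std::vector<std::thread> workers;
    workers.reserve(num_threads);
    for (std::size_t i = 0; i < num_threads; ++i)
    {
        workers.emplace_back([&] {
            // Keep claiming indices until the input is exhausted.
            for (std::size_t idx = next.fetch_add(1); idx < inputs.size();
                 idx = next.fetch_add(1))
            {
                action(inputs[idx]);
            }
        });
    }
    for (auto& w : workers) w.join();
}
```

In the binary-cache case, `action` would be the per-package push or fetch currently sitting in the loop body; the main caveat is that the backend code it calls must be thread-safe (or given per-thread state), and per-package progress output needs to be serialized.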
This is an automated message. Per our repo policy, stale issues get closed if there has been no activity in the past 180 days. The issue will be automatically closed in 14 days. If you wish to keep this issue open, please add a new comment.