I was wondering if the Rush team would be interested in having a rush set-cache command contributed, or whether I should pursue this as a custom script or plugin.
This command would populate the cache with whatever it found on disk for a particular command, without actually doing the work. This obviously places full responsibility on the consumer to ensure the work has been done properly and that the build artifacts for the executed command are in the proper state; they'd run the risk of polluting the cache with invalid entries. It's very much a power-user sort of command.
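As a sketch of the intended semantics (every name here is hypothetical and illustrative, not an existing Rush API): set-cache would take an already-computed cache key plus the output folders declared in rush-project.json, and record a cache entry without running any build work.

```typescript
// Hypothetical sketch of the proposed set-cache semantics.
// None of these names are real Rush APIs.

interface IProjectOutputs {
  packageName: string;
  // Output folders as declared in rush-project.json for this operation.
  outputFolderNames: string[];
}

interface ICacheEntry {
  cacheKey: string;
  outputFolderNames: string[];
}

// Associate whatever is on disk in the declared output folders with an
// already-computed cache key. No build work happens here: the caller is
// fully responsible for the outputs being in a valid state.
function setCacheEntry(project: IProjectOutputs, cacheKey: string): ICacheEntry {
  if (project.outputFolderNames.length === 0) {
    throw new Error(`${project.packageName} declares no output folders to cache`);
  }
  return { cacheKey, outputFolderNames: project.outputFolderNames };
}

const entry = setCacheEntry(
  { packageName: 'my-lib', outputFolderNames: ['lib', 'dist'] },
  'build|my-lib|abc123'
);
console.log(entry.cacheKey);
```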
Rationale
We're moving one of our particularly heavy CI builds to use BuildXL, but want to keep using Rush for everything else. The chief motivation is BuildXL's ability to distribute work across multiple agents for processing the build graph.
The issue is connecting the BuildXL and Rush caches. Even though we want to use BuildXL for the CI pipelines, we still want the CI to populate the Rush cache so that local development stays zippy. BuildXL computes its cache in a totally different way, but ultimately both build orchestrators call exactly the same methods in our codebase for the commands. So rather than doing the work twice in two different pipelines, the idea is to have a post-step that runs after BuildXL finishes a single package task (build, test, etc.), checks that it was green, and then populates the Rush cache with whatever was just built. The command outputs are already well-defined in rush-project.json files.
API-wise, it's still early days, but I was thinking something along these lines:
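As a rough stand-in (all names are hypothetical, not a real Rush or BuildXL API), the post-step would pick out the BuildXL tasks that finished green and write Rush cache entries for just those operations:

```typescript
// Hypothetical shape of the post-BuildXL step — illustrative names only.

type TaskResult = 'green' | 'red';

interface IBuildXlTask {
  packageName: string;
  commandName: string; // e.g. 'build' or 'test'
  result: TaskResult;
}

// Returns the operations whose Rush cache entries should be written after a
// BuildXL run: only tasks that finished green are eligible.
function operationsToCache(tasks: IBuildXlTask[]): string[] {
  return tasks
    .filter((t) => t.result === 'green')
    .map((t) => `${t.commandName}:${t.packageName}`);
}

const ops = operationsToCache([
  { packageName: 'pkg-a', commandName: 'build', result: 'green' },
  { packageName: 'pkg-b', commandName: 'test', result: 'red' },
]);
console.log(ops);
```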
I'm not inclined to surface this as a Rush CLI command, but I do believe it would be a good idea to expose cache fetch, cache unpack, cache pack, and cache write as API calls available within a Rush plugin; at that point, if you want a custom Rush plugin that syncs the cache, you can build one.
In order to compute the set of cache keys, Rush needs to build the operation graph, obtain the Git hashes for all relevant source files, and then compute each operation's hash, so the plugin would have to run after the operation hashes have been computed.
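That ordering constraint can be sketched with a tapable-style hook (stand-in names only; this is not the actual Rush plugin API): the orchestrator fires the hook only once every operation has its hash, and a cache-sync plugin taps it to pack and write entries keyed by those hashes.

```typescript
// Self-contained sketch of the ordering constraint. The hook name and all
// shapes are stand-ins, not the real Rush plugin API.

interface IOperation {
  name: string;
  stateHash?: string; // only present once Rush has computed operation hashes
}

type Listener = (operations: IOperation[]) => void;

class AfterHashesComputedHook {
  private listeners: Listener[] = [];

  tap(listener: Listener): void {
    this.listeners.push(listener);
  }

  // The orchestrator calls this only after every operation has a hash;
  // a missing hash indicates the hook fired too early.
  call(operations: IOperation[]): void {
    for (const op of operations) {
      if (op.stateHash === undefined) {
        throw new Error(`hash not yet computed for ${op.name}`);
      }
    }
    for (const listener of this.listeners) listener(operations);
  }
}

const hook = new AfterHashesComputedHook();
const cacheKeys: string[] = [];

hook.tap((ops) => {
  // A cache-sync plugin can now pack/write entries keyed by stateHash.
  for (const op of ops) cacheKeys.push(op.stateHash!);
});

hook.call([{ name: 'build pkg-a', stateHash: 'abc123' }]);
console.log(cacheKeys);
```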
Thoughts + feedback would be appreciated.