
cargo-near build whole workspace with one docker instance #301

Open · frolvanya opened this issue Jan 31, 2025 · 14 comments
Labels: enhancement (New feature or request)

@frolvanya commented Jan 31, 2025

Add support for building reproducible-wasm files for the whole workspace at once. Right now, if we have multiple contracts sharing the same dependencies, we need to rebuild all of them from scratch because every build uses a clean Docker state. It would be great to be able to run cargo near build reproducible-wasm at the root of the workspace: every package that has reproducible-build info in its Cargo.toml would be automatically included in the build, and a single Docker instance would build all contracts at once.

Here's an example where it can drastically improve build time: https://github.com/Near-One/omni-bridge/actions/runs/13065597964/job/36457334849
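A rough sketch of how the requested behavior would contrast with the current per-contract workflow in a GitHub Actions job; the contract directory names are hypothetical, and the final commented-out step is the requested feature, not an existing command:

```yaml
# Illustrative only: contract directories are hypothetical, and the final
# (commented-out) step is the requested feature, not an existing command.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # today: one invocation per contract, each in a fresh Docker container,
      # so shared dependencies are recompiled from scratch every time
      - run: cd contract-a && cargo near build reproducible-wasm
      - run: cd contract-b && cargo near build reproducible-wasm
      # requested: one invocation at the workspace root that builds every
      # member with reproducible-build info in its Cargo.toml inside a
      # single Docker instance
      # - run: cargo near build reproducible-wasm
```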

frolvanya added the enhancement label on Jan 31, 2025
@dj8yfo (Collaborator) commented Feb 1, 2025

@frolvanya would it be simpler to run the builds in parallel in CI with a matrix of multiple executors?

@frolvanya (Author) commented:

@dj8yfo I tried running it in parallel using the -j option in the make command and it reduced the time by 2 minutes, but that's not a real solution. We still waste a ton of resources by recompiling the same packages over and over in multiple Docker containers, whether or not we do it in parallel. It would also improve UX: we currently use a makefile to compile all contracts, and I'd like to replace that workflow with a simple cargo near build at the root of the workspace (just like I do with a regular cargo build when I want to compile all projects together).

@frolvanya (Author) commented:

Are there any technical challenges in implementing the same workflow as cargo build with workspaces?

@dj8yfo (Collaborator) commented Feb 1, 2025

@frolvanya sure there are. Right now this sounds more like a random collection of words plus fantasy than a technical way to solve it. There might be a way to do it while preserving reproducibility, and there might not.
I'd rather not invest time into it right now, making the tool more complex and adding a trillion edge cases to juggle. It's simple to ensure reproducibility today because every build starts from the same clean state, so there's really not much that can go wrong. Adding a shared Docker volume might work well, might not, or might appear to work at first and then raise more problems than it solves.
Why do you need a reproducible build for local dev at all? I thought it was only a CI thing for dev/staging/production deployments. Is it?

@frolvanya (Author) commented Feb 1, 2025

Why do you need a reproducible build for local dev at all?

I think there's a misunderstanding: where did I say that I need a reproducible build for local development? We're using it in a workflow that updates the releases page where we store our contracts.

@dj8yfo (Collaborator) commented Feb 1, 2025

using the -j option in the make command and it reduced the time by 2 minutes, but that's not a real solution

I suggested not running in parallel on one VM, but splitting the work in CI across multiple VMs with a matrix config: running a different make command in each parallel job, rather than running make in parallel within a single job.
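A minimal sketch of that suggestion, assuming hypothetical contract names and make targets; each matrix entry gets its own runner and its own clean Docker build:

```yaml
# Illustrative only: contract names and make targets are hypothetical.
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        contract: [contract-a, contract-b, contract-c]
    steps:
      - uses: actions/checkout@v4
      # one parallel job (and one clean Docker build) per contract
      - run: make build-${{ matrix.contract }}
```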

@frolvanya (Author) commented:

Oh, yes, this will work, but it looks more like a workaround than a solution to me.

@frolvanya (Author) commented Feb 1, 2025

Right now this sounds more like a random collection of words plus fantasy than a technical way to solve it.

Well, I just proposed a feature (@frol suggested that I do this); it's up to the cargo-near team to decide whether this is possible and how to do it. I'm not on the cargo-near team, so I can't give you step-by-step instructions on how to do it.

@dj8yfo (Collaborator) commented Feb 1, 2025

@frolvanya you could throw in an RFC on what NEP-330 1.3.0 would look like, or on how it could alternatively be made to fully comply with 1.2.0.

@dj8yfo (Collaborator) commented Feb 1, 2025

so I can't give you step-by-step instructions on how to do it

k

@dj8yfo (Collaborator) commented Feb 1, 2025

We still waste a ton of resources by recompiling

Without a ready technical solution, it's just speculation that this is a waste of a ton of resources.

@frol (Contributor) commented Feb 16, 2025

Without a ready technical solution, it's just speculation that this is a waste of a ton of resources.

@dj8yfo It is not speculation; it is obvious that incremental compilation is not leveraged if you build contracts individually on the same machine using a fresh Docker container for each one. With 8+ contracts in the project it becomes really noticeable.

I can imagine cargo-near taking care of workspace support and compiling each individual package inside the workspace as if it were built separately, so no changes to the Contract Metadata would be needed. However, the challenge is to think through the potential side effects of one contract's compilation on another's, and I think in the general case we cannot guarantee that building the first contract won't affect the byte-for-byte reproducibility of the second contract's compilation.

Nevertheless, incremental compilation with sccache support, where only the incremental compilation artifacts are cached, would be a nice option to explore: it would not only help address this case but also speed up CI/CD for single-contract projects if the cache layer is persisted. I believe there is little to no risk that incremental compilation caching would affect reproducibility, and the productivity gains would be noticeable.
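A hedged sketch of what wiring sccache into such a workflow might look like, assuming the mozilla-actions/sccache-action action with its GitHub Actions cache backend (the pinned version and the build command are only examples). Whether the cached artifacts can be fed into the reproducible Docker build without affecting byte-for-byte output is exactly the open question above:

```yaml
# Illustrative only: action version and build command are examples.
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      SCCACHE_GHA_ENABLED: "true"   # back sccache with the GitHub Actions cache
      RUSTC_WRAPPER: "sccache"      # route rustc invocations through sccache
    steps:
      - uses: actions/checkout@v4
      - uses: mozilla-actions/sccache-action@v0.0.3
      # a plain host build benefits directly; a reproducible (Docker) build
      # would additionally need the cache forwarded into the container,
      # which cargo-near does not do today
      - run: cargo build --workspace --release
```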

@dj8yfo (Collaborator) commented Feb 17, 2025

@frol, how does sccache work when pushing to, say, a main branch? Does it take artifacts from the build of a previous push to the same main branch?
It's worth checking/confirming that it takes them from a previous build of a push to the same branch.
Reproducibly building in PRs with sccache wouldn't make much sense, as GitHub creates merge branches anyway: https://github.com/near/cargo-near/blob/main/cargo-near/src/commands/new/new-project-template/.github/workflows/deploy-staging.yml#L44

For this build on master in near-sdk, https://github.com/near/near-sdk-rs/actions/runs/13370628945/job/37338126092, I see "No cache found", but that looks like just a lack of proper configuration and should be fixable (looking at the docs of Swatinem/rust-cache).

[Screenshot: CI log showing "No cache found"]
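A hedged sketch of the kind of configuration that might make Swatinem/rust-cache reuse artifacts across pushes to the same branch; the shared-key and save-if inputs come from that action's documentation, while the key value, branch name, and job layout are assumptions:

```yaml
# Illustrative only: the shared-key value and branch name are examples.
steps:
  - uses: actions/checkout@v4
  - uses: Swatinem/rust-cache@v2
    with:
      # one explicit key so different jobs and pushes restore the same cache
      shared-key: "near-sdk-build"
      # only save the cache from pushes to master to avoid PR churn
      save-if: ${{ github.ref == 'refs/heads/master' }}
  - run: cargo test --workspace
```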

Anyway, I'll close this issue then, as the discussion has pivoted away from the issue's title, doing cargo near build --workspace (non-reproducible / reproducible), toward supplying an external cache to reproducible builds with cargo-near.

I had some ideas on how to do cargo near build --workspace, but now I don't see how this was supposed to work with near_workspaces::compile_project in the more complete picture.
Though, yes, it can still be done, just without eliminating the build scripts.
Either the build scripts would have to stay for near_workspaces::compile_project to continue to work,
OR include_bytes! would be used instead, with cargo test depending on a cargo near build --workspace step, as sketched below.
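A hedged sketch of that second alternative as a CI job: the workspace-wide build subcommand is hypothetical, and the tests are assumed to embed the produced .wasm files via include_bytes! instead of calling near_workspaces::compile_project:

```yaml
# Illustrative only: the workspace-wide build subcommand does not exist yet,
# and the tests are assumed to embed the .wasm files via include_bytes!.
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # hypothetical single build producing every contract's .wasm
      - run: cargo near build --workspace
      # tests read the artifacts with include_bytes! at known paths, so they
      # depend on the build step above instead of build scripts that call
      # near_workspaces::compile_project
      - run: cargo test --workspace
```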

@dj8yfo (Collaborator) commented Mar 6, 2025

After the upgrade in near/near-sdk-rs#1323, rust-cache started picking up the cache for branch pushes too (not only for multiple iterations within a single PR), so mentioning that aspect was probably not important.
