
Conversation

@lslusarczyk (Contributor) commented Oct 17, 2025

Change in CI

Add exit-on-failure to benchmarks CI run

The exit-on-failure option was already available in the benchmark scripts, but it was not used by the CI job that checks correctness. It is used now, so more errors are caught in CI.

Speed up benchmarks CI

CI was running as many iterations as a normal performance measurement, which was pointless when only checking that the scripts work. The iteration count is now limited to the minimum, which reduced benchmarks CI time from 1 hour to 15 minutes.
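A minimal sketch of how a fail-fast flag and a reduced iteration count typically interact in a benchmark driver (illustrative only, not the actual main.py; the spellings --exit-on-failure and --iterations and the run_benchmark helper are assumptions):

```python
# Illustrative sketch only; flag and function names are assumptions,
# not the actual devops/scripts/benchmarks/main.py implementation.
import argparse
import sys


def run_benchmark(name: str, iterations: int) -> bool:
    """Placeholder for a single benchmark run; returns True on success."""
    print(f"running {name} with {iterations} iteration(s)")
    return True


def main() -> int:
    parser = argparse.ArgumentParser()
    parser.add_argument("--exit-on-failure", action="store_true",
                        help="Stop at the first failing benchmark (used in CI).")
    parser.add_argument("--iterations", type=int, default=100,
                        help="CI passes a minimal value; full runs keep the default.")
    args = parser.parse_args()

    failed = False
    for name in ("bench_a", "bench_b"):
        if not run_benchmark(name, args.iterations):
            failed = True
            if args.exit_on_failure:
                # Fail fast so broken scripts are caught immediately in CI.
                return 1
    return 1 if failed else 0


if __name__ == "__main__":
    sys.exit(main())
```

In such a setup CI would pass both flags, while a full performance run would omit them and keep the default iteration count.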

Change in Benchmark scripts

Disabled one failing benchmark, fixed another

Once CI started exiting on any failure, it revealed two errors in the benchmarks. One was easy to fix; the other will be fixed in the future, and its benchmarks are disabled for now.

Removed needs_rebuild argument

Whether a project uses an install dir is now defined in one place, the GitProject constructor, rather than through arguments to configure() and needs_rebuild(), which was error-prone. There is no force-rebuild path any more.

The "--no-rebuild" option of devops/scripts/benchmarks/main.py is removed, and the script now always behaves the way it previously did with that option. This simplifies the code.

Previously, the --no-rebuild option caused dependent projects (compute-benchmarks, llama, etc.) not to be rebuilt if they were already compiled at the proper version (i.e. the commit hash hardcoded in the benchmark scripts matches).

Now this behavior is the default and the only one.

If a suite is enabled by options (--sycl, --ur, etc.), is already compiled, and its tag (e.g. the compute-benchmarks tag) matches the one hardcoded in the benchmark scripts, it is not rebuilt; otherwise it is rebuilt.
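As a rough illustration of that decision (not the actual git_project.py code; the class layout, the use_installdir attribute, and the commit/tag check below are assumptions based on this description):

```python
# Illustrative sketch of the rebuild decision; not the actual
# devops/scripts/benchmarks/git_project.py implementation.
import subprocess
from pathlib import Path


class GitProject:
    def __init__(self, src_dir: Path, build_dir: Path, install_dir: Path,
                 expected_commit: str, use_installdir: bool):
        self.src_dir = src_dir
        self.build_dir = build_dir
        self.install_dir = install_dir
        self.expected_commit = expected_commit  # hash/tag pinned in the scripts
        self.use_installdir = use_installdir

    def _checked_out_commit(self) -> str:
        # Read the commit currently checked out in the source directory.
        return subprocess.check_output(
            ["git", "-C", str(self.src_dir), "rev-parse", "HEAD"], text=True
        ).strip()

    def needs_rebuild(self) -> bool:
        # The directory that proves a previous build exists depends on whether
        # the project installs its artifacts or is used from the build tree.
        artifact_dir = self.install_dir if self.use_installdir else self.build_dir
        if not artifact_dir.exists():
            return True
        # Rebuild only when the checked-out commit no longer matches the pinned one.
        return self._checked_out_commit() != self.expected_commit
```

The point is that a single constructor argument, use_installdir, decides which directory is checked, instead of spreading that choice across configure() and needs_rebuild() arguments.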

In CI, the workdir is simply cleaned before running main.py, so the previous behavior is preserved.

During manual work, to force a rebuild of a dependent repo, remove its binary dir from the workdir.
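For example (the paths below are hypothetical; the actual workdir layout may differ):

```python
# Hypothetical example: force compute-benchmarks to be rebuilt on the next run
# by removing its build directory from the benchmarks workdir.
import shutil
from pathlib import Path

workdir = Path.home() / "bench_workdir"  # assumed workdir location
shutil.rmtree(workdir / "compute-benchmarks-build", ignore_errors=True)
```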

Copilot AI left a comment

Pull Request Overview

This PR simplifies the benchmark rebuild logic by removing the --no-rebuild option and consolidating the decision of whether to use an install directory into the GitProject constructor. Previously, rebuild behavior was controlled through multiple mechanisms (force_rebuild, check_install, check_build parameters), which was error-prone. Now, the system automatically skips rebuilds when a project's compiled artifacts exist and its version matches the expected commit hash.

Key changes:

  • Removed --no-rebuild CLI option and options.rebuild flag
  • Replaced force_rebuild parameter with use_installdir in GitProject constructor
  • Simplified needs_rebuild() to check either install or build directory based on use_installdir
  • Removed install_prefix parameter from configure() method
  • CI now cleans workdir before running to ensure clean builds

Reviewed Changes

Copilot reviewed 12 out of 12 changed files in this pull request and generated 2 comments.

Summary per file:

  • devops/scripts/benchmarks/git_project.py: Refactored GitProject to use the use_installdir flag; simplified the needs_rebuild() and configure() methods
  • devops/scripts/benchmarks/options.py: Removed the rebuild option field
  • devops/scripts/benchmarks/main.py: Removed the --no-rebuild argument and related assignment
  • devops/scripts/benchmarks/utils/compute_runtime.py: Updated calls to remove the check_install parameter and install_prefix argument; added use_installdir=False for compute-runtime
  • devops/scripts/benchmarks/benches/*.py: Updated benchmark classes to use use_installdir=False and removed install_prefix arguments
  • devops/scripts/benchmarks/benches/velocity.py: Removed manual build-directory cleanup logic
  • devops/scripts/benchmarks/README.md: Removed documentation of the --no-rebuild option
  • devops/actions/run-tests/benchmark/action.yml: Added workdir cleanup before running benchmarks

@lslusarczyk requested a review from PatKamin October 17, 2025 13:45
@PatKamin (Contributor) commented:

> Benchmark changes tested in: https://github.com/intel/llvm/actions/runs/18593660864/job/53014846090

Please, run benchmarks with a broader set of suites, not only Compute Benchmarks: preset: Normal

@lslusarczyk (Author) replied:

> Benchmark changes tested in: https://github.com/intel/llvm/actions/runs/18593660864/job/53014846090
>
> Please, run benchmarks with a broader set of suites, not only Compute Benchmarks: preset: Normal

https://github.com/intel/llvm/actions/runs/18595652579

Copilot AI left a comment

Pull Request Overview

Copilot reviewed 15 out of 15 changed files in this pull request and generated 2 comments.

@lslusarczyk (Author) commented:
@intel/dpcpp-devops-reviewers, could you please take a look at the CI changes and approve if all is OK?
The changes are:

  • adding the exit_on_failure option to the benchmarks run,
  • cleaning the benchmarks workdir,
  • disabling the failing flamegraph benchmarks

@lslusarczyk (Author) commented:
The CI failures are not related to my changes (Python benchmark scripts, a new option for the benchmarks CI run).

@lslusarczyk (Author) commented:
@intel/llvm-gatekeepers, could you please merge? The CI failures are unrelated to the Python script changes.

@sarnex (Contributor) commented Oct 24, 2025

Will rerun CI

@sarnex merged commit b2c82ba into intel:sycl Oct 24, 2025
57 of 71 checks passed