Microbenchmarks, Dashboard and Jenkins
Contents of this page:
- Overview
- Current design of microbenchmark execution on Jenkins
- Desired capabilities
- Proposed design
Overview
Jenkins runs the suite of Terrier microbenchmarks on each pull request. The result of each benchmark is compared with the historical performance of the Terrier repository. (Historical performance data is created by running the microbenchmarks in the terrier-nightly project, which is triggered by a cron job.) If PR performance deviates excessively from the historical data, the PR is marked as "failed".
Current design of microbenchmark execution on Jenkins
- Each microbenchmark generates its results as a JSON file.
- Jenkins provides a web API that allows access to the projects and their build results, including the set of files created by the microbenchmarks (see the sketch after this list).
- Jenkins runs the suite of tests on the Terrier repository nightly, via a cron job.
- Jenkins runs the suite of tests on pull requests (including microbenchmarks) any time a PR is created or changed. The benchmark run uses the Jenkins web API to compare the results of the current PR with historical data.
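As a rough illustration of how the results can be reached through the Jenkins JSON API, here is a minimal sketch. The server URL, job name, and the assumption that results are archived as `*.json` artifacts are illustrative, not the actual Terrier Jenkins configuration.

```python
"""Sketch: read microbenchmark result artifacts through the Jenkins JSON API."""
import json
import requests

JENKINS_URL = "http://jenkins.db.cs.cmu.edu:8080"  # assumed server URL
JOB_PATH = "job/terrier-nightly"                   # assumed job name


def fetch_json(url):
    """GET a Jenkins JSON API endpoint and return the decoded payload."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()


def benchmark_artifacts(build_number):
    """Yield (file name, parsed JSON) for each benchmark result artifact of a build."""
    build = fetch_json(f"{JENKINS_URL}/{JOB_PATH}/{build_number}/api/json")
    for artifact in build.get("artifacts", []):
        # Assumption: microbenchmark results are archived as *.json artifacts.
        if artifact["fileName"].endswith(".json"):
            url = f"{JENKINS_URL}/{JOB_PATH}/{build_number}/artifact/{artifact['relativePath']}"
            resp = requests.get(url, timeout=30)
            resp.raise_for_status()
            yield artifact["fileName"], resp.json()


if __name__ == "__main__":
    job = fetch_json(f"{JENKINS_URL}/{JOB_PATH}/api/json")
    latest = job["lastCompletedBuild"]["number"]
    for name, result in benchmark_artifacts(latest):
        print(name, json.dumps(result)[:120])
```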
The deficiencies with the current approach are:
- The amount of history is limited. Jenkins deletes information about jobs older than a set age; this is unavoidable because job logs are large. That is fine for pass/fail determination, but it would be nice for the dashboard to have long-term history available for viewing.
- Reading and processing the data for pass/fail determination is complicated (hard to understand, hard to modify, and not sufficiently flexible).
Desired capabilities
- Flexible and configurable pass/fail determination per benchmark: allow selection of the amount of history to use, whether to include builds that failed (a build might have run the microbenchmarks successfully but failed some other check), and selectable + and - thresholds. It should also be possible to disable the pass/fail check entirely for a specific benchmark. All of this should be configurable via a UI, without having to change code or scripts (see the configuration sketch after this list).
- Keep more history.
- Be able to handle other metrics, e.g. latency measurements.
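The following sketch shows one hypothetical shape such per-benchmark configuration could take; the field names and values are illustrative assumptions, not an existing Terrier format.

```python
# Hypothetical per-benchmark pass/fail configuration (illustrative only).
PASS_FAIL_CONFIG = {
    "tpcc_benchmark": {
        "enabled": True,                 # set False to skip the pass/fail check entirely
        "history_builds": 30,            # how many nightly builds to average over
        "include_failed_builds": False,  # count builds whose other checks failed
        "min_ratio": 0.90,               # fail if throughput drops below 90% of history
        "max_ratio": 1.50,               # flag suspicious gains above 150% of history
    },
}
```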
Proposed design
Macro components:
- Jenkins
- Data collection from Jenkins, per PR and for nightly builds.
- Database (Postgres) for storing the Jenkins data.
- Dashboard (web based) for (i) viewing data and graphs and (ii) adjusting configurable parameters.
- PR pass/fail determination
Jenkins would continue to run as it does today. Data collection could be done via push or pull:
- If data collection is via push, then scripts would be added to take the data (after execution of a pull request job or a nightly job) and push it to the database. Push can easily send just the current job's data, but it requires changes to the Jenkins pipelines (so problems may be harder to debug). It has not been determined whether a push carries enough context to correctly place the data.
- If pull is desired, then a cron job would be set up to scan Jenkins and pull any new data. This can be completely external to Jenkins, so it is easy to do. However, pull needs to ensure that data is inserted only once. When pulling, the hierarchy is known, i.e. the proper context for the retrieved data (a sketch of a pull-based collector follows this list).
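A minimal sketch of such a pull-based collector is below. The connection string, job URL, and table layout are assumptions for illustration (the table names match the schema sketch further down).

```python
"""Sketch: scan Jenkins for builds not yet in Postgres and insert each exactly once."""
import psycopg2
import requests

JENKINS_JOB_URL = "http://jenkins.db.cs.cmu.edu:8080/job/terrier-nightly"  # assumed URL
DSN = "dbname=benchmarks user=collector"                                   # assumed DSN


def main():
    conn = psycopg2.connect(DSN)
    with conn, conn.cursor() as cur:
        # Highest build number already stored; anything newer gets pulled.
        cur.execute("SELECT COALESCE(MAX(build_number), 0) FROM nightly_builds")
        last_seen = cur.fetchone()[0]

        job = requests.get(f"{JENKINS_JOB_URL}/api/json", timeout=30).json()
        for build in job.get("builds", []):
            number = build["number"]
            if number <= last_seen:
                continue  # already collected: ensures insert-once semantics
            detail = requests.get(f"{JENKINS_JOB_URL}/{number}/api/json", timeout=30).json()
            # When pulling we walk the hierarchy ourselves (job -> build -> artifacts),
            # so the proper context for every inserted row is known.
            cur.execute(
                "INSERT INTO nightly_builds (build_number, build_result, build_timestamp)"
                " VALUES (%s, %s, to_timestamp(%s / 1000.0))"
                " ON CONFLICT (build_number) DO NOTHING",
                (number, detail["result"], detail["timestamp"]),
            )
            # Per-benchmark rows parsed from the JSON artifacts would be inserted
            # here in the same way (omitted for brevity).
    conn.close()


if __name__ == "__main__":
    main()
```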
Database, with schemas for the following (see the schema sketch after this list):
- PR results
- Nightly build results
- Configurable parameters for pass / fail determination
- Optionally, overall PR results
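Here is one illustrative way these schemas could look; the table and column names are assumptions, not a finalized design.

```python
"""Sketch: create the proposed Postgres tables (illustrative schema only)."""
import psycopg2

SCHEMA = """
CREATE TABLE IF NOT EXISTS nightly_builds (
    build_number     INTEGER PRIMARY KEY,
    build_result     TEXT,              -- Jenkins result, e.g. SUCCESS / FAILURE
    build_timestamp  TIMESTAMPTZ
);

CREATE TABLE IF NOT EXISTS nightly_results (
    build_number     INTEGER REFERENCES nightly_builds,
    benchmark_name   TEXT,
    throughput       DOUBLE PRECISION,  -- items/sec reported by the benchmark
    PRIMARY KEY (build_number, benchmark_name)
);

CREATE TABLE IF NOT EXISTS pr_results (
    pr_number        INTEGER,
    build_number     INTEGER,
    benchmark_name   TEXT,
    throughput       DOUBLE PRECISION,
    PRIMARY KEY (pr_number, build_number, benchmark_name)
);

CREATE TABLE IF NOT EXISTS pass_fail_config (
    benchmark_name        TEXT PRIMARY KEY,
    enabled               BOOLEAN DEFAULT TRUE,
    history_builds        INTEGER DEFAULT 30,
    include_failed_builds BOOLEAN DEFAULT FALSE,
    min_ratio             DOUBLE PRECISION DEFAULT 0.9,
    max_ratio             DOUBLE PRECISION DEFAULT 1.5
);
"""

with psycopg2.connect("dbname=benchmarks user=collector") as conn:  # assumed DSN
    with conn.cursor() as cur:
        cur.execute(SCHEMA)
```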
The dashboard would take data from the database and provide various presentations: tables, performance graphs, etc.
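As an example of one such presentation, the sketch below plots nightly throughput for a single benchmark from the illustrative tables above; the DSN and benchmark name are hypothetical.

```python
"""Sketch: one dashboard view, a throughput-over-time graph for one benchmark."""
import matplotlib.pyplot as plt
import psycopg2

with psycopg2.connect("dbname=benchmarks user=dashboard") as conn:  # assumed DSN
    with conn.cursor() as cur:
        cur.execute(
            "SELECT b.build_timestamp, r.throughput"
            "  FROM nightly_results r JOIN nightly_builds b USING (build_number)"
            " WHERE r.benchmark_name = %s"
            " ORDER BY b.build_timestamp",
            ("tpcc_benchmark",),  # hypothetical benchmark name
        )
        rows = cur.fetchall()

timestamps, throughput = zip(*rows)
plt.plot(timestamps, throughput, marker="o")
plt.title("tpcc_benchmark nightly throughput")
plt.ylabel("items/sec")
plt.savefig("tpcc_trend.png")
```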
A pass/fail utility would be created. It would read data from the database and make the determination. Data selection would be much easier, cleaner, and more flexible (via SQL). Configurable parameters, also stored in the database, would allow administrators to easily make changes when exceptions are needed or when policies change (see the sketch below).
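A minimal sketch of that determination, assuming the illustrative tables and configuration columns above, where thresholds are ratios against the historical mean:

```python
"""Sketch: pass/fail check for one benchmark of a PR build, driven by pass_fail_config."""
import psycopg2


def check_benchmark(cur, benchmark, pr_throughput):
    """Return (passed, message) for one benchmark result of a PR build."""
    cur.execute(
        "SELECT enabled, history_builds, include_failed_builds, min_ratio, max_ratio"
        "  FROM pass_fail_config WHERE benchmark_name = %s",
        (benchmark,),
    )
    row = cur.fetchone()
    if row is None or not row[0]:
        return True, f"{benchmark}: check disabled"
    _, history_builds, include_failed, min_ratio, max_ratio = row

    # Average the most recent nightly runs; optionally skip builds whose
    # overall Jenkins result was not SUCCESS.
    cur.execute(
        "SELECT AVG(h.throughput) FROM ("
        "   SELECT r.throughput FROM nightly_results r"
        "     JOIN nightly_builds b USING (build_number)"
        "    WHERE r.benchmark_name = %s"
        "      AND (%s OR b.build_result = 'SUCCESS')"
        "    ORDER BY b.build_number DESC LIMIT %s) h",
        (benchmark, include_failed, history_builds),
    )
    baseline = cur.fetchone()[0]
    if baseline is None:
        return True, f"{benchmark}: no history, skipping"

    ratio = pr_throughput / baseline
    if ratio < min_ratio or ratio > max_ratio:
        return False, f"{benchmark}: {ratio:.2f}x of baseline (allowed {min_ratio}-{max_ratio})"
    return True, f"{benchmark}: {ratio:.2f}x of baseline"
```

The utility would iterate over the PR's rows in pr_results, call this check for each benchmark, and mark the PR failed if any check returns False.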