`ucc-bench` is a command-line utility designed to benchmark and compare the performance of various quantum compilers, with a particular focus on the `ucc` compiler. It allows users to define benchmark suites consisting of quantum circuits (provided as QASM files) and run them against a configurable set of compilers (e.g., UCC, Qiskit, Cirq, PyTket).
The suite measures key performance indicators such as compilation time and the number of multi-qubit gates in the compiled circuits. Optionally, it can simulate the circuits (both original and compiled versions) under idealized and noisy conditions (using a depolarizing noise model) to evaluate the impact of compilation on execution fidelity.
Results, including system metadata, runner information, compilation metrics, and simulation metrics, are saved in a structured JSON format for easy analysis and comparison across different runs or machines.
This repository houses the code to run the benchmarks, the specification files for the official benchmarks, and the results of those official benchmarks. To be clear, though, this repo is a companion to the main `ucc` repository; that repository is where all ongoing `ucc` development occurs.
[Chart: benchmark progress over time, with new package versions labeled for each compiler.]
At this time, `ucc-bench` is not published as a Python package, as it is very specific to the `ucc` project.
Instead, interested users should clone this repository and set up an environment using `uv` to run or develop via

```bash
$ uv sync
```

See the `uv` docs for information on installing `uv`. Note that this will skip installing optional dependency groups. At this time the only such group is `pyqpanda3`, which is not supported on macOS Intel chips. To install `pyqpanda3` as part of setup, run `uv sync --all-groups` instead.
Benchmark suites are defined as TOML files. The top-level `benchmarks` directory contains the benchmark specifications. Today that includes `compilation_benchmarks.toml`, `layout_benchmarks.toml`, and `simulation_benchmarks.toml`.
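For orientation, a single suite file roughly follows the shape sketched below. Only the elements described later in this README (`suite_version`, `[[benchmarks]]` stanzas, and the `[benchmarks.simulate]` section) are shown; the overall schema, including the exact `measurement` key name, is an assumption for illustration:

```toml
# Sketch of a suite specification; the exact schema may differ.
suite_version = "1.0"

[[benchmarks]]
id = "bell_state"
description = "Bell state preparation"
qasm_file = "circuits/bell_state.qasm"

# Optional simulation settings; the id here is an observable or output
# metric registered in ucc-bench (see the sections below).
[benchmarks.simulate]
measurement = "my-observable-id"
```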
Benchmark suites are run using the `ucc-bench` utility (whose entry point is `ucc_bench.main:main`). To see invocation options, run

```bash
$ uv run ucc-bench -h
```

To run the benchmarks locally, run

```bash
$ uv run ucc-bench <path_to/specification.toml>
```

which by default generates results in the `.local_results` directory and parallelizes using the number of cores available on your machine. If you did not install the optional `pyqpanda3` dependency mentioned above, this run will fail on benchmark specifications that include the `pyqpanda3` compiler.
You can instead restrict a suite to only run a specific compiler and/or a specific benchmark circuit. This is also useful for debugging.

```bash
$ uv run ucc-bench <path_to/specification.toml> --only_compiler <compiler_id> --only_benchmark <benchmark_id>
```
By default, the results are stored as JSON files at the path `{out_dir}/{runner_name}/{suite_id}/{uid_date}/{uid}.json`.
Here, if not specified as a command-line argument, `uid` is a randomly generated UUID and `uid_date` is the current date.
When run as a GitHub Action for the standard results, we expect these to be the Git hash and the Git hash date of the corresponding commit.
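For example, to inspect the most recent local run programmatically, you could load the newest JSON file under the default output directory (a sketch assuming the `.local_results` layout described above):

```python
import json
from pathlib import Path

# Results follow {out_dir}/{runner_name}/{suite_id}/{uid_date}/{uid}.json,
# so four path levels sit below the output directory.
paths = sorted(
    Path(".local_results").glob("*/*/*/*.json"),
    key=lambda p: p.stat().st_mtime,
)
latest = json.loads(paths[-1].read_text())
print(sorted(latest))  # top-level keys: metadata, metrics, etc. (see overview)
```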
To make a new compiler available for benchmarking:

- Run `uv add <package>` to add the corresponding package to the environment.
- Create a new file in `ucc_bench/compilers/`, e.g., `my_compiler.py`.
- Implement a class that inherits from `ucc_bench.compilers.BaseCompiler`, implement the necessary abstract methods, and register the compiler using the decorator:

```python
from ..registry import register
from .base_compiler import BaseCompiler

# Import packages
# YourCircuitType = CircuitType

@register.compiler("my-compiler-id")
class MyCompiler(BaseCompiler[YourCircuitType]):
    @classmethod
    def version(cls) -> str:
        # Return compiler version
        pass

    def compile(self, circuit: YourCircuitType) -> YourCircuitType:
        # Implement the compilation logic
        pass

    def count_multi_qubit_gates(self, circuit: YourCircuitType) -> int:
        # Count multi-qubit gates in your circuit type
        pass
```

- Import your class in the `compilers` module's `src/ucc_bench/compilers/__init__.py` to ensure the `@register` decorator runs to register the class.

You can now use `"my-compiler-id"` in your TOML suite specification.
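For concreteness, here is a minimal sketch of what a registered compiler might look like for a Qiskit-based toolchain. The id `my-qiskit-lite`, the class name, and the choice of `transpile` settings are illustrative assumptions, not part of `ucc-bench`:

```python
import qiskit
from qiskit import QuantumCircuit, transpile

from ..registry import register
from .base_compiler import BaseCompiler


@register.compiler("my-qiskit-lite")
class MyQiskitLiteCompiler(BaseCompiler[QuantumCircuit]):
    @classmethod
    def version(cls) -> str:
        # Report the underlying package version so results are traceable.
        return qiskit.__version__

    def compile(self, circuit: QuantumCircuit) -> QuantumCircuit:
        # Illustrative: run Qiskit's transpiler as the "compilation" step.
        return transpile(circuit, optimization_level=2)

    def count_multi_qubit_gates(self, circuit: QuantumCircuit) -> int:
        # Count instructions acting on two or more qubits.
        return sum(1 for inst in circuit.data if len(inst.qubits) >= 2)
```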
Observables can be used to calculate an expectation value on a circuit before/after compilation, optionally in the presence of noise. Observables are implemented as functions that return an `Operator` based on the number of qubits in the circuit. To add a new observable:

- Define a function that takes the number of qubits and returns a `qiskit.quantum_info.Operator`.
- Register it using the `@register.observable` decorator in a suitable module (e.g., `ucc_bench/simulation/observables.py` or a new file imported there).

```python
from ..registry import register
from qiskit.quantum_info import Operator

@register.observable("my-observable-id")
def create_my_observable(num_qubits: int) -> Operator:
    # Logic to create the Qiskit Operator
    pass
```

You can now use `"my-observable-id"` as the measurement value in the `[benchmarks.simulate]` section of your TOML file.
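As a concrete sketch, an observable measuring Pauli Z on every qubit could be registered as follows; the id `all-z` and the function name are illustrative:

```python
from qiskit.quantum_info import Operator, Pauli

from ..registry import register


@register.observable("all-z")
def create_all_z_observable(num_qubits: int) -> Operator:
    # Tensor product of Pauli Z on every qubit, e.g. "ZZZ" for three qubits.
    return Operator(Pauli("Z" * num_qubits))
```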
Output metrics are more general measures you can calculate on a circuit after compilation and simulation. They take in the raw circuits and noise model, and are responsible for calculating the corresponding simulation metrics. To add a new output metric:

- Define a function that takes the uncompiled Qiskit circuit, the compiled Qiskit circuit, and the noise model, and returns a `SimulationMetrics` object.
- Register it using the `@register.output_metric` decorator in a suitable module (e.g., a new file imported in `ucc_bench/simulation/__init__.py`).

```python
from ..registry import register
from ..results import SimulationMetrics
from qiskit import QuantumCircuit
from qiskit_aer.noise import NoiseModel

@register.output_metric("my-metric-id")
def calculate_my_metric(
    uncompiled_circuit: QuantumCircuit,
    compiled_circuit: QuantumCircuit,
    noise_model: NoiseModel,
) -> SimulationMetrics:
    # Logic to calculate ideal/noisy values for both circuits
    uncompiled_ideal_val = ...
    compiled_ideal_val = ...
    uncompiled_noisy_val = ...
    compiled_noisy_val = ...
    return SimulationMetrics(
        uncompiled_ideal=uncompiled_ideal_val,
        compiled_ideal=compiled_ideal_val,
        uncompiled_noisy=uncompiled_noisy_val,
        compiled_noisy=compiled_noisy_val,
    )
```

You can now use `"my-metric-id"` as the measurement value in the `[benchmarks.simulate]` section of your TOML file.
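As a sketch of what such a metric might compute, the example below estimates the probability of measuring the all-zeros bitstring for each circuit, ideally and under the given noise model, by sampling with `qiskit-aer`. The id `zero-state-prob`, the helper function, and the assumption that circuits arrive without measurements are all illustrative:

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator
from qiskit_aer.noise import NoiseModel

from ..registry import register
from ..results import SimulationMetrics

SHOTS = 4096


def _zero_state_probability(circuit: QuantumCircuit, noise_model=None) -> float:
    # Append measurements on a copy, then sample on the Aer simulator
    # (with or without the noise model) and look up the all-zeros outcome.
    measured = circuit.measure_all(inplace=False)
    backend = AerSimulator(noise_model=noise_model)
    # Transpile to the simulator's basis in case the circuit uses gates
    # Aer does not natively support.
    runnable = transpile(measured, backend)
    counts = backend.run(runnable, shots=SHOTS).result().get_counts()
    return counts.get("0" * circuit.num_qubits, 0) / SHOTS


@register.output_metric("zero-state-prob")
def calculate_zero_state_prob(
    uncompiled_circuit: QuantumCircuit,
    compiled_circuit: QuantumCircuit,
    noise_model: NoiseModel,
) -> SimulationMetrics:
    return SimulationMetrics(
        uncompiled_ideal=_zero_state_probability(uncompiled_circuit),
        compiled_ideal=_zero_state_probability(compiled_circuit),
        uncompiled_noisy=_zero_state_probability(uncompiled_circuit, noise_model),
        compiled_noisy=_zero_state_probability(compiled_circuit, noise_model),
    )
```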
To upgrade the version of a non-UCC compiler:

1. Create a new branch in `ucc-bench`.
2. Run `uv lock --upgrade-package <package>` to upgrade to the latest compatible version. You might need to edit the package constraints in `pyproject.toml` if those would prevent upgrading.
3. Run the tests via `uv run pytest` and ensure everything passes.
4. Open a Pull Request with these changes, and add the label `preview-benchmark-results` to see the impact on performance.
5. After review, merge the pull request.
As discussed in `results/README.md`, we install specific git-hash versions of `ucc` in this repository.
This enables benchmarking pre-release versions of `ucc`.
If for some reason you want to manually upgrade to a specific version of `ucc`, follow the steps above for upgrading non-UCC compilers, but instead of step (2), run `uv add git+https://github.com/unitaryfoundation/ucc@<hash>`, where `<hash>` is the git commit hash in the `ucc` repo you want to install. If developing in a fork of `ucc`, you would instead run `uv add git+https://github.com/<github_username>/ucc@<hash>`.
Updating a benchmark suite means editing the corresponding `.toml` file. You might add a new circuit, or change the expectation value calculated. In either case, it is important to change the `suite_version` in the file to indicate a change occurred, as the results may no longer be comparable to prior results.
Currently, the only place this version is referenced is in the benchmark-result comments posted on PRs, as diffing across suite versions may not be meaningful. But in the future, other reporting workflows may need to handle changes to benchmarks.
As an example, suppose you want to add a new circuit to an existing benchmark suite. You would:

- Add the QASM for that circuit to the corresponding `benchmarks/circuits` directory. If you have a Python script that you used to generate the circuit, add that to the `benchmarks/scripts` directory (see the sketch after this list).
- Edit the corresponding `benchmarks` `.toml` file to add a new stanza for the benchmark, e.g.,

```toml
[[benchmarks]]
id = "my_new_benchmark"
description = "New Benchmark"
qasm_file = "circuits/path_to/new_benchmark.qasm"
```
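If the circuit is generated programmatically, a matching (hypothetical) generator script for `benchmarks/scripts` might look like:

```python
# Hypothetical benchmarks/scripts/generate_new_benchmark.py: writes the
# QASM file referenced by the stanza above.
from qiskit import QuantumCircuit
from qiskit.qasm2 import dumps


def build_circuit() -> QuantumCircuit:
    # Illustrative two-qubit circuit; replace with the real construction.
    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)
    return qc


if __name__ == "__main__":
    with open("circuits/path_to/new_benchmark.qasm", "w") as f:
        f.write(dumps(build_circuit()))
```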
This repository also houses the standard results for UCC development. These are stored in the top-level `results` directory and are generated on a dedicated GitHub runner for consistency between runs. The `ucc-bench` application generally stores benchmark results as JSON files at the path `{out_dir}/{runner_name}/{suite_id}/{uid_date}/{uid}.json`. There are also sibling CSV files showing a summary of performance data. See the `README.md` in that directory for more information on how those results are stored and how they relate to git history.

At this time, only the `compilation` and `simulation` benchmark suites are run on an automated basis. To explore `layout` benchmarks, you should follow the instructions above to run them locally.
`ucc-bench` is distributed under the GNU Affero General Public License, version 3.0 (AGPLv3).
Parts of `ucc` contain code or modified code that is part of Qiskit or Qiskit Benchpress, which are distributed under the Apache 2.0 license.