This file contains information on the measurements that we have taken when running our Metamath (MM) proof checker on different Metamath files in various zero-knowledge Virtual Machines (zkVMs).
- Benchmarking of $\pi^2$ ZK Metamath checkers
- Our experiment
- How to run our tests
- Our results
- Disclaimers
- One possible optimization
We run our experiment on a large set of Metamath files with various input sizes, where the input size of a Metamath file is the number of symbols (tokens) it contains.
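Metamath tokens are whitespace-delimited, so the input size of a file can be approximated with a simple split after stripping comments. The sketch below is an illustration of this counting convention, not the actual tokenizer used by our checkers:

```python
import re

# Metamath comments are enclosed in $( ... $) and do not nest, so a
# non-greedy regex removal is sufficient for a rough token count.
def mm_token_count(text: str) -> int:
    text = re.sub(r"\$\(.*?\$\)", " ", text, flags=re.DOTALL)
    return len(text.split())

sample = "$( tiny example $) wff $a wff ph $."
print(mm_token_count(sample))  # -> 5
```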
The list of zkVMs that we have considered is the following one:
The implementations of our Metamath proof checker in each of the above zkVMs can be found in the checker/mm folder in the main pi2 repo. They each consist of a guest program which runs on the specific virtual machine and a host program which is our interface to actually running the guest, providing input for it and processing its output. The Metamath files that we are testing can be found in the mm-files folder.
Of the zkVMs we are considering, only Risc0, zkWASM, SP1, and Cairo provide GPU support. Still, we were only able to run Risc0 and zkWASM with GPU support, due to internal setup issues for SP1 and an evolving code base for Cairo.
5 out of the 7 tested zkVMs (Risc0, zkWASM, Jolt, SP1, Nexus) provide a Rust compiler. Therefore, for them we have been able to develop and use a shared library for checking Metamath proofs, and thus the comparison among these 5 should be considered more precise, as they share most of the code. While in Cairo and Lurk we implemented the same program (a Metamath checker), those implementations were written independently of the Rust code base and independently of each other, so the comparison between them and the other 5 should be taken with a grain of salt.
- Cairo: the Lambdaworks prover, main branch, commit a591186 (the current version, while faster, does not yet support Cairo)
- Jolt: `main` branch, commit 3b14242
- Lurk: `main` branch, commit 57c48b9
- Nexus: tag v0.2.3
- Risc0: version 1.0.5
- SP1: `dev` branch, commit 2c78683
- zkWASM: `main` branch, commit f5acf8c
For each of the zkVMs we've been using the default type of certificate offered by that particular zkVM. For example, the defaults are composite certificates for Risc0 and SP1, and succinct certificates for zkWASM. We've experimented with generating succinct certificates for Risc0 and compressed certificates for SP1; the elapsed times to generate the shorter certificates seemed to be ~1.6 times larger than those for composite certificates.
Our full benchmark suite consists of 1225 Metamath files, split into two classes:
- A class generated from standard Metamath databases (1), (2) by dividing them into small lemmas. These tests can be found under `checker/mm/common/metamath-files/benchmark_mm/small` in the Docker image;
- A class of $\pi^2$ proofs of execution of various lengths, which prove the correctness of an ERC20-like program written in IMP. These tests can be found under `checker/mm/common/metamath-files/benchmark_mm/imp`.

If you want to find out more about how we generate mathematical proofs of program executions, check out our proof generation demos and our documentation.
Although we have set up a Docker image for running the tests, we also provide the execution code in the checkers directory.
We provide all our checkers and benchmarks in a publicly available Docker image. Before you begin, make sure you have a working installation of Docker, following the instructions here.
Once done, please run the following command to pull the image:
docker pull ghcr.io/pi-squared-inc/pi2-zk-mm-checkers:latest
To run our benchmarks, first attach to a shell in our Docker container:
docker run -it ghcr.io/pi-squared-inc/pi2-zk-mm-checkers bash
As a sanity check, you can run the following command, which will return all ZK backends available in the container:
$ poetry run check-proof -a
Available backends: ['cairo', 'jolt', 'risc0', 'sp1', 'nexus', 'lurk']
The basic syntax for invoking the benchmarking program is:
poetry run check-proof <zkVM> <path to MM proof/s> [--zk-csv PATH] [--log PATH] -v
- Setting the `zkVM` argument to `all` will invoke all checkers on the Metamath files provided. Otherwise, it should be one of `cairo`, `jolt`, `risc0`, `sp1`, `nexus`, or `lurk`.
- The optional `--zk-csv` parameter will output a benchmarking report to CSV.
- The optional `--log` parameter will save the output of the zkVMs to a file.
Notes:
- The Docker image does not provide GPU-accelerated builds for any of the zkVMs.
- The checkers are timed-out at 15 minutes per Metamath file.
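The 15-minute-per-file budget could be enforced along the lines of the following sketch. This is an illustration only; the real harness lives in the container, and the command shown is a stand-in:

```python
import subprocess
import sys

# Minimal sketch of a per-file time budget: run a checker command and treat
# expiry of the budget as a timeout, which the report records as "TO / OOM".
def run_checker(cmd, timeout_s=900):
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_s)
        return result.returncode
    except subprocess.TimeoutExpired:
        return None  # signals a timeout

# A trivial command that finishes instantly:
print(run_checker([sys.executable, "-c", "pass"], timeout_s=5))  # -> 0
```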
The following command will run the full benchmark suite on all zkVMs, generate a report in `report.csv`, and save the zkVM logs to `zkvms.log`:
poetry run check-proof all ../mm/common/metamath-files/benchmark_mm --zk-csv report.csv --log zkvms.log -v
For a lighter measurement, you can also point the program to the directory containing our small benchmarks:
poetry run check-proof all ../mm/common/metamath-files/benchmark_mm/small --zk-csv report.csv --log zkvms.log -v
You can also provide the path of a single Metamath database:
poetry run check-proof all ../mm/common/metamath-files/benchmark_mm/small/hol/hol_idi.mm --zk-csv report.csv --log zkvms.log -v
By replacing the `all` keyword with the name of a zkVM, you can execute our checker only in the requested VM:
poetry run check-proof <cairo | jolt | risc0 | sp1 | nexus | lurk | wasm> ...
To make the comparison among different Metamath files fairer, we pre-tokenize the input and count the number of resulting tokens as the `Input size` in the columns below.
Note: Nexus is not pictured in the graph above because even on our smallest input execution was quite slow (512 seconds).
We selected eight representative files and present their corresponding statistics.
Measurements were taken on an AMD EPYC 9354 32-Core CPU (64 threads), with 4x NVIDIA GeForce RTX 4090 24GB GPUs and 248GB RAM.
All times below are measured in seconds. "TO / OOM" indicates that the checker either timed out (after 900 seconds) or exceeded system memory limits.
Benchmark | Input size | Proving time | Verification time |
---|---|---|---|
hol_idi.mm | 39 | 0.913 | 0.029 |
hol_wov.mm | 147 | 5.975 | 0.229 |
hol_ax13.mm | 508 | 25.623 | 0.993 |
hol_cbvf.mm | 1786 | 110.230 | 4.321 |
45.erc20transfer_success_tm_0_6.mm | 6249 | 228.487 | 9.102 |
25.erc20transfer_success_tm_0_9.mm | 21332 | TO / OOM | TO / OOM |
3.erc20transfer_success_tm_0.mm | 73862 | TO / OOM | TO / OOM |
9.erc20transfer_success.mm | 258135 | TO / OOM | TO / OOM |
For the smallest file, hol_idi.mm, which consists of 18 lines of Metamath declarations and proofs (39 tokens), the Nexus proving time is 512 seconds; hence we did not generate a table for Nexus.
Benchmark | Input size | Proving time | Verification time |
---|---|---|---|
hol_idi.mm | 39 | 3.170 | 0.174 |
hol_wov.mm | 147 | 5.350 | 0.156 |
hol_ax13.mm | 508 | 10.290 | 0.229 |
hol_cbvf.mm | 1786 | 28.530 | 0.194 |
45.erc20transfer_success_tm_0_6.mm | 6249 | 30.000 | 0.200 |
25.erc20transfer_success_tm_0_9.mm | 21332 | 91.870 | 0.218 |
3.erc20transfer_success_tm_0.mm | 73862 | TO / OOM | TO / OOM |
9.erc20transfer_success.mm | 258135 | TO / OOM | TO / OOM |
Benchmark | Input size | Proving time |
---|---|---|
hol_idi.mm | 39 | 0.924 |
hol_wov.mm | 147 | 5.588 |
hol_ax13.mm | 508 | 18.167 |
hol_cbvf.mm | 1786 | 195.757 |
We encountered out-of-memory issues with the next-largest Metamath file in our benchmarking suite. Moreover, Lurk's memory consumption does not appear to be monotonic in the Metamath input size, unlike the general trend with the other checkers. Thus, we do not include the remaining benchmarks in the table for Lurk.
See this thread on Argument Zulip for further discussion.
Benchmark | Input size | Proving time | Verification time |
---|---|---|---|
hol_idi.mm | 39 | 0.443 | 0.016 |
hol_wov.mm | 147 | 0.553 | 0.017 |
hol_ax13.mm | 508 | 0.769 | 0.018 |
hol_cbvf.mm | 1786 | 2.070 | 0.035 |
45.erc20transfer_success_tm_0_6.mm | 6249 | 2.090 | 0.035 |
25.erc20transfer_success_tm_0_9.mm | 21332 | 3.950 | 0.053 |
3.erc20transfer_success_tm_0.mm | 73862 | 15.990 | 0.225 |
9.erc20transfer_success.mm | 258135 | 63.740 | 0.885 |
Benchmark | Input size | Proving time | Verification time |
---|---|---|---|
hol_idi.mm | 39 | 3.140 | 0.016 |
hol_wov.mm | 147 | 5.770 | 0.017 |
hol_ax13.mm | 508 | 11.220 | 0.018 |
hol_cbvf.mm | 1786 | 33.900 | 0.035 |
45.erc20transfer_success_tm_0_6.mm | 6249 | 33.480 | 0.035 |
25.erc20transfer_success_tm_0_9.mm | 21332 | 66.280 | 0.053 |
3.erc20transfer_success_tm_0.mm | 73862 | 276.440 | 0.225 |
9.erc20transfer_success.mm | 258135 | TO / OOM | TO / OOM |
Benchmark | Input size | Proving time | Verification time |
---|---|---|---|
hol_idi.mm | 39 | 7.260 | 0.203 |
hol_wov.mm | 147 | 12.220 | 0.199 |
hol_ax13.mm | 508 | 17.450 | 0.199 |
hol_cbvf.mm | 1786 | 34.860 | 0.208 |
45.erc20transfer_success_tm_0_6.mm | 6249 | 34.790 | 0.207 |
25.erc20transfer_success_tm_0_9.mm | 21332 | 43.340 | 0.338 |
3.erc20transfer_success_tm_0.mm | 73862 | 133.150 | 0.731 |
9.erc20transfer_success.mm | 258135 | 456.790 | 2.490 |
Benchmark | Input size | Proving time | Verification time |
---|---|---|---|
hol_idi.mm | 39 | 33.080 | 4.059 |
hol_wov.mm | 147 | 33.020 | 4.045 |
hol_ax13.mm | 508 | 34.090 | 4.063 |
hol_cbvf.mm | 1786 | 76.550 | 4.092 |
45.erc20transfer_success_tm_0_6.mm | 6249 | 79.030 | 5.063 |
25.erc20transfer_success_tm_0_9.mm | 21332 | 120.720 | 5.072 |
3.erc20transfer_success_tm_0.mm | 73862 | 351.660 | 8.034 |
9.erc20transfer_success.mm | 258135 | TO / OOM | TO / OOM |
We believe there are several reasons why our code may be improved.
- Some of the zkVMs that we are considering (e.g. Jolt, Nexus) are still under active development and our corresponding proof checker implementations could benefit from future improvements.
- We are building the Cairo prover from an old commit.
- Some of the zkVMs (Cairo, Lurk) use specialized languages, which opens up potential for optimizations unique to those languages. As such, we did not include graphs for these zkVMs in the "Proof file size VS Proof time" section, since all the other zkVMs are Rust-based. We did, however, include their benchmark measurements under the "ZK Backends" section.
All the implementations could benefit from hand-crafted optimizations, since zkVMs are such an active area of research. If you have any ideas for improvements or spot areas that could be optimized, don't hesitate to jump in. We welcome contributions!