IMPORTANT: While much care has been put into making the program robust against malicious inputs, this has not been independently tested yet.
Comparator is a trustworthy judge for Lean proofs. It relies on having an existing Lean installation as well as two additional binaries in `PATH`:

- `landrun`
- `lean4export`

both at a version that is compatible with whatever Lean version your project is targeting.
The comparator is configured through a JSON file:
```json
{
  "challenge_module": "Challenge",
  "solution_module": "Solution",
  "theorem_names": ["todo1"],
  "permitted_axioms": ["propext", "Quot.sound", "Classical.choice"]
}
```
Here, `Challenge.lean` contains at least a theorem named `todo1` that has a `sorry` (or any other proof) as its body, and `Solution.lean` is provided by a party trying to convince you that they have proven `todo1` by writing out the same theorem but with a proper proof attached.
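For concreteness, a minimal sketch of the two files for the sample config above might look like this (the statement itself is just a placeholder):

```lean
-- Challenge.lean (written by you): states the goal, proof left as `sorry`.
theorem todo1 : ∀ n : Nat, n + 0 = n := sorry

-- Solution.lean (written by the untrusted party): the same statement,
-- but with an actual proof attached.
theorem todo1 : ∀ n : Nat, n + 0 = n := fun n => Nat.add_zero n
```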
Given the following assumptions:

- Only the `Solution.lean` file is controlled by a potentially malicious party; all other files, including most crucially `lakefile.toml`/`lakefile.lean` and `Challenge.lean`, are controlled by you.
- You have not previously tried to compile the `Solution` file (as that might compromise your `Challenge` file to make it seem like you are looking for a different proof than you actually are).
- You have the `landrun` and `lean4export` binaries in `PATH`.
- `landrun` works correctly on your system.
- The Lean kernel is correct (in the future we will add support for running different kernels as well to increase trust further).
- You are not running this under a privileged user.
If the following command succeeds:

```
lake env path/to/comparator/binary path/to/config.json
```

then all theorems in `Solution` that are listed in `theorem_names` are guaranteed to:

- Prove the same statement as provided in `Challenge`
- Use no more axioms than listed in `permitted_axioms` (see the example below)
- Be accepted by the Lean kernel
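As an aside, the three axioms in the sample config are the standard axioms that ordinary classical Lean (and Mathlib) proofs rely on, and you can inspect which axioms a proof depends on from within Lean itself; the theorem names below are made up for illustration:

```lean
-- A classical proof pulls in Lean's standard axioms, all of which appear
-- in `permitted_axioms` above.
theorem demo (p : Prop) : p ∨ ¬p := Classical.em p
#print axioms demo  -- prints roughly: 'demo' depends on axioms: [propext, Classical.choice, Quot.sound]

-- A "proof" that smuggles in its own axiom would be rejected by the
-- comparator, since `cheat` is not listed in `permitted_axioms`.
axiom cheat : ∀ n : Nat, n + 0 = n
theorem todo1' : ∀ n : Nat, n + 0 = n := cheat
```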
Note that running `lake exe cache get` to download a Mathlib cache is acceptable before running the comparator, provided you trust the cache not to have been modified so as to, e.g., contain definitions different from the ones you expect in order to make proofs trivial. We generally adopt a policy of not loading `.olean` files ourselves, as they are simply mmapped into our address space and then dereferenced, which makes them a potential point of attack for sophisticated adversaries.
The comparator performs the following steps to ensure these properties:

- Build `Challenge` using `lake` in a `landrun` sandbox that has:
  - read access to the entire file system and write access to `/dev`
  - write access to the `.lake` directory of the project
- Run `lean4export` on the produced `Challenge.olean` in a `landrun` sandbox that has:
  - read access to the entire file system and write access to `/dev`
- Repeat the same sandboxed build and export steps with `Solution`.
- Verify that all declarations used in the statements of all relevant theorems in `Challenge` are the same as in the `Solution` environment (see the sketch after this list).
- Verify that the bodies of all relevant theorems in the `Solution` environment only use axioms listed in `permitted_axioms`.
- Replay the `Solution` environment into the Lean kernel. Doing this within the same process as the comparator should be safe, as the worst thing that can happen at this point is an exploit that makes the kernel accept when it should reject, and that same exploit would also be applicable from within an external process.
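To illustrate what the declaration-comparison step guards against, here is a hypothetical scenario (all names are made up):

```lean
-- Challenge.lean: the statement of `todo1` refers to `isGood`.
def isGood (n : Nat) : Prop := n % 2 = 0
theorem todo1 : isGood 4 := sorry

-- A malicious Solution that does not faithfully reuse the challenge's
-- definitions could ship a trivial `isGood` and "prove" `todo1` against it:
def isGood (n : Nat) : Prop := True
theorem todo1 : isGood 4 := True.intro

-- The comparator exports both environments and checks that every declaration
-- used in the statement of `todo1` (here `isGood`) is identical in both, so
-- this substitution is detected and the run fails.
```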
Note that as `Challenge` is trusted, both the sandbox and the `lean4export` step for `Challenge` are, to the best of our knowledge, not necessary. We still adopt these essentially free measures as additional paranoia in case an adversary comes up with a means of attack anyway.