Zero Knowledge Fold for Nova IVC from HyperNova #330

Open

jkwoods wants to merge 15 commits into main

Conversation

jkwoods commented Oct 2, 2024

An implementation of the zero-knowledge fold from Appendix D.4 of the HyperNova paper.

Changes "from the bottom up":

src/r1cs/mod.rs:

  • new fold_relaxed() methods for RelaxedR1CSInstance and RelaxedR1CSWitness. Previously, we could only fold() a RelaxedR1CSInstance/Witness with a regular, non-relaxed R1CSInstance/Witness; since the random layer is a relaxed instance, we need to be able to fold two RelaxedR1CSInstances/Witnesses together (see the folding sketch after this list).
  • new commit_T_relaxed() method. Same as commit_T(), but for two RelaxedR1CSInstances/Witnesses; takes the error terms into account.
  • new sample_random_instance_witness() method to sample a random RelaxedR1CSInstance/Witness pair. Uses OsRng for now. Based on Construction 5 from Appendix D in the HyperNova paper.
  • new test for sample_random_instance_witness()
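
Roughly, the relaxed-relaxed fold follows the usual rule, now with error terms on both sides: $W \gets W_1 + r \cdot W_2$ and $E \gets E_1 + r \cdot T + r^2 \cdot E_2$ (and likewise for $u$, $x$, and the commitments). The snippet below is only an illustrative sketch of that arithmetic over a toy prime field; ToyRelaxedWitness and the helper names are made up for the example and are not the real Nova types, and the real methods also fold the commitments and (now) the blinds.

```rust
// Toy sketch of the relaxed-relaxed folding arithmetic. Not Nova's API:
// a Goldilocks-style modulus and u64 "field" stand in for real field elements.
const P: u64 = 0xffff_ffff_0000_0001;

fn fadd(a: u64, b: u64) -> u64 { ((a as u128 + b as u128) % P as u128) as u64 }
fn fmul(a: u64, b: u64) -> u64 { ((a as u128 * b as u128) % P as u128) as u64 }

#[derive(Debug, Clone)]
struct ToyRelaxedWitness {
    w: Vec<u64>, // witness vector
    e: Vec<u64>, // error vector (all zeros in a strict, non-relaxed instance)
}

impl ToyRelaxedWitness {
    /// Fold `self` with another *relaxed* witness using challenge `r` and the
    /// cross-term vector `t` (what commit_T_relaxed commits to):
    ///   W <- W1 + r*W2,   E <- E1 + r*T + r^2*E2
    fn fold_relaxed(&self, other: &Self, t: &[u64], r: u64) -> Self {
        let r_sq = fmul(r, r);
        let w = self
            .w
            .iter()
            .zip(&other.w)
            .map(|(w1, w2)| fadd(*w1, fmul(r, *w2)))
            .collect();
        let e = self
            .e
            .iter()
            .zip(t)
            .zip(&other.e)
            .map(|((e1, ti), e2)| fadd(fadd(*e1, fmul(r, *ti)), fmul(r_sq, *e2)))
            .collect();
        Self { w, e }
    }
}

fn main() {
    let w1 = ToyRelaxedWitness { w: vec![1, 2, 3], e: vec![0, 0, 0] };
    let w2 = ToyRelaxedWitness { w: vec![4, 5, 6], e: vec![0, 0, 0] };
    let folded = w1.fold_relaxed(&w2, &[7, 8, 9], 11);
    println!("{:?}", folded);
}
```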

src/nifs.rs:

  • new prove_relaxed() and verify_relaxed() that are similar to their prove() and verify() counterparts, except that they operate on two RelaxedR1CSInstances/Witnesses. They use the fold_relaxed() and commit_T_relaxed() methods from src/r1cs/mod.rs, and a new constant, NUM_FE_FOR_RO_RELAXED, created in src/constants.rs, since we have to absorb more into the random oracle (see the sketch after this list).
  • new methods in the test module for testing relaxed proving/verifying and folding in a random instance.
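
For intuition, here is a toy sketch of the prove_relaxed shape: absorb both relaxed instances and the cross-term commitment into the random oracle, squeeze the challenge, and fold. Everything is a stand-in (DefaultHasher instead of the Poseidon-based RO, u64s instead of field elements and commitments, wrapping arithmetic instead of field arithmetic); it only illustrates why the relaxed case absorbs more elements (the second instance now also carries a comm_E and a u), which is what NUM_FE_FOR_RO_RELAXED accounts for.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for a RelaxedR1CSInstance: both instances in the relaxed-relaxed
// fold carry comm_E and u, so the random oracle has more to absorb.
#[derive(Hash)]
struct ToyRelaxedInstance {
    comm_w: u64, // stand-in for the witness commitment
    comm_e: u64, // stand-in for the error commitment (absent in a strict instance)
    u: u64,
    x: Vec<u64>, // public IO
}

fn toy_prove_relaxed(
    pp_digest: u64,
    u1: &ToyRelaxedInstance,
    u2: &ToyRelaxedInstance,
    comm_t: u64, // commitment to the cross term (from commit_T_relaxed)
) -> (u64, u64) {
    // Absorb the public parameters digest, both relaxed instances, and comm_T.
    let mut ro = DefaultHasher::new();
    pp_digest.hash(&mut ro);
    u1.hash(&mut ro);
    u2.hash(&mut ro);
    comm_t.hash(&mut ro);
    let r = ro.finish(); // the folding challenge

    // The scalar u folds the same way the vectors do: u <- u1 + r * u2.
    let folded_u = u1.u.wrapping_add(r.wrapping_mul(u2.u));
    (r, folded_u)
}

fn main() {
    let u1 = ToyRelaxedInstance { comm_w: 1, comm_e: 2, u: 1, x: vec![3] };
    let u2 = ToyRelaxedInstance { comm_w: 4, comm_e: 5, u: 1, x: vec![6] };
    let (r, folded_u) = toy_prove_relaxed(99, &u1, &u2, 7);
    println!("challenge = {r}, folded u = {folded_u}");
}
```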

src/lib.rs:

  • new testing functions for the random IVC layer

For RecursiveSNARK:

  • new randomizing_fold() method. This is like prove_step(), except that instead of proving a new F iteration, it folds in a random layer. It returns a new RandomLayerRecursive struct with information about the random fold, rather than modifying the RecursiveSNARK struct (see the sketch after this list). randomizing_fold() is public right now (for completeness), but shouldn't be used if the user wants to compress their SNARK, as calling CompressedSNARK's prove_randomizing() will call randomizing_fold() itself. If there is a better way to make this tension less confusing, let me know.
  • new verify_randomizing() method. This method exists for completeness, but probably wouldn't ever be used, as the randomizing layer is the last IVC fold, and will therefore be verified by the CompressedSNARK verifier.
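
A rough sketch of the randomizing_fold flow, with hypothetical toy types (not the real RecursiveSNARK API): the challenge and the sampled pair are fixed placeholders rather than RO and OsRng output, and the NIFS proof data is omitted.

```rust
// Toy illustration only: u64 vectors stand in for instances/witnesses.
#[derive(Debug)]
struct ToyRandomLayerRecursive {
    folded_instance: Vec<u64>, // stand-in for the folded RelaxedR1CSInstance
    folded_witness: Vec<u64>,  // stand-in for the folded RelaxedR1CSWitness
    random_instance: Vec<u64>, // the sampled random instance, needed by the verifier
}

// Stand-in for sample_random_instance_witness(); the real method uses OsRng.
fn toy_sample_random_pair(n: usize) -> (Vec<u64>, Vec<u64>) {
    (
        (0..n as u64).map(|i| 3 * i + 1).collect(),
        (0..n as u64).map(|i| 5 * i + 2).collect(),
    )
}

fn toy_randomizing_fold(
    running_instance: &[u64],
    running_witness: &[u64],
) -> ToyRandomLayerRecursive {
    // 1. Sample a random satisfying relaxed pair.
    let (rand_inst, rand_wit) = toy_sample_random_pair(running_instance.len());

    // 2. Fold it into the running pair with the usual a + r*b rule
    //    (the real challenge comes from the random oracle, not a constant).
    let r: u64 = 7;
    let fold = |xs: &[u64], ys: &[u64]| -> Vec<u64> {
        xs.iter()
            .zip(ys)
            .map(|(a, b)| a.wrapping_add(r.wrapping_mul(*b)))
            .collect()
    };
    let folded_instance = fold(running_instance, &rand_inst);
    let folded_witness = fold(running_witness, &rand_wit);

    // 3. Hand the result back instead of mutating the RecursiveSNARK.
    ToyRandomLayerRecursive { folded_instance, folded_witness, random_instance: rand_inst }
}

fn main() {
    println!("{:#?}", toy_randomizing_fold(&[1, 2, 3], &[4, 5, 6]));
}
```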

For CompressedSNARK:

  • the CompressedSNARK struct has a new optional RandomLayerCompressed field. This holds information about the last random fold.
  • new prove_randomizing() method. The old prove() method is renamed to prove_regular(), and we add a new prove() function that automatically selects the correct path (regular or randomizing) based on a boolean switch (see the dispatch sketch after this list). This boolean switch is the only change that would need to be made to existing code that calls the Nova library. However, if such a change would cause issues, I could restructure this code to be randomizing or not based on a Rust feature flag (or something else).
  • new verify_randomizing() method. Similar to proving, the old verify() is renamed to verify_regular() and a new verify() function automatically selects the correct one, based on whether the CompressedSNARK struct has the RandomLayerCompressed field.
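
A small sketch of the intended dispatch, with a toy struct in place of the real CompressedSNARK generics:

```rust
// Toy stand-in for CompressedSNARK: proving picks the path via a boolean switch,
// verifying picks it based on whether the optional random layer is present.
struct ToyCompressedSnark {
    random_layer: Option<u64>, // stand-in for the optional RandomLayerCompressed field
}

impl ToyCompressedSnark {
    fn prove(randomizing: bool) -> Self {
        if randomizing {
            Self::prove_randomizing()
        } else {
            Self::prove_regular()
        }
    }

    fn prove_regular() -> Self {
        Self { random_layer: None }
    }

    fn prove_randomizing() -> Self {
        // The real method would first call randomizing_fold() on the RecursiveSNARK.
        Self { random_layer: Some(42) }
    }

    fn verify(&self) -> bool {
        match self.random_layer {
            Some(_) => self.verify_randomizing(),
            None => self.verify_regular(),
        }
    }

    fn verify_regular(&self) -> bool {
        true // placeholder
    }

    fn verify_randomizing(&self) -> bool {
        true // placeholder
    }
}

fn main() {
    assert!(ToyCompressedSnark::prove(true).verify());
    assert!(ToyCompressedSnark::prove(false).verify());
}
```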

benches/compressed-snark.rs:

  • new benchmarks for a compressed snark with a randomizing layer

jkwoods (Author) commented Oct 2, 2024

@microsoft-github-policy-service agree company="University of Pennsylvania"

srinathsetty self-requested a review on October 3, 2024.

srinathsetty (Collaborator) commented:

Hi @jkwoods, this is great! Thanks!

I'll take a deeper look at the PR. There's however something important missing: the commitments need to be hiding (e.g., they must include a blind at the time of commitment). Otherwise, we won't get the desired hiding property from this transformation.

jkwoods (Author) commented Oct 3, 2024

@srinathsetty Oh, of course, my bad. I'll get right on that. I assume the blinds also need to be optional/"toggleable".

srinathsetty (Collaborator) commented Oct 3, 2024 via email

jkwoods (Author) commented Oct 4, 2024

@srinathsetty It seems straightforward to me to add blinds to the Pedersen commitments and keep track of how they are folded in the NIFS. But I am a little lost on what happens once the folding is done and we want to compress the IVC proof with a non-zero-knowledge SNARK. I assume we don't want to have blinded commitments in the Spartan SNARK (or the IPA). So is there a point (after folding in the random/ZK layer in the IVC?) where we "de-blind" the commitments in the Relaxed R1CS instance (by revealing $h^r$)?

jkwoods (Author) commented Oct 10, 2024

The code above adds blinding to the (Pedersen) commitments. (Relaxed)R1CSWitnesses save their blinds and fold them in accordance with the HyperNova instructions. CompressedSNARK proving/verifying will now "derandomize" the RelaxedR1CSInstance/Witness pairs after the randomized fold, before the SNARK. Since (Relaxed)R1CSInstances/Witnesses are blinded by default now, regular proving/verifying (without the random fold) and the DirectSNARK proving/verifying also have to derandomize their RelaxedR1CSInstance/Witness pairs.

On the commitment level, even though Pedersen CommitmentKeys are now always set up with the $h$ term, the user has the option to commit() to something or to commit_with_blinding() (a toy sketch of this flow is below). The r1cs level uses commit_with_blinding(), while the rest of the (non-blinded) code uses regular commit(). Note that for HyperKZG, commit_with_blinding() is the same as commit(), and the new derandomize() method doesn't do anything. (Changing the CommitmentTrait required addressing HyperKZG.)

Note that the Pedersen CommitmentKey's $h$ term is stored in an Option, so that reinterpret_commitments_as_ck() can be handled without strange assumptions. Let me know if you had a more elegant implementation in mind for any of this.
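
For reference, here is a toy sketch of the blinding/derandomization picture, with integers modulo a small prime standing in for group elements (the real code uses curve points, and the names here are illustrative, not the actual trait signatures):

```rust
// Toy Pedersen-style key: commit_with_blinding adds blind * h on top of the
// usual multi-scalar combination, and derandomize strips it once the blind
// is revealed.
const Q: u64 = 2_147_483_647; // toy modulus standing in for the group order

struct ToyPedersenKey {
    gens: Vec<u64>,
    h: Option<u64>, // blinding generator, kept in an Option as in the PR
}

impl ToyPedersenKey {
    fn commit(&self, msg: &[u64]) -> u64 {
        msg.iter()
            .zip(&self.gens)
            .fold(0u64, |acc, (m, g)| (acc + (*m) * (*g)) % Q)
    }

    fn commit_with_blinding(&self, msg: &[u64], blind: u64) -> u64 {
        let h = self.h.expect("key was set up with a blinding generator");
        (self.commit(msg) + blind * h) % Q
    }

    /// Remove the blinding term given the revealed blind ("derandomize").
    fn derandomize(&self, comm: u64, blind: u64) -> u64 {
        let h = self.h.expect("key was set up with a blinding generator");
        (comm + Q - (blind * h) % Q) % Q
    }
}

fn main() {
    let ck = ToyPedersenKey { gens: vec![3, 5, 7], h: Some(11) };
    let (msg, blind) = ([2u64, 4, 6], 9u64);
    let blinded = ck.commit_with_blinding(&msg, blind);
    assert_eq!(ck.derandomize(blinded, blind), ck.commit(&msg));
}
```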

Two things still to deal with:

  1. extra hashing - I haven't done this yet
  2. test_pp_digest now fails. I suspect this is because I added the $h$ field to the Pedersen CommitmentKeys. Is there a "safe" way to recalculate those expected digest values, or is it as straightforward as just running the new PublicParams (with new CommitmentKeys) through the DigestComputer and getting the new result?
