feat(crypto): CRP-2648 CRP-2600 VetKD API and testing improvements #3283
base: master
Conversation
```rust
@@ -1908,6 +1908,18 @@ declare_muln_vartime_affine_impl_for!(G1Projective, G1Affine);
impl_debug_using_serialize_for!(G1Affine);
impl_debug_using_serialize_for!(G1Projective);

impl G1Affine {
    /// See draft-irtf-cfrg-bls-signature-01 §4.2.2 for details on BLS augmented signatures
    pub fn augmented_hash(pk: &G2Affine, data: &[u8]) -> Self {
```
It might make sense to add tests, given that this is new functionality here in the BLS type crate.
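For reference, the "message augmentation" scheme of draft-irtf-cfrg-bls-signature-01 hashes the concatenation of the serialized public key and the message, binding each signature to one key. A minimal std-only sketch of that input construction (the function name and byte-level details here are illustrative, not the crate's actual code; serialization and hash-to-curve are stubbed out):

```rust
// Sketch: in the augmented scheme, the signer hashes pk || message
// instead of the message alone, so the same message signed under two
// different keys produces two different hash inputs.
fn augmented_message(pk_bytes: &[u8], data: &[u8]) -> Vec<u8> {
    let mut m = Vec::with_capacity(pk_bytes.len() + data.len());
    m.extend_from_slice(pk_bytes); // public key comes first
    m.extend_from_slice(data);
    m
}
```

A test could assert exactly this key-binding property: equal data under distinct keys must yield distinct augmented inputs.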
```rust
///
/// If the result is Ok(), the returned key is guaranteed to be valid.
/// Returns the combined key, if it is valid.
/// Does not take the nodes individual public keys as input.
```
nit: nodes -> nodes' (this typo comes from my own commit 😄 )
```rust
/// Filters the valid shares from the given ones, and combines them into a valid key, if possible.
/// The returned key is guaranteed to be valid.
/// Returns an error if not sufficient shares are given or if not sufficient valid shares are given.
/// Takes also the node's individual public keys as input, which means the individual public keys
```
nit: node's -> nodes' (again, from my own commit)
```rust
// This check is mostly to catch the case where the reconstruction_threshold was
// somehow incorrect.
if c.is_valid(master_pk, derivation_path, did, tpk) {
    Ok(c)
} else {
    Err(EncryptedKeyCombinationError::InvalidShares)
}
```
I'm wondering what could cause the `reconstruction_threshold` to be incorrect. If I understand correctly, this would be an implementation error in our code, right?

According to the spec for `combine_encrypted_key_shares`, the reconstruction threshold passed to `EncryptedKey::combine_valid_shares` is

```rust
let reconstruction_threshold = transcript_data_from_store.public_coefficients().coefficients.len();
```

So, this will come from the transcript that Consensus loads. The question is whether this is good enough to skip the validity check, and thus save two multi-pairings.

But thinking about it again, we are here in `combine_valid_shares`, and this is only called if the optimistic combination with `combine_all` fails (see spec). So `combine_valid_shares` should be called very infrequently, and maybe the additional validity check is actually somewhat irrelevant.
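For context, the optimistic-then-fallback flow described in the spec can be sketched as follows. Everything here is a placeholder standing in for the real crate types; in particular the boolean "validity" flag abbreviates the actual per-share verification against the nodes' public keys:

```rust
// Schematic sketch: try the cheap optimistic path first, and only fall
// back to per-share filtering when it fails. `Share` and `Key` are toy
// stand-ins, not the crate's actual types.
#[derive(Debug, PartialEq)]
enum CombineError {
    InsufficientShares,
    InvalidShares,
}

type Share = (u32, bool); // (node index, toy "is this share valid" flag)
type Key = Vec<u32>;

// Optimistic path: assumes every given share is valid; fails if any is not.
fn combine_all(shares: &[Share], threshold: usize) -> Result<Key, CombineError> {
    if shares.len() < threshold {
        return Err(CombineError::InsufficientShares);
    }
    if shares.iter().any(|&(_, valid)| !valid) {
        return Err(CombineError::InvalidShares);
    }
    Ok(shares.iter().map(|&(idx, _)| idx).collect())
}

// Fallback path: filter out invalid shares first, then combine a threshold of them.
fn combine_valid_shares(shares: &[Share], threshold: usize) -> Result<Key, CombineError> {
    let valid: Vec<Share> = shares.iter().copied().filter(|&(_, v)| v).collect();
    if valid.len() < threshold {
        return Err(CombineError::InsufficientShares);
    }
    combine_all(&valid[..threshold], threshold)
}

fn combine(shares: &[Share], threshold: usize) -> Result<Key, CombineError> {
    combine_all(shares, threshold).or_else(|_| combine_valid_shares(shares, threshold))
}
```

This illustrates why the fallback is rare: it only runs when at least one share in the optimistic set was bad.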
```rust
if c.is_valid(master_pk, derivation_path, did, tpk) {
    Ok(c)
} else {
    Err(EncryptedKeyCombinationError::InvalidShares)
```
Would a dedicated error variant make sense here, so that we can distinguish between actual invalid shares, and the case where the reconstruction threshold was incorrect?
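One way this could look, with `CombinedKeyInvalid` as a purely illustrative variant name (the existing variants are the ones mentioned elsewhere in this review; the new one and the helper are suggestions, not the crate's actual API):

```rust
// Sketch of the error enum with a dedicated variant for the final
// validity check, so callers can distinguish bad shares from a
// threshold/transcript inconsistency. Names are illustrative.
#[derive(Debug, PartialEq)]
enum EncryptedKeyCombinationError {
    DuplicateNodeIndex,
    InsufficientShares,
    InvalidShares,
    // New: the combined key failed the final validity check, which points
    // at an inconsistent reconstruction_threshold rather than bad shares.
    CombinedKeyInvalid,
}

fn describe(e: &EncryptedKeyCombinationError) -> &'static str {
    match e {
        EncryptedKeyCombinationError::DuplicateNodeIndex => "duplicate node index",
        EncryptedKeyCombinationError::InsufficientShares => "not enough shares",
        EncryptedKeyCombinationError::InvalidShares => "some shares were invalid",
        EncryptedKeyCombinationError::CombinedKeyInvalid => {
            "combined key failed validation; check reconstruction_threshold"
        }
    }
}
```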
```rust
}
}

fn random_subset<R: rand::Rng, T: Clone>(rng: &mut R, items: &[T], include: usize) -> Vec<T> {
```
Could we use `SliceRandom::choose_multiple` here instead of implementing this by hand, e.g., like this:

```rust
assert!(include <= items.len());
let result: Vec<_> = items.choose_multiple(rng, include).cloned().collect();
assert_eq!(result.len(), include);
result
```
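For reference, sampling `include` distinct items amounts to a partial Fisher-Yates shuffle. A std-only sketch of that selection logic, with a toy LCG standing in for `rand::Rng` so the example is self-contained (the LCG and function are illustrative, not the crate's code):

```rust
// Std-only sketch: sample `include` distinct items via a partial
// Fisher-Yates shuffle. A toy LCG replaces rand::Rng here; quality of
// the generator is irrelevant to the selection logic being shown.
struct Lcg(u64);

impl Lcg {
    fn next_below(&mut self, bound: usize) -> usize {
        // Constants from Knuth's MMIX LCG.
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        ((self.0 >> 33) as usize) % bound
    }
}

fn random_subset<T: Clone>(rng: &mut Lcg, items: &[T], include: usize) -> Vec<T> {
    assert!(include <= items.len());
    let mut pool: Vec<T> = items.to_vec();
    // Move a randomly chosen remaining element into position i,
    // then never touch positions 0..=i again.
    for i in 0..include {
        let j = i + rng.next_below(pool.len() - i);
        pool.swap(i, j);
    }
    pool.truncate(include);
    pool
}
```

`choose_multiple` encapsulates equivalent sampling-without-replacement behavior, which is why it can replace the hand-rolled helper.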
```rust
let eks_bytes = eks.serialize();
let eks2 = EncryptedKeyShare::deserialize(eks_bytes).unwrap();
assert_eq!(eks, eks2);
let other_did = rng.gen::<[u8; 24]>();
```
nit: maybe add `assert_ne!(proto.did, other_did);`
```rust
assert!(
    rec_threshold >= threshold,
    "Recovery only works with sufficient quorum"
);
```
How about also checking that `ek` is indeed valid, rather than relying on the fact that `combine_all` did this check:

```rust
assert!(ek.is_valid(
    &setup.master_pk,
    &proto.derivation_path,
    &proto.did,
    &setup.transport_pk
));
```

`EncryptedKey::is_valid` is also not used anywhere else currently.
```rust
let mut shares = random_subset(rng, &node_eks, rec_threshold - 1);
shares.append(&mut random_subset(rng, &node_eks_wrong_did, 1));
shares.shuffle(rng);
assert!(proto.combine_all(&shares).is_err());
```
Because both `node_eks` and `node_eks_wrong_did` start from index 0, we actually see `EncryptedKeyCombinationError::DuplicateNodeIndex` quite often? These are not the cases we are interested in here, I guess, so should we ensure we don't hit them?

Also, for a while, this combination (rightfully) fails with `InsufficientShares`: it might make sense to ensure that this happens for the right cases.

Somewhat related: should we add tests ensuring that `combine_*` fails with `EncryptedKeyCombinationError::DuplicateNodeIndex` on duplicate node indices?
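A std-only sketch of the kind of duplicate-index check presumably behind `DuplicateNodeIndex` (the function and the bare index representation are placeholders, not the crate's actual code):

```rust
use std::collections::HashSet;

// Each share is tagged with the index of the node that produced it; a
// combination must never use two shares from the same node. This scans
// the indices and reports the first duplicate.
fn check_no_duplicate_node_indices(indices: &[u32]) -> Result<(), u32> {
    let mut seen = HashSet::new();
    for &idx in indices {
        if !seen.insert(idx) {
            return Err(idx); // idx appeared twice
        }
    }
    Ok(())
}
```

A dedicated test for `combine_*` could feed in two shares with the same node index and assert that exactly this error variant is returned.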
```rust
let derived_key = tsk
    .decrypt_and_hash(&ek, &dpk, did, 32, b"aes-256-gcm-siv")
```
Here `decrypt_and_hash` is currently not tested any more. I believe we have some tests involving `decrypt_and_hash` in `ic-vetkd-utils`. Is `ic-vetkd-utils` maybe the better place to test this anyway? Do I understand correctly that the `TransportSecretKey` is actually more like a test utility here? If so, we could even consider removing the duplication and using the one from `ic-vetkd-utils` here... but I'm not fully sure.
Continuation of #2895 with additional testing