Feature/dev network #31
Conversation
…ch for the KYVE pool config
| GitGuardian id | Secret | Commit | Filename | |
|---|---|---|---|---|
| 6857130 | Generic Private Key | f75a8f0 | dev-network/assets/nginx/certs/arweave.net.key | View secret |
🛠 Guidelines to remediate hardcoded secrets
- Understand the implications of revoking this secret by investigating where it is used in your code.
- Replace and store your secret safely. Learn here the best practices.
- Revoke and rotate this secret.
- If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.
To avoid such incidents in the future, consider:
- following these best practices for managing and storing secrets, including API keys and other credentials
- installing secret detection on pre-commit to catch secrets before they leave your machine and ease remediation.
🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.
Do our GitHub checks need improvement? Share your feedback!
All looks very good. Great job - left some comments on logic that stood out to me.
```ts
const toKey = parseInt(key, 10);
const fromKey = toKey - config.itemTimeRange; // First iteration over the cache will use the first nextKey -- ie. 1000
const toTimestamp = parseInt(key, 10) * 1000;
const fromTimestamp = toTimestamp - 1000;
```
Can we keep this `itemTimeRange` - ie. enabling the Pool's configuration to determine the range that the Validator Network queries for?
This way we can change how the Validator Network behaves without needing to change code.
The current implementation means that the `key` is a time in UNIX epoch format (the number of seconds since Jan 1, 1970). Each data item has messages published within the time range between `key` seconds and `key + 1` seconds.
If we manipulate the multiplier eventually, each key will mean a different number of seconds stored by the data item identified by that key.
Perhaps I misunderstood the idea of the `itemTimeRange` option.
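The key-to-window mapping described above can be sketched as follows. This is a minimal illustration of the semantics under discussion, not the actual kyvejs implementation; the function name `keyToWindow` is made up for this example.

```typescript
// Hypothetical sketch: each cache key is a UNIX-epoch second, and the data
// item identified by that key holds messages published in the fixed
// 1-second window [key - 1, key) expressed in milliseconds.
function keyToWindow(key: string): { fromTimestamp: number; toTimestamp: number } {
  const toKey = parseInt(key, 10);
  const toTimestamp = toKey * 1000;         // seconds -> milliseconds
  const fromTimestamp = toTimestamp - 1000; // fixed 1-second window
  return { fromTimestamp, toTimestamp };
}
```

For example, key `"1000"` covers the millisecond range `[999000, 1000000)`.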
The goal of `itemTimeRange` is to enable each data item to be made up of an X-second time range, rather than just a fixed 1-second time range.
ie. In the case where we have low network usage, it may be wise to configure `itemTimeRange = 5` for 5-second time ranges for each data item.
I think this can give us a configurable option to manage the load on the network.
The query interface will still be indexed by time, as the Broker layer is completely separate.
However, we can leave this as is for now - and add this option back at a later date if required.
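The proposal above can be sketched as a configurable variant of the window computation. The `PoolConfig` shape and function name here are assumptions for illustration, not the actual pool config schema.

```typescript
// Hypothetical sketch: the pool config determines how many seconds of
// messages each data item covers, instead of a hard-coded 1 second.
interface PoolConfig {
  itemTimeRange: number; // seconds covered by each data item (assumed field)
}

function keyToWindow(key: string, config: PoolConfig): { fromTimestamp: number; toTimestamp: number } {
  const toKey = parseInt(key, 10);
  const fromKey = toKey - config.itemTimeRange;
  return {
    fromTimestamp: fromKey * 1000, // seconds -> milliseconds
    toTimestamp: toKey * 1000,
  };
}
```

With `itemTimeRange = 5`, key `"1000"` would cover `[995000, 1000000)`, letting operators widen items under low network usage without a code change.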
I suggest we get back to this discussion when we implement the bundle downloading and unpacking mechanism for when a client queries for data that is not in a broker's cache.
Agreed - however, I don't expect the Client to ever be responsible for querying directly from Arweave; rather, Broker Nodes will hydrate their database with data from Arweave relevant to the query request.
Yes, sure, this is a Broker's responsibility. I just mean this hydration will be initiated by a query sent by a client.
```diff
@@ -113,60 +105,42 @@ export default class Runtime implements IRuntimeExtended {
     validationDataItem: DataItem
   ): Promise<boolean> {
     const proposedDataItemHash = sha256(
-      Buffer.from(JSON.stringify(proposedDataItem))
+      Buffer.from(JSON.stringify(proposedDataItem.value.messages))
```
This is going to skip the validation of the Report inside of the Last message all the time?
I think we need to revert to the original here - no?
We add the `report` to the last item in the bundle inside the `summarizeDataBundle` function, which is called by the KYVE runtime when it produces a proposal and when it validates a proposal. BUT, when the KYVE runtime validates a data item (by calling `validateDataItem`), it calls `summarizeDataBundle` after it. So, at the moment of item validation, the item does not yet have the report data.
Damn, I was confused in that case. I thought `validateDataItem` was called after `summarizeDataBundle`.
This means we need to include the `sha256(Buffer.from(JSON.stringify(report)))` inside of the bundle summary - so that there's validation of the report too.
ie. In the Runtime's `summarizeDataBundle`, we'll need to return `lastItem.key + "_" + report_hash`;
This way the report is validated here: https://github.com/KYVENetwork/kyvejs/blob/7eae47e8310262b636b9fd78ff585b1a22089234/common/protocol/src/methods/validate/validateBundleProposal.ts#L218
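The suggested fix can be sketched as below. The `report` shape and the exact `summarizeDataBundle` signature are assumptions; this only illustrates the idea of folding a hash of the report into the bundle summary string so that the protocol's summary comparison also validates the report.

```typescript
// Hypothetical sketch: append a sha256 of the report to the last item's key,
// so validators that compare bundle summaries implicitly validate the report.
import { createHash } from "crypto";

function sha256(data: Buffer): string {
  return createHash("sha256").update(data).digest("hex");
}

// Signature is an assumption for illustration, not the actual runtime method.
function summarizeDataBundle(lastItemKey: string, report: unknown): string {
  const reportHash = sha256(Buffer.from(JSON.stringify(report)));
  return `${lastItemKey}_${reportHash}`;
}
```

A summary produced this way looks like `"1000_<64-hex-char hash>"`; two validators disagree on it whenever their reports differ, even if the message data matches.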
Good option. I will modify it as proposed.
```diff
@@ -13,7 +13,6 @@ export interface IRuntimeExtended extends IRuntime {
 export interface IConfig {
   systemStreamId: string;
   sources: string[];
-  itemTimeRange: number; // Some range in unix time between each data item
```
See above note.
See the comment above.