When a chunk producer produces a chunk, it calls `verify_and_charge_transaction` on each transaction it pulls from the transaction pool. Effectively, this computes the state update that results from applying all the transactions; the computation also includes signature verification. However, that state update is then thrown away, and when the chunk is applied, the same computation is repeated for the transaction portion. In the stateless validation context, this means chunk producers spend extra time before they can start distributing the state witness. This becomes a serious problem when a chunk contains thousands of transactions, since the time spent processing them becomes nontrivial.
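To make the duplication concrete, here is a minimal toy model (all names except the idea of `verify_and_charge_transaction` are hypothetical, not nearcore's actual types): the per-transaction verify-and-charge work is done once at chunk production, the result is discarded, and the identical work is done again at chunk application.

```rust
// Toy model of the duplicated work. `StateUpdate`, `verify_and_charge`, and
// `apply_transactions` are illustrative stand-ins, not nearcore APIs.

#[derive(Clone, Debug, PartialEq)]
struct StateUpdate {
    balances_charged: u64,
    signatures_verified: usize,
}

// Stand-in for verify_and_charge_transaction: verifies and charges one tx.
fn verify_and_charge(update: &mut StateUpdate, tx_cost: u64) {
    update.signatures_verified += 1; // the expensive signature check happens here
    update.balances_charged += tx_cost;
}

fn apply_transactions(costs: &[u64]) -> StateUpdate {
    let mut update = StateUpdate { balances_charged: 0, signatures_verified: 0 };
    for &c in costs {
        verify_and_charge(&mut update, c);
    }
    update
}

fn main() {
    let costs = vec![10, 20, 30];
    // Chunk production: compute the state update, then throw it away.
    let produced = apply_transactions(&costs);
    // Chunk application: the exact same computation is repeated.
    let applied = apply_transactions(&costs);
    assert_eq!(produced, applied); // identical work, done twice
    println!(
        "charged {} across {} txs, twice",
        applied.balances_charged, applied.signatures_verified
    );
}
```

With thousands of transactions per chunk, the second pass is pure overhead sitting on the critical path before state witness distribution.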
To address this problem, I suggest caching the results of applying the transactions after a chunk is produced, including the state update and the state witness, so that a chunk producer doesn't need to duplicate the computation. Some adjustments related to congestion control and/or the bandwidth scheduler will likely be needed to ensure the computation is exactly the same between chunk production and chunk application.
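A rough sketch of what such a cache could look like (again, all names here are hypothetical, not an actual nearcore design): after producing a chunk, the producer stores the computed result keyed by chunk hash; at application time it consumes the cached entry if present and falls back to recomputing otherwise (e.g. for chunks produced by other nodes).

```rust
use std::collections::HashMap;

// Hypothetical cache for results computed at chunk production time.
// Types are illustrative placeholders, not nearcore's.

#[derive(Clone, Debug, PartialEq)]
struct StateUpdate {
    charged: u64,
}

type ChunkHash = u64;

struct ProducedChunkCache {
    entries: HashMap<ChunkHash, StateUpdate>,
}

impl ProducedChunkCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    // Called at chunk production, instead of discarding the result.
    fn store(&mut self, chunk: ChunkHash, update: StateUpdate) {
        self.entries.insert(chunk, update);
    }

    // Called at chunk application: consume the cached result if present,
    // otherwise recompute from scratch.
    fn take_or_compute(
        &mut self,
        chunk: ChunkHash,
        recompute: impl FnOnce() -> StateUpdate,
    ) -> StateUpdate {
        self.entries.remove(&chunk).unwrap_or_else(recompute)
    }
}

fn main() {
    let mut cache = ProducedChunkCache::new();
    cache.store(0xabc, StateUpdate { charged: 60 });

    // Cache hit: the production-time result is reused, nothing is recomputed.
    let hit = cache.take_or_compute(0xabc, || unreachable!("should not recompute"));
    assert_eq!(hit.charged, 60);

    // Cache miss (e.g. a chunk this node did not produce): fall back to recomputing.
    let miss = cache.take_or_compute(0xdef, || StateUpdate { charged: 42 });
    assert_eq!(miss.charged, 42);
}
```

The cache is only sound if the production-time and application-time computations are bit-for-bit identical, which is exactly why the congestion control / bandwidth scheduler alignment mentioned above matters.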