Why
Now that we are signing transactions on a Mithril test network for the Cardano mainnet, we want to identify bottlenecks and prepare optimizations for the signature/proving of the Cardano transactions. This process will be iterative.
What
Here are the areas that we need to optimize:
- Minimum impact of the activation of the new type of data signed for Cardano transactions on the release-mainnet network:
  - Avoid blocking the production of certificates for current signed entity types
  - Implement a warm-up strategy for newcomers in the network that takes advantage of the first 2 epochs of downtime
  - Ensure proofs generation does not impact the production of certificates for the network
- Minimum footprint on the signer infrastructure (disk space, memory, CPU requirements):
  - Assess the maximum disk space required by the Cardano transactions store on the signer once pruning is activated (in GB)
  - Make sure this limit is not exceeded during the warm-up phase
  - Assess the memory and CPU consumption of the signer during import/signature of the transactions (maybe offload the in-memory Merkle tree stores to disk for the signature on the signer? this would avoid excessive memory consumption)
- Maximum throughput of proofs served by the aggregator REST API:
  - Assess/implement the optimizations needed to enhance the proof serving throughput (step by step, iterative, most efficient optimizations first)
  - Model the expected traffic coming from a third-party provider serving transactions by address, e.g. a light wallet:
    - Computation based on their average/peak traffic
    - Model proofs generation throughput (and the number of transactions to prove per proof)
    - Model certificate download throughput
  - Caching strategy for transaction proofs and certificates on the client, for optimal pressure on the aggregator
  - Determine the maximum number of transactions allowed when creating a proof
  - Horizontal scaling strategy for the worst case scenario where the maximum throughput of one aggregator is not enough (e.g. aggregator 'slaves' for serving proofs)
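To make the disk space target concrete, a back-of-the-envelope model can bound the transactions store size on the signer once pruning is active. Every figure below is an illustrative assumption (transactions per block, row size, retention horizon), not a measured value:

```python
# Rough upper bound on the Cardano transactions store size on a signer once
# pruning is active. All constants are placeholder assumptions for illustration.

AVG_TXS_PER_BLOCK = 40       # assumed average transactions per block
BLOCKS_PER_EPOCH = 21_600    # Cardano mainnet: ~one block per 20s over a 5-day epoch
RETAINED_EPOCHS = 2          # assumed pruning horizon (covers the warm-up window)
BYTES_PER_TX_RECORD = 120    # assumed row size: tx hash + block number + indexes

def max_store_size_gb(txs_per_block=AVG_TXS_PER_BLOCK,
                      blocks_per_epoch=BLOCKS_PER_EPOCH,
                      retained_epochs=RETAINED_EPOCHS,
                      bytes_per_record=BYTES_PER_TX_RECORD):
    """Upper bound on the transactions store size, in GB."""
    records = txs_per_block * blocks_per_epoch * retained_epochs
    return records * bytes_per_record / 1e9

print(f"~{max_store_size_gb():.2f} GB retained on the signer")
```

Replacing the placeholder constants with figures measured on testing-mainnet would turn this into the actual limit to enforce during the warm-up phase.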
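The third-party traffic model could start as a simple sketch like the one below, which derives the aggregator-side load (proof requests, transactions proven, certificate downloads) from a provider's peak traffic. All input numbers are hypothetical placeholders to be replaced with the provider's measured average/peak figures:

```python
# Sketch of the aggregator load induced by a third-party provider (e.g. a light
# wallet) requesting transaction proofs. Every number is a placeholder assumption.

def aggregator_load(peak_users_per_s, addresses_per_user, txs_per_address,
                    max_txs_per_proof, cert_cache_hit_ratio):
    """Return (proof_requests_per_s, txs_proven_per_s, cert_downloads_per_s)."""
    txs_per_request = addresses_per_user * txs_per_address
    # Requests are split so that no single proof covers more than
    # max_txs_per_proof transactions.
    proofs_per_request = -(-txs_per_request // max_txs_per_proof)  # ceiling division
    proof_requests = peak_users_per_s * proofs_per_request
    txs_proven = peak_users_per_s * txs_per_request
    # A client-side certificate cache avoids re-downloading the certificate
    # for most proofs, reducing pressure on the aggregator.
    cert_downloads = proof_requests * (1 - cert_cache_hit_ratio)
    return proof_requests, txs_proven, cert_downloads

# Hypothetical peak: 50 wallet restores/s, 5 addresses each, 20 txs per address,
# at most 100 txs per proof, 90% certificate cache hit ratio on the client.
print(aggregator_load(50, 5, 20, 100, 0.9))
```

The `cert_cache_hit_ratio` term is where the client caching strategy shows up in the model: the higher the hit ratio, the fewer certificate downloads hit the aggregator at peak.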
How
- Minimum impact when activated on release-mainnet
- Maximum proofs generation throughput:
  - Pre-compute sub Merkle proofs and use them in the block range Merkle map?
  - Cache last N certified transactions (database records, ...) (Later if needed)
- Minimum SPO infrastructure footprint
- Infrastructure optimizations:
  - testing-mainnet #1849
  - Release 2430 distribution #1830 (Later)
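The "cache last N certified transactions" idea can be sketched as a bounded LRU map from transaction hash to its already-computed proof artifacts, so that repeated requests for recently proven transactions skip proof generation entirely. The class and field names below are illustrative, not actual Mithril types:

```python
from collections import OrderedDict

class LastNProofCache:
    """Keep the proofs of the last N certified transactions, evicting the
    least recently used entry once capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._entries = OrderedDict()  # tx_hash -> proof artifact

    def get(self, tx_hash):
        proof = self._entries.get(tx_hash)
        if proof is not None:
            self._entries.move_to_end(tx_hash)  # mark as recently used
        return proof

    def put(self, tx_hash, proof):
        self._entries[tx_hash] = proof
        self._entries.move_to_end(tx_hash)
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict least recently used

cache = LastNProofCache(capacity=2)
cache.put("tx1", "proof1")
cache.put("tx2", "proof2")
cache.get("tx1")            # touch tx1, so tx2 becomes the eviction candidate
cache.put("tx3", "proof3")  # evicts tx2
```

In practice the cached value could be the database records mentioned above rather than a full serialized proof; the eviction policy is the part this sketch illustrates.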