We need to build and integrate tooling that will help us generate heavy load against the node, measure performance, and catch regressions over time. This will involve:
- Pre-loading the databases with a large number of accounts, notes, and any other necessary entities to simulate a realistic production state. This will probably imply an executable CLI that seeds the DB accordingly.
- Building load tests for specific endpoints to measure how performance is impacted under load.
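As a rough sketch of the seeding step, the CLI could deterministically generate a large batch of accounts and notes before a load-test run. The `SeedAccount`/`SeedNote` types and the CSV output are illustrative assumptions; the real tool would insert into the node's actual store schema instead.

```rust
use std::io::Write;

// Hypothetical stand-ins for the node's account and note records.
struct SeedAccount { id: u64, balance: u64 }
struct SeedNote { id: u64, sender: u64, recipient: u64, amount: u64 }

/// Deterministic generation keeps seeded states reproducible across runs.
fn generate(n_accounts: u64, notes_per_account: u64) -> (Vec<SeedAccount>, Vec<SeedNote>) {
    let accounts: Vec<SeedAccount> = (0..n_accounts)
        .map(|id| SeedAccount { id, balance: 1_000 + id })
        .collect();
    let notes = accounts
        .iter()
        .flat_map(|a| {
            (0..notes_per_account).map(move |i| SeedNote {
                id: a.id * notes_per_account + i,
                sender: a.id,
                recipient: (a.id + 1) % n_accounts,
                amount: 10,
            })
        })
        .collect();
    (accounts, notes)
}

fn main() -> std::io::Result<()> {
    let (accounts, notes) = generate(10_000, 5);
    // Stand-in for real DB inserts: dump as CSV so the data can be bulk-loaded.
    let mut out = std::fs::File::create("seed_notes.csv")?;
    for n in &notes {
        writeln!(out, "{},{},{},{}", n.id, n.sender, n.recipient, n.amount)?;
    }
    println!("seeded {} accounts, {} notes", accounts.len(), notes.len());
    Ok(())
}
```

Making generation deterministic (rather than random) means two benchmark runs against the same seed parameters start from byte-identical state, which makes before/after comparisons meaningful.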
I think we want to build toward automated load tests that measure request latency (average, p95/p99?) across the stack, along with resource utilization (CPU, memory, disk I/O), under various levels of request concurrency. We could start by creating a tool that seeds the database, then pick a user flow to focus on (such as sync state) to start extracting performance information. It would also be nice to collect and compare performance metrics as part of automated workflows (maybe with DB snapshots) to catch potential performance regressions.
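For the latency side, a minimal sketch of computing the average and tail percentiles from recorded request durations could look like this (nearest-rank percentiles over a sorted copy; the sample data is fabricated for illustration):

```rust
use std::time::Duration;

/// Returns the value at the given percentile (e.g. 95.0) from a set of
/// latency samples, using the nearest-rank method on a sorted copy.
fn percentile(samples: &[Duration], pct: f64) -> Duration {
    let mut sorted = samples.to_vec();
    sorted.sort();
    let rank = ((pct / 100.0) * sorted.len() as f64).ceil() as usize;
    sorted[rank.saturating_sub(1).min(sorted.len() - 1)]
}

fn average(samples: &[Duration]) -> Duration {
    let total: Duration = samples.iter().sum();
    total / samples.len() as u32
}

fn main() {
    // Pretend these came from timing sync-state requests under load.
    let samples: Vec<Duration> = (1..=100).map(Duration::from_millis).collect();
    println!("avg = {:?}", average(&samples)); // 50.5ms
    println!("p95 = {:?}", percentile(&samples, 95.0)); // 95ms
    println!("p99 = {:?}", percentile(&samples, 99.0)); // 99ms
}
```

Reporting p95/p99 alongside the average matters because tail latency is usually what degrades first as the database grows or concurrency rises.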
This issue is just a general draft, but it can serve as a discussion starting point for defining more granular work items.
One way to start is to create a binary that creates blocks and sends them to the store as quickly as possible. These blocks would contain notes generated by the faucet account, as well as new accounts consuming these notes. It will probably take a few hours to generate enough data this way, but I think that may be OK.
The advantages of this would be that it should be relatively easy to put together and it should be relatively efficient (i.e., we don't need to execute the actual transactions or prove anything). We would also gather valuable metrics from the process itself (e.g., how block insertion times change as the database size grows).
We could then save the data somewhere and use it for subsequent tests.
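The block-generator loop above could be sketched roughly as follows; `Store` and `Block` are stand-ins for the node's real types, and the point is the shape of the loop: build a block of faucet-issued notes, push it into the store, and record per-block insertion latency so degradation trends show up as the store grows.

```rust
use std::time::Instant;

// Hypothetical simplified block: a height plus the note IDs it carries.
struct Block { height: u64, notes: Vec<u64> }

// Hypothetical store; the real one would persist blocks to the database.
struct Store { blocks: Vec<Block> }

impl Store {
    fn insert(&mut self, block: Block) {
        // The real store write is the operation we actually want to time.
        self.blocks.push(block);
    }
}

fn main() {
    let mut store = Store { blocks: Vec::new() };
    let notes_per_block = 1_000u64;
    for height in 0..100u64 {
        // Faucet mints notes; later blocks would add accounts consuming them.
        let notes: Vec<u64> = (0..notes_per_block)
            .map(|i| height * notes_per_block + i)
            .collect();
        let start = Instant::now();
        store.insert(Block { height, notes });
        // Per-block insertion latency is one of the metrics worth keeping.
        println!("block {height}: inserted in {:?}", start.elapsed());
    }
    println!("store holds {} blocks", store.blocks.len());
}
```

Logging each insertion's duration, rather than only the total runtime, is what lets us spot whether insertion time grows with database size, which is one of the metrics mentioned above.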