Stress-testing and tooling #609

Open · igamigo (Collaborator) opened this issue Jan 10, 2025 · 1 comment

igamigo commented Jan 10, 2025

We need to build and integrate tooling that will help us generate heavy load against the node, measure performance, and catch regressions over time. This will involve:

  • Pre-loading the databases with a large number of accounts, notes, and any other necessary entities to simulate a realistic production state. This will likely require an executable CLI that seeds the DB accordingly.
  • Building load tests for specific endpoints to measure how their performance is affected.
  • Likely also instrumenting the code further to collect metrics and logs where needed (related: Expose metrics for each service #144); see the instrumentation sketch after this list.
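
As a minimal sketch of what that instrumentation could look like, assuming we build on the `tracing` and `tracing-subscriber` crates (the function and field names here are hypothetical placeholders, not the node's actual API):

```rust
use tracing::{info, instrument};

// Hypothetical store operation. `#[instrument]` records a span carrying the
// arguments and the call's duration; a subscriber can turn those spans into
// logs today and exported metrics later.
#[instrument]
fn apply_block(block_num: u32, note_count: usize) {
    info!(note_count, "applying block");
    // ... actual store logic would go here ...
}

fn main() {
    // For the sketch, emit structured events to stdout; a real deployment
    // could install an OpenTelemetry exporter instead.
    tracing_subscriber::fmt::init();
    apply_block(1, 256);
}
```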

I think we want to build toward automated load tests that measure request latency (average, p95/p99?) and performance across the stack, along with resource utilization (CPU, memory, disk I/O) under various levels of request concurrency. We could start by creating a tool that seeds the database, and then pick a user flow to focus on (such as sync state) to start extracting performance information. Collecting and comparing performance metrics as part of automated workflows (maybe with DB snapshots) to catch potential performance regressions would be nice as well.
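
As a rough sketch of the latency-measurement side, assuming a tokio-based harness (the `send_sync_state_request` stub is a hypothetical placeholder for the real RPC call, so the sketch stays self-contained and runnable):

```rust
use std::time::Duration;
use tokio::time::Instant;

/// Hypothetical placeholder for the node's sync-state RPC call; here it just
/// sleeps a few milliseconds instead of hitting a real endpoint.
async fn send_sync_state_request() -> Result<(), ()> {
    tokio::time::sleep(Duration::from_millis(5)).await;
    Ok(())
}

/// Return the given percentile (0.0..=1.0) from an already-sorted slice.
fn percentile(sorted: &[Duration], p: f64) -> Duration {
    let idx = ((sorted.len() as f64 * p).ceil() as usize).saturating_sub(1);
    sorted[idx.min(sorted.len() - 1)]
}

#[tokio::main]
async fn main() {
    const CONCURRENCY: usize = 32;
    const REQUESTS_PER_TASK: usize = 100;

    // Spawn CONCURRENCY tasks, each issuing REQUESTS_PER_TASK sequential
    // requests and recording per-request latency.
    let mut handles = Vec::with_capacity(CONCURRENCY);
    for _ in 0..CONCURRENCY {
        handles.push(tokio::spawn(async {
            let mut latencies = Vec::with_capacity(REQUESTS_PER_TASK);
            for _ in 0..REQUESTS_PER_TASK {
                let start = Instant::now();
                let _ = send_sync_state_request().await;
                latencies.push(start.elapsed());
            }
            latencies
        }));
    }

    // Merge all samples and report average / p95 / p99.
    let mut all: Vec<Duration> = Vec::new();
    for handle in handles {
        all.extend(handle.await.unwrap());
    }
    all.sort();

    let avg = all.iter().sum::<Duration>() / all.len() as u32;
    println!(
        "requests: {}  avg: {:?}  p95: {:?}  p99: {:?}",
        all.len(),
        avg,
        percentile(&all, 0.95),
        percentile(&all, 0.99)
    );
}
```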

This issue is just a general draft, but it can serve as a basis for discussion and for defining more granular work items.

bobbinth (Contributor) commented

One way to start is to build a binary that creates blocks and sends them to the store as quickly as possible. These blocks would contain notes generated by the faucet account, as well as new accounts consuming these notes. It will probably take a few hours to generate enough data this way, but I think that may be OK.

The advantages of this approach are that it should be relatively easy to put together and relatively efficient (i.e., we don't need to execute actual transactions or prove anything). We would also gather valuable metrics from the process itself (e.g., how block insertion times change as the database size grows).
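
A minimal sketch of that loop, assuming hypothetical `Block` and `Store` stand-ins for the node's real types (the actual binary would build real blocks and call the store's block-application API):

```rust
use std::time::Instant;

// Hypothetical stand-ins for the node's real types; they exist only so the
// sketch compiles and runs on its own.
struct Block { /* header, notes, account updates, ... */ }
struct Store;

impl Store {
    fn apply_block(&mut self, _block: Block) {
        // In the real binary: persist the block via the store's API.
    }
}

/// Build a block containing notes emitted by the faucet account plus new
/// accounts consuming notes from earlier blocks. No transaction execution
/// or proving is required since we construct the data directly.
fn build_block(block_num: u32, notes_per_block: usize) -> Block {
    let _ = (block_num, notes_per_block);
    Block {}
}

fn main() {
    let mut store = Store;
    let num_blocks = 100_000;

    for block_num in 0..num_blocks {
        let block = build_block(block_num, 256);

        // Time each insertion so we can watch how it changes as the
        // database grows.
        let start = Instant::now();
        store.apply_block(block);
        let elapsed = start.elapsed();

        if block_num % 1_000 == 0 {
            println!("block {block_num}: insert took {elapsed:?}");
        }
    }
}
```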

We could then save the data somewhere and use it for subsequent tests.

bobbinth added this to the v0.8 milestone Jan 13, 2025