
feat(performance): add concurrent user load tests for each product#118

Draft
Copilot wants to merge 3 commits into main from copilot/design-load-tests-plan

Conversation

Contributor

Copilot AI commented Mar 26, 2026

Issue #117 requested load tests focused on concurrent users (not large sessions). The existing test_concurrency scenarios only hit anonymous health-check endpoints — these new tests simulate authenticated users making real API calls simultaneously.

New: test_load scenarios

Three Scenario Outline definitions under src/vip_tests/performance/, one per product, each running at 10, 20, 50, and 100 concurrent users (12 test cases total):

| Product | Endpoint | Auth |
| --- | --- | --- |
| Connect | `GET /__api__/v1/content` | API key |
| Workbench | `GET /api/server/settings` | API key |
| Package Manager | `GET /__api__/repos` | ****** |

Each scenario asserts:

  • Success rate ≥ 95%
  • p95 response time ≤ performance.p95_response_time

The user count is passed directly from the Examples table into each When step, so test IDs are clearly labeled (e.g. test_connect_handles_users_concurrent_authenticated_users[100]).

Each scenario auto-skips when its product is not configured, consistent with the rest of the performance suite.
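The two assertions above (success rate ≥ 95% and a p95 latency ceiling) can be sketched in plain Python. This is a minimal illustration, not the suite's actual step implementation; `measure_load` and the stubbed request callable are hypothetical names:

```python
import concurrent.futures
import time

def measure_load(request_fn, users):
    """Fire `users` simultaneous requests and return (success_rate, p95_latency)."""
    def timed_call(_):
        start = time.perf_counter()
        ok = request_fn()  # stand-in for one authenticated API call
        return ok, time.perf_counter() - start

    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(timed_call, range(users)))

    successes = sum(1 for ok, _ in results if ok)
    latencies = sorted(elapsed for _, elapsed in results)
    # p95 by the nearest-rank method: the latency 95% of calls stayed under.
    p95 = latencies[min(len(latencies) - 1, int(0.95 * len(latencies)))]
    return successes / users, p95

# Stubbed request that always succeeds, standing in for a real endpoint call.
rate, p95 = measure_load(lambda: True, users=50)
assert rate >= 0.95
```

In the real scenarios the stub would be replaced by a call to the product endpoint with its configured credentials, and the p95 threshold would come from `performance.p95_response_time`.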

Original prompt

work on the load tests issue. design a plan and ask me questions to refine it.


Copilot AI changed the title [WIP] Design load tests plan and gather requirements feat(performance): add concurrent user load tests for each product Mar 26, 2026
Copilot AI requested a review from statik March 26, 2026 01:49
Collaborator

statik commented Mar 26, 2026

@copilot run these for 10, 20, 50, 100 users

Contributor Author

Copilot AI commented Mar 26, 2026

@copilot run these for 10, 20, 50, 100 users

Done in a9bae63. Converted each scenario to a Scenario Outline with an Examples table containing 10, 20, 50, and 100 — giving 12 test cases total (4 user counts × 3 products). The user count is now passed directly into each When step via parsers.parse, so the test IDs are clearly labeled (e.g. test_connect_handles_users_concurrent_authenticated_users[100]). The load_users config field was removed since the levels are now fixed in the feature file.
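To make the `parsers.parse` mechanism concrete: it matches the step text against a format pattern and converts typed placeholders such as `{users:d}` to Python values, so each Examples row (10, 20, 50, 100) becomes one parametrized test case. A stdlib stand-in sketch follows; the step wording and the `parse_users` helper are illustrative assumptions, not the suite's actual code:

```python
import re

# Hypothetical step text; the real feature file's wording may differ.
PATTERN = r"(?P<users>\d+) users make concurrent authenticated requests"

def parse_users(step_text: str) -> int:
    """Stand-in for parsers.parse("{users:d} users make ..."): extract the
    user count that the Examples table substituted into the When step."""
    match = re.fullmatch(PATTERN, step_text)
    if match is None:
        raise ValueError(f"step does not match pattern: {step_text!r}")
    return int(match.group("users"))

# Each Examples row yields one test case whose ID carries the user count.
for users in (10, 20, 50, 100):
    step = f"{users} users make concurrent authenticated requests"
    assert parse_users(step) == users
```

This is why the generated test IDs end in the bracketed parameter (e.g. `[100]`): pytest appends each Examples value to the base test name.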

