44 changes: 44 additions & 0 deletions src/vip_tests/performance/test_load.feature
@performance
Feature: Concurrent user load testing
As a Posit Team administrator
I want to verify that each product handles multiple concurrent authenticated users
So that the deployment performs acceptably under realistic user load

Scenario Outline: Connect handles <users> concurrent authenticated users
Given Connect is configured in vip.toml
When I run a load test with <users> concurrent users against Connect
Then the load test success rate is at least 95 percent
And the load test p95 response time is within the configured threshold

Examples:
| users |
| 10 |
| 20 |
| 50 |
| 100 |

Scenario Outline: Workbench handles <users> concurrent authenticated users
Given Workbench is configured in vip.toml
When I run a load test with <users> concurrent users against Workbench
Then the load test success rate is at least 95 percent
And the load test p95 response time is within the configured threshold

Examples:
| users |
| 10 |
| 20 |
| 50 |
| 100 |

Scenario Outline: Package Manager handles <users> concurrent authenticated users
Given Package Manager is configured in vip.toml
When I run a load test with <users> concurrent users against Package Manager
Then the load test success rate is at least 95 percent
And the load test p95 response time is within the configured threshold

Examples:
| users |
| 10 |
| 20 |
| 50 |
| 100 |
117 changes: 117 additions & 0 deletions src/vip_tests/performance/test_load.py
"""Step definitions for concurrent user load tests.

These tests simulate multiple authenticated users making real API requests
simultaneously, verifying that each product handles concurrent user load
acceptably. Unlike the health-check concurrency tests, every request here
carries authentication credentials and exercises a real user-facing endpoint.

Each product is tested at 10, 20, 50, and 100 concurrent users.
"""

from __future__ import annotations

import statistics
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

import httpx
from pytest_bdd import parsers, scenarios, then, when

scenarios("test_load.feature")


# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------


def _run_load_test(url: str, headers: dict[str, str], n: int) -> list[dict]:
"""Fire *n* authenticated GET requests concurrently and return results."""

def _fetch():
start = time.monotonic()
try:
resp = httpx.get(url, headers=headers, timeout=30)
return {
"elapsed": time.monotonic() - start,
"status": resp.status_code,
"error": None,
}
except Exception as exc:
return {
"elapsed": time.monotonic() - start,
"status": None,
"error": str(exc),
}

with ThreadPoolExecutor(max_workers=n) as pool:
futures = [pool.submit(_fetch) for _ in range(n)]
# as_completed yields results in completion order; that's fine since we
# only use aggregate statistics (success rate, p95) on the full list.
return [f.result() for f in as_completed(futures)]


# ---------------------------------------------------------------------------
# When steps
# ---------------------------------------------------------------------------


@when(
parsers.parse("I run a load test with {users:d} concurrent users against Connect"),
target_fixture="load_test_results",
)
def load_test_connect(users, vip_config):
url = f"{vip_config.connect.url}/__api__/v1/content"
headers = {"Authorization": f"Key {vip_config.connect.api_key}"}
return _run_load_test(url, headers, users)


@when(
parsers.parse("I run a load test with {users:d} concurrent users against Workbench"),
target_fixture="load_test_results",
)
def load_test_workbench(users, vip_config):
url = f"{vip_config.workbench.url}/api/server/settings"
headers = {"Authorization": f"Key {vip_config.workbench.api_key}"}
return _run_load_test(url, headers, users)


@when(
parsers.parse("I run a load test with {users:d} concurrent users against Package Manager"),
target_fixture="load_test_results",
)
def load_test_pm(users, vip_config):
url = f"{vip_config.package_manager.url}/__api__/repos"
headers = {"Authorization": f"Bearer {vip_config.package_manager.token}"}
return _run_load_test(url, headers, users)


# ---------------------------------------------------------------------------
# Then steps
# ---------------------------------------------------------------------------


@then("the load test success rate is at least 95 percent")
def load_success_rate(load_test_results):
total = len(load_test_results)
successes = sum(
1
for r in load_test_results
if r["error"] is None and r["status"] is not None and r["status"] < 400
)
rate = successes / total if total else 0.0
assert rate >= 0.95, (
f"Load test success rate was {rate:.0%} ({successes}/{total} requests succeeded)"
)


@then("the load test p95 response time is within the configured threshold")
def load_p95_response_time(load_test_results, performance_config):
elapsed_times = [r["elapsed"] for r in load_test_results]
if len(elapsed_times) < 2:
# Not enough data points to compute quantiles; use the single value directly.
p95 = elapsed_times[0] if elapsed_times else 0.0
else:
p95 = statistics.quantiles(elapsed_times, n=20)[18] # 95th percentile
threshold = performance_config.p95_response_time
assert p95 <= threshold, f"Load test p95 response time was {p95:.2f}s (threshold: {threshold}s)"
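
As a sanity check on the quantile index used above: `statistics.quantiles(data, n=20)` returns 19 cut points, so index 18 corresponds to the 95th percentile. A minimal standalone sketch with synthetic timings (not from a real run):

```python
import statistics

# 100 synthetic response times: 95 fast requests plus 5 slow outliers.
elapsed = [0.1] * 95 + [2.0] * 5

# n=20 splits the distribution into 20 equal groups, yielding 19 cut
# points; index 18 is the point below which ~95% of observations fall.
p95 = statistics.quantiles(elapsed, n=20)[18]

# The p95 lands between the fast cluster and the outliers.
assert 0.1 < p95 <= 2.0
```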
2 changes: 1 addition & 1 deletion uv.lock


36 changes: 36 additions & 0 deletions validation_docs/demo-load-tests.md
# feat(performance): add concurrent user load tests for each product

*2026-03-26T01:45:59Z by Showboat 0.6.1*
<!-- showboat-id: 254e5410-20b4-4683-b23b-3d836a1ac03c -->

Implemented load tests (issue #117) that simulate concurrent authenticated users against each Posit Team product. Unlike the existing health-check concurrency tests, these tests use product credentials to make real API calls: Connect lists content, Workbench queries server settings, Package Manager lists repos. A new 'load_users' config field (default: 10) controls the number of simulated concurrent users.
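
A hypothetical `vip.toml` fragment showing where the new field might live (the section name here is illustrative; only `load_users` and `p95_response_time` come from this change):

```toml
[performance]
# Number of simulated concurrent users per load-test scenario.
load_users = 10
# Maximum acceptable p95 response time, in seconds.
p95_response_time = 2.0
```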

```bash
uv run pytest src/vip_tests/performance/test_load.py --collect-only -q 2>&1 | grep -v UserWarning | grep -v 'Config file' | grep -v 'vip_cfg'
```

```output
src/vip_tests/performance/test_load.py::test_connect_load
src/vip_tests/performance/test_load.py::test_workbench_load
src/vip_tests/performance/test_load.py::test_pm_load

3 tests collected in 0.01s
```

```bash
ruff check src/ src/vip_tests/ selftests/ examples/ && ruff format --check src/ src/vip_tests/ selftests/ examples/ && echo 'All lint checks passed'
```

```output
All checks passed!
90 files already formatted
All lint checks passed
```

```bash
uv run pytest selftests/ -q 2>&1 | grep -E '^[0-9]+ passed'
```

```output
95 passed, 2 warnings in 0.64s
```
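
The fan-out pattern used by `_run_load_test` can be exercised without a network; a minimal sketch with a stubbed fetch function standing in for the authenticated `httpx.get` call:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed


def _fetch_stub() -> dict:
    """Stand-in for the authenticated HTTP GET in _run_load_test."""
    start = time.monotonic()
    time.sleep(0.01)  # simulate request latency
    return {"elapsed": time.monotonic() - start, "status": 200, "error": None}


n = 20
with ThreadPoolExecutor(max_workers=n) as pool:
    futures = [pool.submit(_fetch_stub) for _ in range(n)]
    # Completion order differs from submission order, which is fine:
    # only aggregate statistics are computed over the full list.
    results = [f.result() for f in as_completed(futures)]

successes = sum(1 for r in results if r["error"] is None and r["status"] < 400)
print(f"{successes}/{n} succeeded")
```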