Conversation
Pull request overview
Adds a new chat-focused performance benchmarking harness to the repository, including a CI workflow for comparing baseline vs test builds. This fits into the existing scripts/ perf tooling by providing repeatable end-to-end chat timing/rendering/memory measurements backed by a deterministic mock LLM server.
Changes:
- Introduce chat perf regression runner + leak checker scripts under scripts/chat-perf/ (Playwright + CDP-based metrics).
- Add a local mock LLM server and shared utilities for build resolution, launch args/env, and statistical comparison.
- Wire up npm scripts, CI workflow, and documentation for running these benchmarks.
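The "statistical comparison" step mentioned above could look roughly like the sketch below. This is illustrative only: `median` and `compareSamples` are hypothetical names, not the actual helpers in `scripts/chat-perf/common/utils.js`, and the 5% threshold is an assumed default.

```javascript
// Hypothetical sketch of a baseline-vs-test comparison helper, similar in
// spirit to what a shared perf utils module might provide.
function median(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Compare two sample sets and flag a regression when the test median is
// more than `thresholdPct` percent slower than the baseline median.
function compareSamples(baseline, test, thresholdPct = 5) {
  const base = median(baseline);
  const cand = median(test);
  const deltaPct = ((cand - base) / base) * 100;
  return { base, cand, deltaPct, regression: deltaPct > thresholdPct };
}

const result = compareSamples([100, 102, 98], [110, 112, 109]);
console.log(result.deltaPct.toFixed(1), result.regression); // → 10.0 true
```

Using the median rather than the mean keeps a single slow outlier run from dominating the comparison, which matters when sample counts are small.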
Summary per file
| File | Description |
|---|---|
| scripts/chat-perf/test-chat-perf-regression.js | Runs scenario-based chat perf benchmarks and compares against a baseline build. |
| scripts/chat-perf/test-chat-mem-leaks.js | Sends repeated chat messages in one session and detects monotonic heap/DOM growth. |
| scripts/chat-perf/common/utils.js | Shared helpers for build download/launch configuration and statistics. |
| scripts/chat-perf/common/mock-llm-server.js | Local deterministic streaming server that emulates Copilot/OpenAI-style endpoints. |
| package.json | Adds perf:chat and perf:chat-leak npm entry points. |
| .gitignore | Ignores .chat-perf-data output directory. |
| .github/workflows/chat-perf.yml | Adds a manual workflow to compare baseline vs test build performance and publish artifacts/summary. |
| .github/skills/chat-perf/SKILL.md | Documents how to run the new perf and leak tools and interpret results. |
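The leak-detection idea behind `test-chat-mem-leaks.js` (repeated chat messages in one session, flagging monotonic heap/DOM growth) can be sketched as follows. The function name and run-length threshold are illustrative, not the script's actual API.

```javascript
// Illustrative monotonic-growth check: given per-iteration heap (or DOM node
// count) measurements, find the longest run of strictly increasing samples.
// A long monotonic run across iterations is a strong hint of a leak, whereas
// healthy GC behavior produces a sawtooth pattern that resets the run.
function isMonotonicGrowth(samples, minRun = 5) {
  let run = 1;
  let longest = 1;
  for (let i = 1; i < samples.length; i++) {
    run = samples[i] > samples[i - 1] ? run + 1 : 1;
    longest = Math.max(longest, run);
  }
  return longest >= minRun;
}

// Sawtooth (GC reclaims memory between messages): not flagged.
console.log(isMonotonicGrowth([10, 14, 11, 15, 12, 16, 13])); // → false
// Steady climb on every iteration: flagged as a likely leak.
console.log(isMonotonicGrowth([10, 11, 12, 13, 14, 15]));     // → true
```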
Copilot's findings
- Files reviewed: 7/8 changed files
- Comments generated: 8
pwang347 commented Apr 14, 2026
deepak1556 reviewed Apr 15, 2026
roblourens previously approved these changes Apr 17, 2026
amunger reviewed Apr 17, 2026
amunger reviewed Apr 17, 2026
amunger approved these changes Apr 17, 2026
Add chat performance benchmarking harness
Introduces an end-to-end chat performance benchmarking and memory leak detection framework, backed by a deterministic mock LLM server and a CI workflow for automated regression testing.
What's included
Perf regression runner (`npm run perf:chat`)
- Supports production (`--production-build`) and release builds, with mismatch detection
- Resume mode (`--resume`) to accumulate samples for higher confidence

Memory leak checker (`npm run perf:chat-leak`)

Mock LLM server (`mock-llm-server.js`)
- Emulates Copilot/OpenAI-style endpoints (`/models`, `/models/session`, `/chat/completions`, etc.) for deterministic, zero-latency responses

CI workflow (`chat-perf.yml`)
- Triggered via `workflow_dispatch` with configurable inputs (baseline/test commits, runs, threshold, settings overrides)
- Avoids `${{ inputs.* }}` interpolation to prevent script injection

Scenarios (`perf-scenarios.js`)
- `text-only`, `large-codeblock`, `rapid-stream`, `mixed-markdown`
- `tool-read-file`, `tool-edit-file`, `tool-terminal`
- `thinking-response`, `multi-turn-user`, `long-conversation`

Other changes
- Adds `perf:chat` and `perf:chat-leak` npm scripts