feat(tests-e2e): add global JavaScript error tracking for e2e tests #866
Conversation
Signed-off-by: karan-palan <[email protected]>
@Karan-Palan is attempting to deploy a commit to the Antiwork Team on Vercel. A member of the Team first needs to authorize it.

Thank you for your contribution! Your PR is missing required screenshots of your test suite passing locally, which are specifically mentioned in our guidelines. Please include screenshots showing your tests passing to help with the review process.
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

This change introduces comprehensive JavaScript error tracking into Playwright end-to-end (e2e) tests. It adds new npm scripts for controlling JS error handling, a utility class for tracking errors, Playwright fixtures to enforce error checks, and new test suites that demonstrate and validate the error tracking functionality during automated browser testing.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Tester as Playwright Test
    participant Page as Browser Page
    participant Tracker as JSErrorTracker
    Tester->>Tracker: Instantiate with Page
    Tracker->>Page: Inject error listeners (window.onerror, unhandledrejection)
    Page-->>Tracker: Emit JS errors (pageerror, console.error)
    Tracker->>Tracker: Collect and store errors
    Tester->>Page: Perform test actions (navigate, interact)
    Page-->>Tracker: JS errors (if any)
    Tester->>Tracker: getAllErrors()
    alt Errors found
        Tracker-->>Tester: Return error details
        Tester->>Tester: Attach error info, fail or warn based on env var
    else No errors
        Tracker-->>Tester: Return empty
        Tester->>Tester: Test passes
    end
```
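As a rough illustration of the wiring the diagram describes (not the PR's exact code), a tracker along these lines would register Playwright's pageerror and console listeners on the Node side and inject window-level listeners into the page. The `__jsErrors` array name is taken from the review diffs further down; everything else here is an assumption.

```typescript
import type { Page } from "@playwright/test";

// Minimal sketch of the listener setup, assuming a simple { message, type } error shape.
export function attachErrorListeners(page: Page, sink: { message: string; type: string }[]) {
  // Node-side listeners exposed by Playwright.
  page.on("pageerror", (error) => sink.push({ message: error.message, type: "pageerror" }));
  page.on("console", (msg) => {
    if (msg.type() === "error") sink.push({ message: msg.text(), type: "console.error" });
  });

  // Client-side listeners injected before any page script runs.
  return page.addInitScript(() => {
    (window as any).__jsErrors = (window as any).__jsErrors || [];
    window.addEventListener("error", (event) => {
      (window as any).__jsErrors.push({ message: event.message, type: "window.onerror" });
    });
    window.addEventListener("unhandledrejection", (event) => {
      (window as any).__jsErrors.push({ message: String(event.reason), type: "unhandledrejection" });
    });
  });
}
```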
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~15–20 minutes
@binary-koan if this is helpful, I can work on it after all basic test PRs are merged
Actionable comments posted: 2
🧹 Nitpick comments (6)
tests/e2e/utils/test-helpers.ts (1)

102-110: Potential timing issue with error detection.

The function creates a new JSErrorTracker and immediately checks for errors, but this may miss errors that occurred before the tracker's event listeners were established. Consider either:

- Using an existing tracker from the test fixture
- Adding a small delay after tracker creation
- Documenting that this function only catches errors after its creation

```diff
 export async function checkForJavaScriptErrors(page: Page): Promise<void> {
   const tracker = new JSErrorTracker(page);
+  // Allow time for event listeners to be established
+  await page.waitForTimeout(100);
   const errors = await tracker.getAllErrors();
```

tests/e2e/baseTest.ts (1)
21-26: Configurable failure behavior with detailed messaging.

The environment variable control is well-implemented, allowing tests to continue when FAIL_ON_JS_ERRORS=false. However, the error message in the expect assertion is quite long and could be more concise.

```diff
-expect(errors.length, `Found ${errors.length} JavaScript error(s) during test execution. Set FAIL_ON_JS_ERRORS=false to disable this check.\n\nErrors:\n${errorDetails}`).toBe(0);
+expect(errors.length, `Found ${errors.length} JavaScript error(s). Set FAIL_ON_JS_ERRORS=false to disable.\n\nErrors:\n${errorDetails}`).toBe(0);
```

tests/e2e/examples/jsErrorDemo.spec.ts (3)
6-16: Consider making skipped tests runnable with environment control.

The skipped test demonstrates error detection but can't be run. Consider making it conditional based on an environment variable so it can be executed when needed for validation.

```diff
-test.skip("should fail when JavaScript errors occur", async ({ page }) => {
+test("should fail when JavaScript errors occur", async ({ page }) => {
+  test.skip(process.env.RUN_ERROR_DEMO_TESTS !== "true", "Set RUN_ERROR_DEMO_TESTS=true to run this test");
+
   await page.goto("/mine");
```
18-30: Apply the same conditional approach to the promise rejection test.

This test has the same issue as the previous one: it's always skipped when it could be conditionally runnable.
32-45: Improve conditional logic clarity.

The test logic works but could be clearer about its purpose. Consider adding a comment explaining the test's behavior.

```diff
 test("should pass when FAIL_ON_JS_ERRORS is false", async ({ page }) => {
+  // This test demonstrates that console errors don't fail tests when FAIL_ON_JS_ERRORS=false
   await page.goto("/mine");
   await page.waitForLoadState("networkidle");

   if (process.env.FAIL_ON_JS_ERRORS === "false") {
     await page.evaluate(() => {
       console.error("This is a test console error");
     });
   }
```
tests/e2e/utils/jsErrorTracker.ts (1)
84-89: Consider making clearErrors async for complete error clearing.

The client-side error clearing is not awaited, which means the method may return before client-side errors are actually cleared.

Consider making this method async to ensure complete clearing:

```diff
-  clearErrors() {
+  async clearErrors() {
     this.errors = [];
-    this.page.evaluate(() => {
-      (window as any).__jsErrors = [];
-    }).catch(() => {});
+    try {
+      await this.page.evaluate(() => {
+        (window as any).__jsErrors = [];
+      });
+    } catch {
+      // Ignore evaluation errors (e.g., page closed)
+    }
   }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- package.json (1 hunks)
- tests/e2e/baseTest.ts (1 hunks)
- tests/e2e/examples/jsErrorDemo.spec.ts (1 hunks)
- tests/e2e/examples/jsErrorTracking.spec.ts (1 hunks)
- tests/e2e/utils/jsErrorTracker.ts (1 hunks)
- tests/e2e/utils/test-helpers.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
{package.json,*.config.{js,ts,cjs,mjs},.*rc,.*rc.{js,json,yml,yaml}}
📄 CodeRabbit Inference Engine (.cursorrules)
Don't modify config files or package.json unless explicitly told to do so
Files:
package.json
{**/package.json,**/*.config.{js,ts,json},**/.eslintrc*,**/tsconfig*.json,**/vite.config.{js,ts},**/vitest.config.{js,ts}}
📄 CodeRabbit Inference Engine (.github/copilot-instructions.md)
Don't modify config files or package.json unless explicitly told to do so
Files:
package.json
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
PR: antiwork/helper#0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-07-18T22:36:28.971Z
Learning: Applies to **/*.test.{ts,tsx} : We have existing TypeScript tests using vitest. Fix the existing tests if required, but don't add any new tests unless explicitly told to do so
Learnt from: CR
PR: antiwork/helper#0
File: .cursorrules:0-0
Timestamp: 2025-07-18T22:36:16.902Z
Learning: Applies to **/*.test.{ts,tsx} : We have existing TypeScript tests using vitest. Fix the existing tests if required, but don't add any new tests unless explicitly told to do so
📚 Learning: applies to **/*.test.{ts,tsx} : we have existing typescript tests using vitest. fix the existing tes...
Learnt from: CR
PR: antiwork/helper#0
File: .cursorrules:0-0
Timestamp: 2025-07-18T22:36:16.902Z
Learning: Applies to **/*.test.{ts,tsx} : We have existing TypeScript tests using vitest. Fix the existing tests if required, but don't add any new tests unless explicitly told to do so
Applied to files:
tests/e2e/baseTest.ts
tests/e2e/examples/jsErrorDemo.spec.ts
package.json
tests/e2e/utils/test-helpers.ts
tests/e2e/examples/jsErrorTracking.spec.ts
📚 Learning: applies to {package.json,*.config.{js,ts,cjs,mjs},.*rc,.*rc.{js,json,yml,yaml}} : don't modify confi...
Learnt from: CR
PR: antiwork/helper#0
File: .cursorrules:0-0
Timestamp: 2025-07-18T22:36:16.902Z
Learning: Applies to {package.json,*.config.{js,ts,cjs,mjs},.*rc,.*rc.{js,json,yml,yaml}} : Don't modify config files or package.json unless explicitly told to do so
Applied to files:
package.json
📚 Learning: applies to {**/package.json,**/*.config.{js,ts,json},**/.eslintrc*,**/tsconfig*.json,**/vite.config....
Learnt from: CR
PR: antiwork/helper#0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-07-18T22:36:28.971Z
Learning: Applies to {**/package.json,**/*.config.{js,ts,json},**/.eslintrc*,**/tsconfig*.json,**/vite.config.{js,ts},**/vitest.config.{js,ts}} : Don't modify config files or package.json unless explicitly told to do so
Applied to files:
package.json
📚 Learning: when reviewing prs with database migrations (db/drizzle/*.sql files), ensure both _journal.json and ...
Learnt from: binary-koan
PR: antiwork/helper#0
File: :0-0
Timestamp: 2025-07-01T23:48:38.337Z
Learning: When reviewing PRs with database migrations (db/drizzle/*.sql files), ensure both _journal.json and *_snapshot.json files are committed. If they're missing, recommend running `pnpm db:generate` and `pnpm db:migrate`.
Applied to files:
package.json
🧬 Code Graph Analysis (3)
tests/e2e/baseTest.ts (1)
- tests/e2e/utils/jsErrorTracker.ts (1)
  - JSErrorTracker (12-115)

tests/e2e/examples/jsErrorDemo.spec.ts (1)
- tests/e2e/baseTest.ts (2)
  - test (4-28)
  - expect (30-30)

tests/e2e/utils/test-helpers.ts (1)
- tests/e2e/utils/jsErrorTracker.ts (1)
  - JSErrorTracker (12-115)
🔇 Additional comments (14)
package.json (1)
42-43: Well-structured npm scripts for JS error control.

The new scripts provide clear functionality:

- test:e2e:no-js-errors allows running tests without failing on JS errors
- test:e2e:js-errors-only focuses on JS error tracking validation

Both integrate properly with the existing e2e infrastructure.
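The review doesn't show the script bodies themselves; a plausible shape, assuming they wrap the existing Playwright e2e command, might look like the following (illustrative only, not the PR's actual entries):

```json
{
  "scripts": {
    "test:e2e:no-js-errors": "FAIL_ON_JS_ERRORS=false playwright test",
    "test:e2e:js-errors-only": "playwright test tests/e2e/examples/jsErrorTracking.spec.ts tests/e2e/examples/jsErrorDemo.spec.ts"
  }
}
```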
tests/e2e/utils/test-helpers.ts (1)
112-114: Simple and clean factory function.

The createJSErrorTracker function provides a clean interface for creating tracker instances.
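Based on that description, the helper is presumably little more than a thin wrapper; a minimal sketch (assumed, not the actual lines 112-114):

```typescript
import type { Page } from "@playwright/test";
import { JSErrorTracker } from "./jsErrorTracker";

// Thin factory so tests don't construct the tracker class directly.
export function createJSErrorTracker(page: Page): JSErrorTracker {
  return new JSErrorTracker(page);
}
```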
tests/e2e/baseTest.ts (2)

4-8: Well-structured fixture extension setup.

The fixture properly extends the base test and initializes the JSErrorTracker. The pattern follows Playwright best practices.

10-20: Comprehensive error reporting and attachment.

The error collection and reporting logic is thorough:

- Collects all errors after test execution
- Attaches detailed error logs to test reports
- Provides clear console output with test context
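A hedged sketch of the fixture pattern these comments describe, with the assertion message taken from the earlier nitpick diff; the auto-fixture shape, attachment name, and inline error formatting are assumptions rather than the PR's exact code:

```typescript
import { test as base, expect } from "@playwright/test";
import { JSErrorTracker } from "./utils/jsErrorTracker";

export const test = base.extend<{ jsErrorTracker: JSErrorTracker }>({
  // auto: true makes the tracker run for every test, even if it isn't referenced.
  jsErrorTracker: [
    async ({ page }, use, testInfo) => {
      const tracker = new JSErrorTracker(page);
      await use(tracker);

      // After the test body: collect, attach, and (optionally) fail on JS errors.
      const errors = await tracker.getAllErrors();
      if (errors.length > 0) {
        const errorDetails = errors.map((e, i) => `${i + 1}. [${e.type}] ${e.message}`).join("\n");
        await testInfo.attach("js-errors", { body: errorDetails, contentType: "text/plain" });
        console.error(`JS errors in "${testInfo.title}":\n${errorDetails}`);
        if (process.env.FAIL_ON_JS_ERRORS !== "false") {
          expect(
            errors.length,
            `Found ${errors.length} JavaScript error(s). Set FAIL_ON_JS_ERRORS=false to disable.\n\nErrors:\n${errorDetails}`,
          ).toBe(0);
        }
      }
    },
    { auto: true },
  ],
});

export { expect };
```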
tests/e2e/examples/jsErrorTracking.spec.ts (2)
8-37: Well-structured automatic error tracking tests.

The tests properly demonstrate automatic JS error detection during common UI interactions. The responsive design test is particularly thorough.

88-101: Good environment variable integration test.

This test properly validates the FAIL_ON_JS_ERRORS environment variable behavior and uses the correct test fixture.

tests/e2e/utils/jsErrorTracker.ts (8)
3-10: Well-defined interface with comprehensive error properties.

The JSError interface appropriately captures all necessary error information, including optional properties for different error types.
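A guess at the shape such an interface might take, based on the error types and optional fields mentioned throughout this review (not the actual lines 3-10):

```typescript
export interface JSError {
  message: string;
  type: "pageerror" | "console.error" | "window.onerror" | "unhandledrejection";
  stack?: string;     // present for page errors, typically absent for console errors
  url?: string;       // page URL when the error was captured (assumed field)
  timestamp?: string; // when the error was recorded (assumed field)
}
```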
12-19: Clean constructor implementation with automatic setup.

The constructor properly initializes the tracker and automatically sets up error listeners, providing a good developer experience.
63-72: Robust client-side error retrieval with proper error handling.

The method correctly uses page.evaluate to access client-side errors and gracefully handles evaluation failures by returning an empty array.
74-76: Good defensive programming with array copying.

Returning a copy of the errors array prevents external modification of the internal state.
78-82: Effective error aggregation method.

The method properly combines server-side and client-side errors with a logical ordering.
91-98: Well-designed error checking methods with clear distinction.

The separation between hasErrors() (server-side only, synchronous) and hasAnyErrors() (comprehensive, async) provides useful options for different use cases.
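Taken together, the methods discussed in the last few comments might look roughly like this partial sketch. The names getAllErrors, hasErrors, and hasAnyErrors come from the review; the private helper and the exact error shapes are assumed:

```typescript
import type { Page } from "@playwright/test";

class JSErrorTrackerSketch {
  private errors: { message: string; type: string }[] = [];

  constructor(private page: Page) {}

  // Pull client-side errors collected in window.__jsErrors; swallow evaluation failures.
  private async getClientErrors(): Promise<{ message: string; type: string }[]> {
    try {
      return await this.page.evaluate(
        () => ((window as any).__jsErrors || []) as { message: string; type: string }[],
      );
    } catch {
      return []; // e.g. page already closed or mid-navigation
    }
  }

  // Server-side errors first, then client-side ones; callers get a combined copy.
  async getAllErrors(): Promise<{ message: string; type: string }[]> {
    return [...this.errors, ...(await this.getClientErrors())];
  }

  // Synchronous check over the errors captured in Node via page events only.
  hasErrors(): boolean {
    return this.errors.length > 0;
  }

  // Comprehensive asynchronous check that also includes client-side errors.
  async hasAnyErrors(): Promise<boolean> {
    return (await this.getAllErrors()).length > 0;
  }
}
```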
100-114: Comprehensive error formatting with clean output.

The method provides well-structured, readable error output with proper handling of optional properties and clear error numbering.
39-61: Potential issue with client-side error array reinitialization.

The current implementation may lose client-side errors when navigating between pages, as the addInitScript runs on every page load and the array initialization `(window as any).__jsErrors = (window as any).__jsErrors || []` could be problematic if the array already exists but gets cleared by navigation.

Consider this approach to preserve errors across navigations:

```diff
 this.page.addInitScript(() => {
+  // Preserve existing errors across page navigations
+  const existingErrors = (window as any).__jsErrors || [];
+  (window as any).__jsErrors = existingErrors;
+
   window.addEventListener("error", (event) => {
     (window as any).__jsErrors = (window as any).__jsErrors || [];
     (window as any).__jsErrors.push({
```

Alternatively, ensure errors are retrieved before navigation occurs in your test implementation.

Likely an incorrect or invalid review comment.
Signed-off-by: karan-palan <[email protected]>
Actionable comments posted: 1
♻️ Duplicate comments (1)
tests/e2e/examples/jsErrorTracking.spec.ts (1)
56-80: Fix inconsistent fixture usage and unnecessary dynamic import.

This test has two issues that were identified in previous reviews:

- Uses playwrightTest instead of the custom test fixture, bypassing automatic error tracking
- Uses dynamic import when static import would be cleaner

Apply this diff to fix both issues:

```diff
+import { JSErrorTracker } from "../utils/jsErrorTracker";
+
-playwrightTest("advanced error handling with custom logic", async ({ page }) => {
-  const { JSErrorTracker } = await import("../utils/jsErrorTracker");
+test("advanced error handling with custom logic", async ({ page }) => {
   const tracker = new JSErrorTracker(page);
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- tests/e2e/examples/jsErrorTracking.spec.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: CR
PR: antiwork/helper#0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-07-18T22:36:28.971Z
Learning: Applies to **/*.test.{ts,tsx} : We have existing TypeScript tests using vitest. Fix the existing tests if required, but don't add any new tests unless explicitly told to do so
Learnt from: CR
PR: antiwork/helper#0
File: .cursorrules:0-0
Timestamp: 2025-07-18T22:36:16.902Z
Learning: Applies to **/*.test.{ts,tsx} : We have existing TypeScript tests using vitest. Fix the existing tests if required, but don't add any new tests unless explicitly told to do so
📚 Learning: applies to **/*.test.{ts,tsx} : we have existing typescript tests using vitest. fix the existing tes...
Learnt from: CR
PR: antiwork/helper#0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-07-18T22:36:28.971Z
Learning: Applies to **/*.test.{ts,tsx} : We have existing TypeScript tests using vitest. Fix the existing tests if required, but don't add any new tests unless explicitly told to do so
Applied to files:
tests/e2e/examples/jsErrorTracking.spec.ts
🔇 Additional comments (3)
tests/e2e/examples/jsErrorTracking.spec.ts (3)
8-38: LGTM! Proper use of custom test fixture.

The automatic JS error tracking tests correctly use the custom test fixture from baseTest, which enables automatic error detection. The responsive design test comprehensively covers different viewport sizes and properly waits for network idle state.
41-55: LGTM! Proper manual error tracking implementation.

The manual error checking test correctly uses the custom test fixture and helper functions (createJSErrorTracker and checkForJavaScriptErrors) to demonstrate manual error tracking at specific points during test execution.
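For illustration, a test in the style this comment describes might look like the following; the `/mine` route and the helper names come from snippets earlier in the review, while the surrounding structure is assumed:

```typescript
import { expect, test } from "../baseTest";
import { checkForJavaScriptErrors, createJSErrorTracker } from "../utils/test-helpers";

test("manually checks for JS errors at specific points", async ({ page }) => {
  const tracker = createJSErrorTracker(page);

  await page.goto("/mine");
  await page.waitForLoadState("networkidle");

  // Explicit checkpoint partway through the flow, in addition to the
  // automatic check the fixture performs at the end of the test.
  await checkForJavaScriptErrors(page);

  expect(tracker.hasErrors()).toBe(false);
});
```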
83-97: LGTM! Environment variable test properly implemented.

The test correctly uses the custom test fixture and demonstrates how the error tracking behavior can be controlled via the FAIL_ON_JS_ERRORS environment variable. The test performs appropriate user interactions to validate the functionality.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Signed-off-by: KaranPalan <[email protected]>
In concept this looks great! Yeah, I think we should wait for more of the other test PRs to be merged and then enable it for all test files here (and fix any existing JS errors so we don't need separate scripts).
#584
Implements comprehensive JavaScript error tracking for all e2e tests, as suggested in #840 (comment)
It may help as:
Usage
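As a rough sketch of how a spec might opt in (inferred from the review comments above, not the PR's own documentation): import the custom fixture from baseTest instead of @playwright/test, and the error check runs automatically when the test finishes.

```typescript
import { expect, test } from "../baseTest";

test("page loads without JavaScript errors", async ({ page }) => {
  await page.goto("/mine");
  await page.waitForLoadState("networkidle");
  await expect(page).toHaveURL(/mine/);
  // On teardown the fixture fails the test if any JS errors were recorded,
  // unless FAIL_ON_JS_ERRORS=false (or the test:e2e:no-js-errors script) is used.
});
```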