
feat(tests-e2e): add global JavaScript error tracking for e2e tests #866


Closed

Conversation

Contributor

@Karan-Palan Karan-Palan commented Aug 5, 2025

#584

Implements comprehensive JavaScript error tracking for all e2e tests, as suggested in #840 (comment)

[screenshot]

It may help in the following ways:

  • Captures uncaught exceptions, unhandled promise rejections, and console errors
  • Existing tests work unchanged; only the import statement needs updating
  • Configurable behavior, detailed reporting, and multiple usage patterns

Usage

// Before
import { test, expect } from "@playwright/test";

// After  
import { test, expect } from "../baseTest";
// JS errors are automatically tracked
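
For context, here is a minimal sketch of what such a fixture might look like (illustrative only; it assumes the JSErrorTracker class, its getAllErrors method, and the FAIL_ON_JS_ERRORS variable referenced later in this thread, and is not the exact code from this PR):

// Hypothetical sketch of tests/e2e/baseTest.ts, not the PR's exact implementation
import { test as base, expect } from "@playwright/test";
import { JSErrorTracker } from "./utils/jsErrorTracker";

export const test = base.extend({
  page: async ({ page }, use, testInfo) => {
    const tracker = new JSErrorTracker(page); // starts listening before the test body runs
    await use(page); // run the actual test

    const errors = await tracker.getAllErrors();
    if (errors.length > 0) {
      // Attach the collected errors to the Playwright report
      await testInfo.attach("js-errors", {
        body: JSON.stringify(errors, null, 2),
        contentType: "application/json",
      });
      // Fail the test unless the check is explicitly disabled
      if (process.env.FAIL_ON_JS_ERRORS !== "false") {
        expect(errors.length, `Found ${errors.length} JavaScript error(s)`).toBe(0);
      }
    }
  },
});

export { expect };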

## Summary by CodeRabbit

* **New Features**
  * Introduced automatic JavaScript error tracking in end-to-end tests, providing detailed error reports and test failure options based on environment settings.
  * Added new test suites and utilities to detect and report JavaScript errors and unhandled promise rejections during UI testing.
  * Provided new npm scripts for flexible JavaScript error handling in test runs.

* **Tests**
  * Added comprehensive test cases to verify JavaScript error detection and handling under various scenarios.


vercel bot commented Aug 5, 2025

@Karan-Palan is attempting to deploy a commit to the Antiwork Team on Vercel.

A member of the Team first needs to authorize it.


jacquez-bot bot commented Aug 5, 2025

Thank you for your contribution! Your PR is missing required screenshots of your test suite passing locally, which are specifically mentioned in our guidelines. Please include screenshots showing your tests passing to help with the review process.

Contributor

coderabbitai bot commented Aug 5, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

This change introduces comprehensive JavaScript error tracking into Playwright end-to-end (e2e) tests. It adds new npm scripts for controlling JS error handling, a utility class for tracking errors, Playwright fixtures to enforce error checks, and new test suites that demonstrate and validate the error tracking functionality during automated browser testing.

Changes

  • NPM Scripts for JS Error Handling (package.json): Adds two npm scripts: one to run e2e tests without failing on JS errors (test:e2e:no-js-errors), and another to run only the JS error tracking test suite (test:e2e:js-errors-only).
  • Playwright Test Fixture and Exports (tests/e2e/baseTest.ts): Adds a custom Playwright test fixture that wraps the page object with JS error tracking; attaches error details to reports and controls failure based on an environment variable. Re-exports expect.
  • JS Error Tracking Utility (tests/e2e/utils/jsErrorTracker.ts): Introduces the JSErrorTracker class to monitor, collect, and format JS errors from both the client and server sides during Playwright tests (a minimal sketch follows this list).
  • Test Helper Functions (tests/e2e/utils/test-helpers.ts): Adds helper functions to check for JavaScript errors and to instantiate the error tracker in Playwright tests.
  • JS Error Demo Test Suite (tests/e2e/examples/jsErrorDemo.spec.ts): Adds a new test suite demonstrating detection of JS errors and behavior with/without strict error enforcement, including skipped tests for error scenarios.
  • JS Error Tracking Test Suite (tests/e2e/examples/jsErrorTracking.spec.ts): Adds a comprehensive test suite for automatic and manual JS error tracking, including tests for environment variable-based error handling and detailed error reporting.
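
To make the utility concrete, a rough sketch of how such a tracker could be wired up (illustrative only; the error fields and method names are assumptions based on the descriptions in this review, not the PR's actual code):

// Hypothetical sketch of tests/e2e/utils/jsErrorTracker.ts
import type { Page } from "@playwright/test";

export interface JSError {
  type: "pageerror" | "console" | "unhandledrejection";
  message: string;
  stack?: string;
  url?: string;
}

export class JSErrorTracker {
  private errors: JSError[] = [];

  constructor(private page: Page) {
    // Server-side (Node) listeners: uncaught page exceptions and console errors
    page.on("pageerror", (error) => {
      this.errors.push({ type: "pageerror", message: error.message, stack: error.stack });
    });
    page.on("console", (msg) => {
      if (msg.type() === "error") {
        this.errors.push({ type: "console", message: msg.text(), url: page.url() });
      }
    });
    // Client-side listener: unhandled promise rejections collected inside the page
    void page.addInitScript(() => {
      (window as any).__jsErrors = (window as any).__jsErrors || [];
      window.addEventListener("unhandledrejection", (event) => {
        (window as any).__jsErrors.push({
          type: "unhandledrejection",
          message: String(event.reason),
        });
      });
    });
  }

  // Combine errors captured in Node with those collected inside the page
  async getAllErrors(): Promise<JSError[]> {
    const clientErrors: JSError[] = await this.page
      .evaluate(() => (window as any).__jsErrors || [])
      .catch(() => []);
    return [...this.errors, ...clientErrors];
  }
}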

Sequence Diagram(s)

sequenceDiagram
    participant Tester as Playwright Test
    participant Page as Browser Page
    participant Tracker as JSErrorTracker

    Tester->>Tracker: Instantiate with Page
    Tracker->>Page: Inject error listeners (window.onerror, unhandledrejection)
    Page-->>Tracker: Emit JS errors (pageerror, console.error)
    Tracker->>Tracker: Collect and store errors

    Tester->>Page: Perform test actions (navigate, interact)
    Page-->>Tracker: JS errors (if any)
    Tester->>Tracker: getAllErrors()
    alt Errors found
        Tracker-->>Tester: Return error details
        Tester->>Tester: Attach error info, fail or warn based on env var
    else No errors
        Tracker-->>Tester: Return empty
        Tester->>Tester: Test passes
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~15–20 minutes

Possibly related PRs

  • feat: e2e playwright tests #561: Adds initial Playwright e2e testing setup including scripts and dependencies; related as it provides the foundation for the new JS error tracking scripts and fixtures introduced here.

Suggested labels

priority

Suggested reviewers

  • binary-koan

Poem

In the warren of code, a rabbit peers,
Catching JS errors with its floppy ears.
Playwright tests now hop with care,
Trapping bugs that once hid there.
With scripts and trackers, the hunt’s begun—
No JavaScript error can hide or run!
🐇✨



📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 76d4c4e and 9bc8b7b.

📒 Files selected for processing (1)
  • tests/e2e/examples/jsErrorTracking.spec.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/e2e/examples/jsErrorTracking.spec.ts


@Karan-Palan
Contributor Author

@binary-koan if this is helpful, I can work on it after all basic test PRs are merged

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (6)
tests/e2e/utils/test-helpers.ts (1)

102-110: Potential timing issue with error detection.

The function creates a new JSErrorTracker and immediately checks for errors, but this may miss errors that occurred before the tracker's event listeners were established. Consider either:

  1. Using an existing tracker from the test fixture
  2. Adding a small delay after tracker creation
  3. Documenting that this function only catches errors after its creation
 export async function checkForJavaScriptErrors(page: Page): Promise<void> {
   const tracker = new JSErrorTracker(page);
+  // Allow time for event listeners to be established
+  await page.waitForTimeout(100);
   const errors = await tracker.getAllErrors();
tests/e2e/baseTest.ts (1)

21-26: Configurable failure behavior with detailed messaging.

The environment variable control is well-implemented, allowing tests to continue when FAIL_ON_JS_ERRORS=false. However, the error message in the expect assertion is quite long and could be more concise.

-        expect(errors.length, `Found ${errors.length} JavaScript error(s) during test execution. Set FAIL_ON_JS_ERRORS=false to disable this check.\n\nErrors:\n${errorDetails}`).toBe(0);
+        expect(errors.length, `Found ${errors.length} JavaScript error(s). Set FAIL_ON_JS_ERRORS=false to disable.\n\nErrors:\n${errorDetails}`).toBe(0);
tests/e2e/examples/jsErrorDemo.spec.ts (3)

6-16: Consider making skipped tests runnable with environment control.

The skipped test demonstrates error detection but can't be run. Consider making it conditional based on an environment variable so it can be executed when needed for validation.

-  test.skip("should fail when JavaScript errors occur", async ({ page }) => {
+  test("should fail when JavaScript errors occur", async ({ page }) => {
+    test.skip(process.env.RUN_ERROR_DEMO_TESTS !== "true", "Set RUN_ERROR_DEMO_TESTS=true to run this test");
+    
     await page.goto("/mine");

18-30: Apply same conditional approach to promise rejection test.

This test has the same issue as the previous one - it's always skipped when it could be conditionally runnable.


32-45: Improve conditional logic clarity.

The test logic works but could be clearer about its purpose. Consider adding a comment explaining the test's behavior.

   test("should pass when FAIL_ON_JS_ERRORS is false", async ({ page }) => {
+    // This test demonstrates that console errors don't fail tests when FAIL_ON_JS_ERRORS=false
     await page.goto("/mine");
     await page.waitForLoadState("networkidle");
     
     if (process.env.FAIL_ON_JS_ERRORS === "false") {
       await page.evaluate(() => {
         console.error("This is a test console error");
       });
     }
tests/e2e/utils/jsErrorTracker.ts (1)

84-89: Consider making clearErrors async for complete error clearing.

The client-side error clearing is not awaited, which means the method may return before client-side errors are actually cleared.

Consider making this method async to ensure complete clearing:

-  clearErrors() {
+  async clearErrors() {
    this.errors = [];
-    this.page.evaluate(() => {
+    try {
+      await this.page.evaluate(() => {
        (window as any).__jsErrors = [];
-    }).catch(() => {});
+      });
+    } catch {
+      // Ignore evaluation errors (e.g., page closed)
+    }
  }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1663f3d and a8dde4d.

📒 Files selected for processing (6)
  • package.json (1 hunks)
  • tests/e2e/baseTest.ts (1 hunks)
  • tests/e2e/examples/jsErrorDemo.spec.ts (1 hunks)
  • tests/e2e/examples/jsErrorTracking.spec.ts (1 hunks)
  • tests/e2e/utils/jsErrorTracker.ts (1 hunks)
  • tests/e2e/utils/test-helpers.ts (2 hunks)
🔇 Additional comments (14)
package.json (1)

42-43: Well-structured npm scripts for JS error control.

The new scripts provide clear functionality:

  • test:e2e:no-js-errors allows running tests without failing on JS errors
  • test:e2e:js-errors-only focuses on JS error tracking validation

Both integrate properly with the existing e2e infrastructure.

tests/e2e/utils/test-helpers.ts (1)

112-114: Simple and clean factory function.

The createJSErrorTracker function provides a clean interface for creating tracker instances.

tests/e2e/baseTest.ts (2)

4-8: Well-structured fixture extension setup.

The fixture properly extends the base test and initializes the JSErrorTracker. The pattern follows Playwright best practices.


10-20: Comprehensive error reporting and attachment.

The error collection and reporting logic is thorough:

  • Collects all errors after test execution
  • Attaches detailed error logs to test reports
  • Provides clear console output with test context
tests/e2e/examples/jsErrorTracking.spec.ts (2)

8-37: Well-structured automatic error tracking tests.

The tests properly demonstrate automatic JS error detection during common UI interactions. The responsive design test is particularly thorough.


88-101: Good environment variable integration test.

This test properly validates the FAIL_ON_JS_ERRORS environment variable behavior and uses the correct test fixture.

tests/e2e/utils/jsErrorTracker.ts (8)

3-10: Well-defined interface with comprehensive error properties.

The JSError interface appropriately captures all necessary error information including optional properties for different error types.


12-19: Clean constructor implementation with automatic setup.

The constructor properly initializes the tracker and automatically sets up error listeners, providing a good developer experience.


63-72: Robust client-side error retrieval with proper error handling.

The method correctly uses page.evaluate to access client-side errors and gracefully handles evaluation failures by returning an empty array.


74-76: Good defensive programming with array copying.

Returning a copy of the errors array prevents external modification of the internal state.


78-82: Effective error aggregation method.

The method properly combines server-side and client-side errors with a logical ordering.


91-98: Well-designed error checking methods with clear distinction.

The separation between hasErrors() (server-side only, synchronous) and hasAnyErrors() (comprehensive, async) provides useful options for different use cases.
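
For example, a hypothetical caller might use the two methods like this (method names taken from the review text above; the signatures and the surrounding test are assumptions):

// Illustrative usage sketch, not code from this PR
import { test } from "../baseTest";
import { JSErrorTracker } from "../utils/jsErrorTracker";

test("illustrative: choosing between the two checks", async ({ page }) => {
  const tracker = new JSErrorTracker(page);
  await page.goto("/mine");

  if (tracker.hasErrors()) {
    // Synchronous: only errors already captured in Node via page events
    console.warn("Server-side captured JS errors present");
  }
  if (await tracker.hasAnyErrors()) {
    // Async: additionally pulls client-side errors recorded inside the page
    console.warn("JS errors found on the client or server side");
  }
});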


100-114: Comprehensive error formatting with clean output.

The method provides well-structured, readable error output with proper handling of optional properties and clear error numbering.


39-61: Potential issue with client-side error array reinitialization.

The current implementation may lose client-side errors when navigating between pages, as the addInitScript runs on every page load and the array initialization (window as any).__jsErrors = (window as any).__jsErrors || [] could be problematic if the array already exists but gets cleared by navigation.

Consider this approach to preserve errors across navigations:

this.page.addInitScript(() => {
+  // Preserve existing errors across page navigations
+  const existingErrors = (window as any).__jsErrors || [];
+  (window as any).__jsErrors = existingErrors;
+
  window.addEventListener("error", (event) => {
    (window as any).__jsErrors = (window as any).__jsErrors || [];
    (window as any).__jsErrors.push({

Alternatively, ensure errors are retrieved before navigation occurs in your test implementation.

Likely an incorrect or invalid review comment.

@Karan-Palan Karan-Palan marked this pull request as draft August 5, 2025 22:06
Signed-off-by: karan-palan <[email protected]>
@Karan-Palan Karan-Palan marked this pull request as ready for review August 6, 2025 14:42
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
tests/e2e/examples/jsErrorTracking.spec.ts (1)

56-80: Fix inconsistent fixture usage and unnecessary dynamic import.

This test has two issues that were identified in previous reviews:

  1. Uses playwrightTest instead of the custom test fixture, bypassing automatic error tracking
  2. Uses dynamic import when static import would be cleaner

Apply this diff to fix both issues:

+import { JSErrorTracker } from "../utils/jsErrorTracker";
+
-    playwrightTest("advanced error handling with custom logic", async ({ page }) => {
-      const { JSErrorTracker } = await import("../utils/jsErrorTracker");
+    test("advanced error handling with custom logic", async ({ page }) => {
       const tracker = new JSErrorTracker(page);
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a8dde4d and 76d4c4e.

📒 Files selected for processing (1)
  • tests/e2e/examples/jsErrorTracking.spec.ts (1 hunks)
🔇 Additional comments (3)
tests/e2e/examples/jsErrorTracking.spec.ts (3)

8-38: LGTM! Proper use of custom test fixture.

The automatic JS error tracking tests correctly use the custom test fixture from baseTest, which enables automatic error detection. The responsive design test comprehensively covers different viewport sizes and properly waits for network idle state.


41-55: LGTM! Proper manual error tracking implementation.

The manual error checking test correctly uses the custom test fixture and helper functions (createJSErrorTracker and checkForJavaScriptErrors) to demonstrate manual error tracking at specific points during test execution.
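
For readers unfamiliar with that pattern, a minimal sketch of such a manual check might look like this (helper names are taken from this PR's description; exact signatures are assumptions):

// Hypothetical sketch of a manual error check in a spec file
import { test, expect } from "../baseTest";
import { checkForJavaScriptErrors, createJSErrorTracker } from "../utils/test-helpers";

test("manual error checking at specific points", async ({ page }) => {
  const tracker = createJSErrorTracker(page);

  await page.goto("/mine");
  await page.waitForLoadState("networkidle");

  // Assert mid-test that no JS errors have been captured so far
  await checkForJavaScriptErrors(page);

  // ...further interactions...

  const errors = await tracker.getAllErrors();
  expect(errors).toHaveLength(0);
});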


83-97: LGTM! Environment variable test properly implemented.

The test correctly uses the custom test fixture and demonstrates how the error tracking behavior can be controlled via the FAIL_ON_JS_ERRORS environment variable. The test performs appropriate user interactions to validate the functionality.

Karan-Palan and others added 2 commits August 6, 2025 20:20
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: KaranPalan  <[email protected]>
@binary-koan
Collaborator

In concept this looks great! Yeah I think we should wait for more of the other test PRs to be merged and then enable it for all test files here (and fix any existing JS errors so we don't need separate scripts)

@binary-koan binary-koan marked this pull request as draft August 7, 2025 20:38
@slavingia slavingia closed this Aug 19, 2025