
feat: add example tests for js #191

Closed · wants to merge 26 commits

Conversation

@utkarsh-dixit (Collaborator) commented Jun 20, 2024

PR Type

Tests, Enhancement


Description

  • Added Playwright tests to run and validate example projects.
  • Configured Playwright settings including timeout, reporter, base URL, and HTTP headers (see the sketch after this list).
  • Updated package.json to include Playwright as a dependency and to run Playwright tests.
  • Added start scripts to example projects to facilitate running demos.
  • Added a file to store the results of the last test run.
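
As a rough illustration of those settings, a configuration along these lines would do it. This is a hedged sketch, not the PR's actual js/playwright.config.ts: the timeout value, reporter choice, and Bearer scheme are illustrative; the reviewer guide below confirms only that the config reads process.env.API_TOKEN.

    import { defineConfig } from '@playwright/test';

    export default defineConfig({
      timeout: 60_000,  // per-test timeout (illustrative value)
      reporter: 'list', // console-friendly reporter for CI logs
      use: {
        baseURL: process.env.COMPOSIO_BASE_URL, // backend under test
        launchOptions: { headless: true },
        extraHTTPHeaders: {
          // Token comes from the environment, never hard-coded;
          // the Bearer scheme here is an assumption.
          Authorization: `Bearer ${process.env.API_TOKEN ?? ''}`,
        },
      },
    });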

Changes walkthrough 📝

Relevant files

Tests

run-tests.spec.ts (js/tests/run-tests.spec.ts, +43/-0): Add Playwright tests for example projects
  • Added a test script to run example projects using Playwright.
  • Implemented steps to build and start example projects.
  • Included assertions to validate expected output.

.last-run.json (js/test-results/.last-run.json, +4/-0): Add file to store last test run results
  • Added a file to store the results of the last test run.

Enhancement

playwright.config.ts (js/playwright.config.ts, +23/-0): Configure Playwright settings for testing
  • Configured Playwright with a timeout and reporter.
  • Set the base URL and launch options for tests.
  • Added HTTP headers, including an authorization token.

package.json (js/package.json, +2/-1): Update package.json to include Playwright tests
  • Updated the test script to run Playwright tests.
  • Added Playwright as a dependency.

package.json (js/examples/e2e/package.json, +1/-0): Add start script for e2e example
  • Added a start script to run demo.mjs.

package.json (js/examples/openai/package.json, +1/-0): Add start script for OpenAI example
  • Added a start script to run demo.mjs.

package.json (js/examples/langchain/package.json, +1/-0): Add start script for LangChain example
  • Added a start script to run demo.mjs.

💡 PR-Agent usage: Comment /help on the PR to get a list of all available PR-Agent tools and their descriptions.


PR Reviewer Guide 🔍

⏱️ Estimated effort to review [1-5]: 3

🧪 Relevant tests: Yes

🔒 Security concerns: Sensitive information exposure.
The configuration file explicitly includes an authorization token via `process.env.API_TOKEN`. Ensure that this token is securely managed and not exposed in logs or error messages. Consider using secrets-management tooling for better security practices.

⚡ Key issues to review:

Possible bug: The use of synchronous and asynchronous exec calls within the same test step could lead to race conditions or unhandled promise rejections. Consider using async/await consistently for better error handling and control flow.

Error handling: The error handling in the test script could be improved by adding more specific error messages and by handling specific exception types more gracefully.
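
Concretely, the consistent async/await approach flagged above could look like the following minimal sketch, using Node's util.promisify with Playwright's test runner. The command strings, placeholder path, expected output, and 30-second timeout are illustrative (the timeout mirrors the bot's later suggestion), not the PR's actual code:

    import { exec } from 'node:child_process';
    import { promisify } from 'node:util';
    import { test, expect } from '@playwright/test';

    const execAsync = promisify(exec);

    test('run example', async () => {
      const exampleDir = 'examples/e2e'; // placeholder path
      // Both steps are awaited, so failures surface as rejected promises
      // instead of mixing execSync with a callback-style exec.
      await execAsync('pnpm build');
      const { stdout, stderr } = await execAsync(`cd ${exampleDir} && pnpm start`, {
        timeout: 30_000, // bound the run so a hung demo fails the test
      });
      expect(stdout).toContain('Expected output');
      expect(stderr).toBe('');
    });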


    PR Code Suggestions ✨

    Best practice
    Add .last-run.json to .gitignore to avoid committing frequently changing test result files

    Consider removing the .last-run.json file from version control by adding it to .gitignore,
    as it is likely to change frequently and may not be relevant to all developers.

    js/test-results/.last-run.json [1-4]

    -{
    -  "status": "passed",
    -  "failedTests": []
    -}
    +# Add `.last-run.json` to `.gitignore` file
     
Suggestion importance [1-10]: 8

Why: Adding frequently changing files like test results to .gitignore is a best practice to keep the repository clean and relevant, which makes this suggestion very useful.
    Use Playwright APIs instead of execSync and exec for better control and error handling

    Instead of using execSync and exec within the test, consider using Playwright's page.goto
    and other Playwright APIs to interact with the examples. This will provide better control
    over the test flow and error handling.

    js/tests/run-tests.spec.ts [17-35]

    -execSync(`pnpm build && cd ${exampleDir} && pnpm link ../../`);
    -exec(`pnpm build && cd ${exampleDir} && pnpm start`, (error, stdout, stderr) => {
    -  if (error) {
    -    console.error(`exec error: ${error}`);
    -    reject(error);
    -    return;
    -  }
    -  console.log(`stdout: ${stdout}`);
    -  console.error(`stderr: ${stderr}`);
    -  
    -  // Assert some stuff on stdout for test checks
    -  try {
    -    expect(stdout).toContain('Expected output');
    -    expect(stderr).toBe('');
    -    resolve();
    -  } catch (assertionError) {
    -    reject(assertionError);
    -  }
    -});
    +await page.goto(`file://${exampleDir}/index.html`);
    +// Add further interactions and assertions using Playwright APIs
    +const content = await page.content();
    +expect(content).toContain('Expected output');
     
Suggestion importance [1-10]: 7

Why: The suggestion to use Playwright APIs for better control and error handling is valid and improves test reliability and readability. However, it's not a critical bug fix, hence the score.
    Use Playwright's built-in error handling and reporting mechanisms instead of manual try-catch blocks

    Instead of catching and logging errors within the test, consider using Playwright's
    built-in error handling and reporting mechanisms to provide more structured and
    informative test results.

    js/tests/run-tests.spec.ts [9-42]

    -try {
    -  const files = fs.readdirSync(examplesDir);
    -  // ...
    -} catch (err) {
    -  console.error(`Unable to read examples directory: ${err}`);
    -}
    +const files = await fs.promises.readdir(examplesDir);
    +// ...
     
Suggestion importance [1-10]: 5

Why: While using built-in error handling can make the code cleaner, the suggestion incorrectly replaces synchronous file reading with an asynchronous one without adjusting the surrounding code context, which could lead to issues.
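
For illustration, a corrected variant of that suggestion would move the read inside the async test body so a rejection is owned by the test runner. A minimal sketch, with the directory layout assumed from the file paths above:

    import fs from 'node:fs';
    import path from 'node:path';
    import { test } from '@playwright/test';

    test('discover examples', async () => {
      const examplesDir = path.join(__dirname, '..', 'examples'); // assumed layout
      // A rejected promise fails the test with a structured report,
      // replacing the manual try/catch and console.error.
      const files = await fs.promises.readdir(examplesDir);
      console.log(`found ${files.length} example project(s):`, files);
    });
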
    Enhancement
    Add a pretest script to ensure dependencies are installed before running tests

    Consider adding a pretest script to ensure that the necessary dependencies are installed
    before running the tests. This can help avoid issues where tests fail due to missing
    dependencies.

    js/package.json [6-10]

     "scripts": {
    +  "pretest": "npm install",
       "test": "playwright test tests/*",
       "build": "tsc  --project . --outDir lib",
       "type-docs": "typedoc"
     },
     
Suggestion importance [1-10]: 7

Why: Adding a pretest script can help ensure that all dependencies are installed before tests are run, which is a good practice to prevent failures due to missing dependencies. This suggestion is relevant and enhances maintainability.
    Add a postinstall script to build the project after dependencies are installed

    It might be beneficial to add a postinstall script to run npm run build after dependencies
    are installed. This ensures that the project is built and ready to use immediately after
    installation.

    js/package.json [6-10]

     "scripts": {
       "test": "playwright test tests/*",
       "build": "tsc  --project . --outDir lib",
    -  "type-docs": "typedoc"
    +  "type-docs": "typedoc",
    +  "postinstall": "npm run build"
     },
     
Suggestion importance [1-10]: 7

Why: The suggestion to add a postinstall script that automatically builds the project after installation is a good practice for ensuring the project is immediately usable after setup. This enhances the user experience and project readiness.
    Add a prestart script to ensure dependencies are installed before starting the demo

    Consider adding a prestart script to ensure that the necessary dependencies are installed
    before starting the demo. This can help avoid runtime errors due to missing dependencies.

    js/examples/e2e/package.json [6-9]

     "scripts": {
    +  "prestart": "npm install",
       "start": "node demo.mjs",
       "test": "echo \"Error: no test specified\" && exit 1"
     },
     
Suggestion importance [1-10]: 7

Why: Adding a prestart script is beneficial for ensuring all necessary dependencies are installed before the demo starts, which can prevent runtime errors. This suggestion is practical and improves the robustness of the demo setup.
    Add a script for running tests to the scripts section

    Consider adding a script for running tests, such as "test": "mocha" or another testing
    framework, to facilitate automated testing.

    js/examples/langchain/package.json [7-8]

     "start": "node demo.mjs",
    -"test": "echo \"Error: no test specified\" && exit 1"
    +"test": "mocha"
     
Suggestion importance [1-10]: 7

Why: The suggestion to replace a placeholder test script with a functional one like "mocha" is beneficial for enabling actual automated testing, which is a good practice.
    Possible issue
    Add a timeout to the exec function to prevent the test from hanging indefinitely

    Add a timeout to the exec function to prevent the test from hanging indefinitely if the
    command fails to complete.

    js/tests/run-tests.spec.ts [18-35]

    -exec(`pnpm build && cd ${exampleDir} && pnpm start`, (error, stdout, stderr) => {
    +exec(`pnpm build && cd ${exampleDir} && pnpm start`, { timeout: 30000 }, (error, stdout, stderr) => {
       if (error) {
         console.error(`exec error: ${error}`);
         reject(error);
         return;
       }
       console.log(`stdout: ${stdout}`);
       console.error(`stderr: ${stderr}`);
       
       // Assert some stuff on stdout for test checks
       try {
         expect(stdout).toContain('Expected output');
         expect(stderr).toBe('');
         resolve();
       } catch (assertionError) {
         reject(assertionError);
       }
     });
     
Suggestion importance [1-10]: 6

Why: Adding a timeout is a good practice to prevent tests from hanging, which can improve the robustness of test execution. It's a minor but useful improvement.


    codiumai-pr-agent-pro bot commented Jul 1, 2024

    CI Failure Feedback 🧐

    (Checks updated until commit 56005cc)

    Action: JS tests

    Failed stage: Run tests [❌]

    Failed test names:
    tests/run-tests.spec.ts:13:9 › e2e
    tests/run-tests.spec.ts:13:9 › langchain
    tests/run-tests.spec.ts:13:9 › openai

    Failure summary:

    The action failed due to multiple errors in different tests:

  • The e2e test failed with a BadRequestError caused by a validation error in the request payload:
    the data property must be an object and must not be empty.
  • The langchain test failed because the received value in the expect assertion was undefined,
    which is not allowed.
  • The openai test failed with a TypeError from attempting to read properties of undefined
    (specifically, the no_auth property in app.yaml).

  • Relevant error logs:
    1:  ##[group]Operating System
    2:  Ubuntu
    ...
    
    649:  COMPOSIO_BASE_URL: ***
    650:  OPENAI_API_KEY: ***
    651:  ##[endgroup]
    652:  > [email protected] test /home/runner/work/composio/composio/js
    653:  > playwright test tests/*
    654:  Running 3 tests using 1 worker
    655:  Running example: e2e
    656:  /home/runner/work/composio/composio/js/lib/sdk/client/core/request.js:264
    657:  throw new ApiError_1.ApiError(options, result, error);
    658:  ^
    659:  ApiError: Bad Request
    660:  at catchErrorCodes (/home/runner/work/composio/composio/js/lib/sdk/client/core/request.js:264:15)
    661:  at /home/runner/work/composio/composio/js/lib/sdk/client/core/request.js:309:45
    662:  at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
    663:  url: '***/v1/connectedAccounts',
    664:  status: 400,
    665:  statusText: 'Bad Request',
    666:  body: {
    667:  message: 'Validation error. Please check your input.',
    668:  errors: [
    ...

    674:  property: 'data',
    675:  children: [],
    676:  constraints: {
    677:  isObject: 'data must be an object',
    678:  isNotEmpty: 'data should not be empty'
    679:  }
    680:  }
    681:  ],
    682:  stack: 'Error: \n' +
    683:  '    at new HttpError (/app/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/src/http-error/HttpError.ts:16:18)\n' +
    684:  '    at new BadRequestError (/app/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/src/http-error/BadRequestError.ts:10:5)\n' +
    ...

    690:  method: 'POST',
    691:  url: '/v1/connectedAccounts',
    692:  body: {
    693:  integrationId: '3011084c-0c3e-4787-9949-8179675c1c5b',
    694:  userUuid: 'default',
    695:  redirectUri: undefined
    696:  },
    697:  mediaType: 'application/json',
    698:  errors: { '404': '{\n    "message": "Connector not found"\n}' }
    699:  }
    700:  }
    701:  Node.js v20.15.0
    702:  exec error: Error: Command failed: pnpm build && cd /home/runner/work/composio/composio/js/examples/e2e && pnpm start
    703:  /home/runner/work/composio/composio/js/lib/sdk/client/core/request.js:264
    704:  throw new ApiError_1.ApiError(options, result, error);
    705:  ^
    706:  ApiError: Bad Request
    707:  at catchErrorCodes (/home/runner/work/composio/composio/js/lib/sdk/client/core/request.js:264:15)
    708:  at /home/runner/work/composio/composio/js/lib/sdk/client/core/request.js:309:45
    709:  at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
    710:  url: '***/v1/connectedAccounts',
    711:  status: 400,
    712:  statusText: 'Bad Request',
    713:  body: {
    714:  message: 'Validation error. Please check your input.',
    715:  errors: [
    ...

    721:  property: 'data',
    722:  children: [],
    723:  constraints: {
    724:  isObject: 'data must be an object',
    725:  isNotEmpty: 'data should not be empty'
    726:  }
    727:  }
    728:  ],
    729:  stack: 'Error: \n' +
    730:  '    at new HttpError (/app/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/src/http-error/HttpError.ts:16:18)\n' +
    731:  '    at new BadRequestError (/app/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/src/http-error/BadRequestError.ts:10:5)\n' +
    ...

    737:  method: 'POST',
    738:  url: '/v1/connectedAccounts',
    739:  body: {
    740:  integrationId: '3011084c-0c3e-4787-9949-8179675c1c5b',
    741:  userUuid: 'default',
    742:  redirectUri: undefined
    743:  },
    744:  mediaType: 'application/json',
    745:  errors: { '404': '{\n    "message": "Connector not found"\n}' }
    746:  }
    747:  }
    748:  Node.js v20.15.0
    749:  FRunning example: langchain
    750:  BadRequestError: 400 Invalid 'functions': empty array. Expected an array with minimum length 1, but got an empty array instead.
    751:  at APIError.generate (file:///home/runner/work/composio/composio/js/node_modules/.pnpm/[email protected]/node_modules/openai/error.mjs:41:20)
    752:  at OpenAI.makeStatusError (file:///home/runner/work/composio/composio/js/node_modules/.pnpm/[email protected]/node_modules/openai/core.mjs:268:25)
    ...

    773:  'x-ratelimit-limit-tokens': '40000',
    774:  'x-ratelimit-remaining-requests': '4999',
    775:  'x-ratelimit-remaining-tokens': '39925',
    776:  'x-ratelimit-reset-requests': '12ms',
    777:  'x-ratelimit-reset-tokens': '112ms',
    778:  'x-request-id': 'req_d4f4aec4d359971230a2b70aa337303d'
    779:  },
    780:  request_id: 'req_d4f4aec4d359971230a2b70aa337303d',
    781:  error: {
    782:  message: "Invalid 'functions': empty array. Expected an array with minimum length 1, but got an empty array instead.",
    783:  type: 'invalid_request_error',
    784:  param: 'functions',
    785:  code: 'empty_array'
    786:  },
    787:  code: 'empty_array',
    788:  param: 'functions',
    789:  type: 'invalid_request_error',
    790:  attemptNumber: 1,
    791:  retriesLeft: 6
    792:  }
    793:  stderr: undefined
    794:  stdout: undefined
    795:  exec error: Error: expect(received).toContain(expected) // indexOf
    796:  Matcher error: received value must not be null nor undefined
    797:  Received has value: undefined
    798:  FRunning example: openai
    799:  /home/runner/work/composio/composio/js/lib/sdk/index.js:105
    800:  if (app.yaml.no_auth) {
    801:  ^
    802:  TypeError: Cannot read properties of undefined (reading 'no_auth')
    803:  at Entity.execute (/home/runner/work/composio/composio/js/lib/sdk/index.js:105:22)
    804:  at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    805:  at async OpenAIToolSet.execute_tool_call (/home/runner/work/composio/composio/js/lib/frameworks/openai.js:55:31)
    806:  at async OpenAIToolSet.handle_tool_call (/home/runner/work/composio/composio/js/lib/frameworks/openai.js:61:30)
    807:  at async executeAgent (file:///home/runner/work/composio/composio/js/examples/openai/demo.mjs:41:5)
    808:  Node.js v20.15.0
    809:  exec error: Error: Command failed: pnpm build && cd /home/runner/work/composio/composio/js/examples/openai && pnpm start
    810:  /home/runner/work/composio/composio/js/lib/sdk/index.js:105
    811:  if (app.yaml.no_auth) {
    812:  ^
    813:  TypeError: Cannot read properties of undefined (reading 'no_auth')
    814:  at Entity.execute (/home/runner/work/composio/composio/js/lib/sdk/index.js:105:22)
    815:  at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    816:  at async OpenAIToolSet.execute_tool_call (/home/runner/work/composio/composio/js/lib/frameworks/openai.js:55:31)
    817:  at async OpenAIToolSet.handle_tool_call (/home/runner/work/composio/composio/js/lib/frameworks/openai.js:61:30)
    818:  at async executeAgent (file:///home/runner/work/composio/composio/js/examples/openai/demo.mjs:41:5)
    819:  Node.js v20.15.0
    820:  F
    821:  1) tests/run-tests.spec.ts:13:9 › e2e ────────────────────────────────────────────────────────────
    822:  Error: Command failed: pnpm build && cd /home/runner/work/composio/composio/js/examples/e2e && pnpm start
    823:  /home/runner/work/composio/composio/js/lib/sdk/client/core/request.js:264
    824:  throw new ApiError_1.ApiError(options, result, error);
    825:  ^
    826:  ApiError: Bad Request
    827:  at lib/sdk/client/core/request.js:264
    828:  262 |     const error = errors[result.status];
    829:  263 |     if (error) {
    830:  > 264 |         throw new ApiError_1.ApiError(options, result, error);
    831:  |               ^
    832:  265 |     }
    833:  266 |     if (!result.ok) {
    834:  267 |         const errorStatus = (_a = result.status) !== null && _a !== void 0 ? _a : 'unknown';
    835:  at catchErrorCodes (/home/runner/work/composio/composio/js/lib/sdk/client/core/request.js:264:15)
    836:  at /home/runner/work/composio/composio/js/lib/sdk/client/core/request.js:309:45
    837:  at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
    838:  url: '***/v1/connectedAccounts',
    839:  status: 400,
    840:  statusText: 'Bad Request',
    841:  body: {
    842:  message: 'Validation error. Please check your input.',
    843:  errors: [
    ...

    849:  property: 'data',
    850:  children: [],
    851:  constraints: {
    852:  isObject: 'data must be an object',
    853:  isNotEmpty: 'data should not be empty'
    854:  }
    855:  }
    856:  ],
    857:  stack: 'Error: \n' +
    858:  '    at new HttpError (/app/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/src/http-error/HttpError.ts:16:18)\n' +
    859:  '    at new BadRequestError (/app/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/src/http-error/BadRequestError.ts:10:5)\n' +
    ...

    865:  method: 'POST',
    866:  url: '/v1/connectedAccounts',
    867:  body: {
    868:  integrationId: '3011084c-0c3e-4787-9949-8179675c1c5b',
    869:  userUuid: 'default',
    870:  redirectUri: undefined
    871:  },
    872:  mediaType: 'application/json',
    873:  errors: { '404': '{\n    "message": "Connector not found"\n}' }
    874:  }
    875:  }
    876:  Node.js v20.15.0
    877:  at catchErrorCodes (/home/runner/work/composio/composio/js/lib/sdk/client/core/request.js:264:15)
    878:  at /home/runner/work/composio/composio/js/lib/sdk/client/core/request.js:309:45
    879:  at /home/runner/work/composio/composio/js/tests/run-tests.spec.ts:18:46
    880:  at /home/runner/work/composio/composio/js/tests/run-tests.spec.ts:14:13
    881:  2) tests/run-tests.spec.ts:13:9 › langchain ──────────────────────────────────────────────────────
    882:  Error: expect(received).toContain(expected) // indexOf
    883:  Matcher error: received value must not be null nor undefined
    884:  Received has value: undefined
    885:  21 |           
    886:  22 |           // Assert some stuff on stdout for test checks
    887:  > 23 |           expect(stdout).toContain('Expected output');
    888:  |                          ^
    889:  24 |           expect(stderr).toBe('');
    890:  25 |           resolve();
    891:  26 |         } catch (error) {
    892:  at /home/runner/work/composio/composio/js/tests/run-tests.spec.ts:23:26
    893:  at /home/runner/work/composio/composio/js/tests/run-tests.spec.ts:14:13
    894:  3) tests/run-tests.spec.ts:13:9 › openai ─────────────────────────────────────────────────────────
    895:  Error: Command failed: pnpm build && cd /home/runner/work/composio/composio/js/examples/openai && pnpm start
    896:  /home/runner/work/composio/composio/js/lib/sdk/index.js:105
    897:  if (app.yaml.no_auth) {
    898:  ^
    899:  TypeError: Cannot read properties of undefined (reading 'no_auth')
    ...
    
    911:  at async OpenAIToolSet.handle_tool_call (/home/runner/work/composio/composio/js/lib/frameworks/openai.js:61:30)
    912:  at async executeAgent (file:///home/runner/work/composio/composio/js/examples/openai/demo.mjs:41:5)
    913:  Node.js v20.15.0
    914:  at Entity.execute (/home/runner/work/composio/composio/js/lib/sdk/index.js:105:22)
    915:  at async OpenAIToolSet.execute_tool_call (/home/runner/work/composio/composio/js/lib/frameworks/openai.js:55:31)
    916:  at async OpenAIToolSet.handle_tool_call (/home/runner/work/composio/composio/js/lib/frameworks/openai.js:61:30)
    917:  at /home/runner/work/composio/composio/js/tests/run-tests.spec.ts:18:46
    918:  at /home/runner/work/composio/composio/js/tests/run-tests.spec.ts:14:13
    919:  3 failed
    920:  tests/run-tests.spec.ts:13:9 › e2e ─────────────────────────────────────────────────────────────
    921:  tests/run-tests.spec.ts:13:9 › langchain ───────────────────────────────────────────────────────
    922:  tests/run-tests.spec.ts:13:9 › openai ──────────────────────────────────────────────────────────
    923:  ELIFECYCLE  Test failed. See above for more details.
    924:  ##[error]Process completed with exit code 1.
    

    ✨ CI feedback usage guide:

    The CI feedback tool (/checks) automatically triggers when a PR has a failed check.
    The tool analyzes the failed checks and provides several kinds of feedback:

    • Failed stage
    • Failed test name
    • Failure summary
    • Relevant error logs

    In addition to being automatically triggered, the tool can also be invoked manually by commenting on a PR:

    /checks "https://github.com/{repo_name}/actions/runs/{run_number}/job/{job_number}"
    

    where {repo_name} is the name of the repository, {run_number} is the run number of the failed check, and {job_number} is the job number of the failed check.

    Configuration options

    • enable_auto_checks_feedback - if set to true, the tool will automatically provide feedback when a check is failed. Default is true.
    • excluded_checks_list - a list of checks to exclude from the feedback, for example: ["check1", "check2"]. Default is an empty list.
    • enable_help_text - if set to true, the tool will provide a help message with the feedback. Default is true.
    • persistent_comment - if set to true, the tool will overwrite a previous checks comment with the new feedback. Default is true.
    • final_update_message - if persistent_comment is true and updating a previous checks message, the tool will also create a new message: "Persistent checks updated to latest commit". Default is true.

    See more information about the checks tool in the docs.
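
If these options are set in a PR-Agent configuration file, the entries might look like the following sketch (TOML; the `[checks]` section name is assumed rather than confirmed by this thread):

    [checks]  # section name assumed
    enable_auto_checks_feedback = true
    excluded_checks_list = []
    enable_help_text = true
    persistent_comment = true
    final_update_message = true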

    kaavee315 and others added 7 commits July 1, 2024 18:15
    ### **PR Type**
    Enhancement, Bug fix
    
    
    ___
    Refactor workspace
    
    ### **Description**
    - Added `execute` command to `actions` group for executing actions with
    parameters.
    - Replaced `_get_enum_key` with `get_enum_key` from
    `composio.utils.enums` across multiple files.
    - Added `ExecutionEnvironment` and `Env` classes to handle different
    execution environments.
    - Modified `LocalToolHandler` to initialize and execute actions based on
    the execution environment.
    - Simplified tools initialization by removing old toolset code and
    importing `ComposioToolSet`.
    - Added `execute_action` method to `DockerWorkspace` and abstract method
    in `base_workspace`.
    - Modified `create_workspace` in `workspace_factory` to return
    `Workspace` object instead of ID.
    - Building Docker images by:
      -- cloning the required version via git
      -- installing requirements via pip
      -- installing composio core via pip (to run Composio tools)


    Docker images are public and hosted under the `techcomposio` namespace.
    
    ---------
    
    Co-authored-by: Karan Vaidya <[email protected]>
    Co-authored-by: angrybayblade <[email protected]>
    Co-authored-by: Viraj <[email protected]>
@utkarsh-dixit (Collaborator, Author) commented:

    Pull Request Summary

    Changes and Objectives

    This PR includes several updates that enhance the JavaScript examples and the testing workflow, and add support for the COMPOSIO_API_KEY and COMPOSIO_BASE_URL environment variables. Below are the key changes:

    1. Added Example Tests for JavaScript and Updated Existing Examples:

      • Created new tests and sample workflows in JavaScript.
      • Modified numerous example files to update their behavior and configurations.
    2. Updated GitHub Actions Workflow:

      • Added a JavaScript test workflow in the .github/workflows/common.yml for running JavaScript tests automatically.
    3. Modified SDK Configuration:

      • Updated the OpenAPI configuration and the Composio class to support the COMPOSIO_API_KEY and COMPOSIO_BASE_URL environment variables (see the sketch after this list).
    4. Included Playwright for Testing:

      • Added Playwright dependencies and configuration for end-to-end testing.
      • Created new test files with Playwright to run and validate examples and scripts.
    5. General Code Maintenance:

      • Removed unnecessary dependencies and code sections for better clarity and performance.
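
A minimal sketch of the environment-variable support described in item 3 above; the real class in js/src/sdk/index.ts may differ in shape, and the error message is illustrative:

    export class Composio {
      readonly apiKey: string;
      readonly baseUrl?: string;

      constructor(apiKey?: string, baseUrl?: string) {
        // Explicit constructor arguments take precedence over the environment.
        this.apiKey = apiKey ?? process.env.COMPOSIO_API_KEY ?? '';
        this.baseUrl = baseUrl ?? process.env.COMPOSIO_BASE_URL;
        if (!this.apiKey) {
          throw new Error('No API key provided; pass one or set COMPOSIO_API_KEY');
        }
      }
    }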

    Categorization

    • Type: Feature

    Important Change Files

    This PR mainly affects multiple areas including JavaScript examples, workflows, and test configurations. The important files are:

    • Workflow Files:
      • .github/workflows/common.yml
    • JavaScript Example Files:
      • js/examples/e2e/demo.mjs
      • js/examples/langchain/demo.mjs
      • js/examples/openai/demo.mjs
      • js/package.json
    • Configuration and Testing:
      • js/playwright.config.ts
      • js/tests/run-tests.spec.ts
    • SDK Files:
      • js/src/sdk/client/core/OpenAPI.ts
      • js/src/sdk/index.ts

@kaavee315 closed this Aug 2, 2024