
check ci #670

Closed
wants to merge 1 commit into from

Conversation

mrT23
Collaborator

@mrT23 mrT23 commented Feb 18, 2024

Type

bug_fix


Description

  • Fixed an incorrect assertion in test_no_ignores within test_file_filter.py. The test now expects None instead of all files when no ignore patterns are given, aligning the test expectation with the updated logic.

Changes walkthrough

Relevant files
Bug fix
test_file_filter.py
Fix Expected Result in Test for File Filtering Without Ignore Patterns

tests/unittest/test_file_filter.py

  • Modified the expected result in test_no_ignores from files to None (a sketch of the change follows below).
  • +1/-1
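  A minimal sketch of the one-line change, inferred from this walkthrough and the review excerpts further down; the removed line is an assumption, since only the new assertion appears on this page:

    -assert filter_ignored(files) == files, "Expected all files to be returned when no ignore patterns are given."
    +assert filter_ignored(files) == None, "Expected all files to be returned when no ignore patterns are given."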

    PR-Agent usage:
    Comment /help on the PR to get a list of all available PR-Agent tools and their descriptions

    @mrT23 mrT23 marked this pull request as ready for review February 18, 2024 07:40
    Contributor

    PR Description updated to latest commit (1bd3415)

    Contributor

    qodo-merge-pro bot commented Feb 18, 2024

    PR Review

    (Review updated until commit 1bd3415)

         PR feedback                    
    ⏱️ Estimated effort to review [1-5]

    1, because the PR involves a simple change in a unit test's expected outcome. The change is straightforward and impacts a single assertion in one test case.

    🧪 Relevant tests

    Yes

    🔍 Possible issues
    • The assertion message in test_no_ignores is misleading after the change. It still says "Expected all files to be returned when no ignore patterns are given." but the expected result is now None.
    🔒 Security concerns

    No

    Code feedback:
    Relevant file: tests/unittest/test_file_filter.py
    Suggestion:

    Update the assertion message in test_no_ignores to accurately reflect the new expected outcome (None) when no ignore patterns are given. This will avoid confusion for anyone reading the test in the future. [important]

    Relevant line: assert filter_ignored(files) == None, "Expected all files to be returned when no ignore patterns are given."
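
    A possible rewording along these lines, shown only as an illustration (the exact message text is not part of the PR):

    assert filter_ignored(files) == None, "Expected None to be returned when no ignore patterns are given."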


    ✨ Review tool usage guide:

    Overview:
    The review tool scans the PR code changes, and generates a PR review. The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on any PR.
    When commenting, to edit configurations related to the review tool (pr_reviewer section), use the following template:

    /review --pr_reviewer.some_config1=... --pr_reviewer.some_config2=...
    

    With a configuration file, use the following template:

    [pr_reviewer]
    some_config1=...
    some_config2=...
    
    Utilizing extra instructions

    The review tool can be configured with extra instructions, which can be used to guide the model toward feedback tailored to the needs of your project.

    Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify the relevant sub-tool, and the relevant aspects of the PR that you want to emphasize.

    Examples for extra instructions:

    [pr_reviewer] # /review #
    extra_instructions="""
    In the 'possible issues' section, emphasize the following:
    - Does the code logic cover relevant edge cases?
    - Is the code logic clear and easy to understand?
    - Is the code logic efficient?
    ...
    """
    

    Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.

    How to enable/disable automation
    • When you first install the PR-Agent app, the default mode for the review tool is:
    pr_commands = ["/review", ...]
    

    meaning the review tool will run automatically on every PR, with the default configuration.
    Edit this field to enable/disable the tool, or to change the configuration used.

    Auto-labels

    The review tool can auto-generate two specific types of labels for a PR:

    • a possible security issue label, applied when a possible security issue is detected (enable_review_labels_security flag)
    • a Review effort [1-5]: x label, where x is the estimated effort to review the PR (enable_review_labels_effort flag)
    Extra sub-tools

    The review tool provides a collection of possible feedback options about a PR.
    It is recommended to review the possible options and choose the ones relevant to your use case.
    Some of the features that are disabled by default are quite useful and worth enabling. For example:
    require_score_review, require_soc2_ticket, and more.

    Auto-approve PRs

    By invoking:

    /review auto_approve
    

    The tool will automatically approve the PR, and add a comment with the approval.

    To ensure safety, the auto-approval feature is disabled by default. To enable auto-approval, you need to explicitly set the following in a pre-defined configuration file:

    [pr_reviewer]
    enable_auto_approval = true
    

    (this specific flag cannot be set with a command line argument, only in the configuration file committed to the repository)

    You can also enable auto-approval only if the PR meets certain requirements, such as the estimated review effort being equal to or below a certain threshold, by adjusting the flag:

    [pr_reviewer]
    maximal_review_effort = 5
    
    More PR-Agent commands

    To invoke the PR-Agent, add a comment using one of the following commands:

    • /review: Request a review of your Pull Request.
    • /describe: Update the PR title and description based on the contents of the PR.
    • /improve [--extended]: Suggest code improvements. Extended mode provides higher-quality feedback.
    • /ask <QUESTION>: Ask a question about the PR.
    • /update_changelog: Update the changelog based on the PR's contents.
    • /add_docs 💎: Generate docstring for new components introduced in the PR.
    • /generate_labels 💎: Generate labels for the PR based on the PR's contents.
    • /analyze 💎: Automatically analyzes the PR, and presents changes walkthrough for each component.

    See the tools guide for more details.
    To list the possible configuration parameters, add a /config comment.

    See the review usage page for a comprehensive guide on using this tool.

    Contributor

    qodo-merge-pro bot commented Feb 18, 2024

    PR Code Suggestions

    Suggestions                                                                                                                                                     
    best practice
    Use is None for comparison instead of ==.          

    The assertion filter_ignored(files) == None should use is None for comparison instead of
    ==. This is because None is a singleton in Python, and identity checks are more
    appropriate and semantically correct when comparing singletons.

    tests/unittest/test_file_filter.py [17]

    -assert filter_ignored(files) == None, "Expected all files to be returned when no ignore patterns are given."
    +assert filter_ignored(files) is None, "Expected all files to be returned when no ignore patterns are given."
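
    A small standalone Python illustration of why the identity check is preferred for None; the class here is hypothetical and not from the PR:

    class AlwaysEqual:
        def __eq__(self, other):
            return True  # user-defined equality can make '== None' pass misleadingly

    obj = AlwaysEqual()
    print(obj == None)  # True  -- equality is satisfied even though obj is not None
    print(obj is None)  # False -- identity against the None singleton is unambiguous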
     

    ✨ Usage guide:

    Overview:
    The improve tool scans the PR code changes, and automatically generates suggestions for improving the PR code. The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on a PR.
    When commenting, to edit configurations related to the improve tool (pr_code_suggestions section), use the following template:

    /improve --pr_code_suggestions.some_config1=... --pr_code_suggestions.some_config2=...
    

    With a configuration file, use the following template:

    [pr_code_suggestions]
    some_config1=...
    some_config2=...
    
    Enabling/disabling automation

    When you first install the app, the default mode for the improve tool is:

    pr_commands = ["/improve --pr_code_suggestions.summarize=true", ...]
    

    meaning the improve tool will run automatically on every PR, with summarization enabled. Delete this line to disable the tool from running automatically.

    Utilizing extra instructions

    Extra instructions are very important for the improve tool, since they let you guide the model toward suggestions that are more relevant to the specific needs of the project.

    Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify relevant aspects that you want the model to focus on.

    Examples for extra instructions:

    [pr_code_suggestions] # /improve #
    extra_instructions="""
    Emphasize the following aspects:
    - Does the code logic cover relevant edge cases?
    - Is the code logic clear and easy to understand?
    - Is the code logic efficient?
    ...
    """
    

    Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.

    A note on code suggestions quality
    • While the current AI for code is getting better and better (GPT-4), it's not flawless. Not all of the suggestions will be perfect, and a user should not accept all of them automatically.
    • Suggestions are not meant to be simplistic. Instead, they aim to give deep feedback and raise questions, ideas and thoughts to the user, who can then use their judgment, experience, and understanding of the code base.
    • It is recommended to use the 'extra_instructions' field to guide the model toward suggestions that are more relevant to the specific needs of the project, or to use the custom suggestions 💎 tool.
    • With large PRs, the best quality will be obtained by using 'improve --extended' mode.
    More PR-Agent commands

    To invoke the PR-Agent, add a comment using one of the following commands:

    • /review: Request a review of your Pull Request.
    • /describe: Update the PR title and description based on the contents of the PR.
    • /improve [--extended]: Suggest code improvements. Extended mode provides higher-quality feedback.
    • /ask <QUESTION>: Ask a question about the PR.
    • /update_changelog: Update the changelog based on the PR's contents.
    • /add_docs 💎: Generate docstring for new components introduced in the PR.
    • /generate_labels 💎: Generate labels for the PR based on the PR's contents.
    • /analyze 💎: Automatically analyzes the PR, and presents changes walkthrough for each component.

    See the tools guide for more details.
    To list the possible configuration parameters, add a /config comment.

    See the improve usage page for a more comprehensive guide on using this tool.

    Contributor

    qodo-merge-pro bot commented Feb 18, 2024

    PR Checks Failure Feedback

    (Checks updated until commit 1bd3415)

    Action: build-and-test

    Failed stage: Test dev docker [❌]

    Failed test name: TestIgnoreFilter::test_no_ignores

    Failure summary:

    The action failed because the test test_no_ignores in the file tests/unittest/test_file_filter.py
    failed. The failure occurred because the function filter_ignored did not return None as expected
    when no ignore patterns were given. Instead, it returned a list of file objects, indicating that the
    function's behavior did not match the expected outcome of handling files without any ignore
    patterns.

    Relevant error logs:
    1:  Job defined at: Codium-ai/pr-agent/.github/workflows/build-and-test.yaml@refs/heads/mrT23-patch-2
    2:  Waiting for a runner to pick up this job...
    ...
    
    971:  tests/unittest/test_delete_hunks.py::TestOmitDeletionHunks::test_patch_one_hunk PASSED [ 39%]
    972:  tests/unittest/test_delete_hunks.py::TestOmitDeletionHunks::test_patch_deletions_no_additions PASSED [ 40%]
    973:  tests/unittest/test_extend_patch.py::TestExtendPatch::test_happy_path PASSED [ 42%]
    974:  tests/unittest/test_extend_patch.py::TestExtendPatch::test_empty_patch PASSED [ 43%]
    975:  tests/unittest/test_extend_patch.py::TestExtendPatch::test_zero_num_lines PASSED [ 45%]
    976:  tests/unittest/test_extend_patch.py::TestExtendPatch::test_no_hunks PASSED [ 46%]
    977:  tests/unittest/test_extend_patch.py::TestExtendPatch::test_single_hunk PASSED [ 47%]
    978:  tests/unittest/test_extend_patch.py::TestExtendPatch::test_multiple_hunks PASSED [ 49%]
    979:  tests/unittest/test_file_filter.py::TestIgnoreFilter::test_no_ignores FAILED [ 50%]
    ...
    
    1007:  tests/unittest/test_parse_code_suggestion.py::TestParseCodeSuggestion::test_non_string_before_or_after PASSED [ 90%]
    1008:  tests/unittest/test_parse_code_suggestion.py::TestParseCodeSuggestion::test_no_code_example_key PASSED [ 91%]
    1009:  tests/unittest/test_parse_code_suggestion.py::TestParseCodeSuggestion::test_with_code_example_key PASSED [ 92%]
    1010:  tests/unittest/test_try_fix_yaml.py::TestTryFixYaml::test_valid_yaml PASSED [ 94%]
    1011:  tests/unittest/test_try_fix_yaml.py::TestTryFixYaml::test_add_relevant_line PASSED [ 95%]
    1012:  tests/unittest/test_try_fix_yaml.py::TestTryFixYaml::test_extract_snippet PASSED [ 97%]
    1013:  tests/unittest/test_try_fix_yaml.py::TestTryFixYaml::test_remove_last_line PASSED [ 98%]
    1014:  tests/unittest/test_try_fix_yaml.py::TestTryFixYaml::test_empty_yaml_fixed PASSED [100%]
    1015:  =================================== FAILURES ===================================
    ...
    
    1022:  files = [
    1023:  type('', (object,), {'filename': 'file1.py'})(),
    1024:  type('', (object,), {'filename': 'file2.java'})(),
    1025:  type('', (object,), {'filename': 'file3.cpp'})(),
    1026:  type('', (object,), {'filename': 'file4.py'})(),
    1027:  type('', (object,), {'filename': 'file5.py'})()
    1028:  ]
    1029:  >       assert filter_ignored(files) == None, "Expected all files to be returned when no ignore patterns are given."
    1030:  E       AssertionError: Expected all files to be returned when no ignore patterns are given.
    1031:  E       assert [<test_file_filter. object at 0x7f5b5a1570d0>, <test_file_filter. object at 0x7f5b5a156920>, <test_file_filter. object at 0x7f5b5a156680>, <test_file_filter. object at 0x7f5b5a156860>, <test_file_filter. object at 0x7f5b5a1568f0>] == None
    1032:  E        +  where [<test_file_filter. object at 0x7f5b5a1570d0>, <test_file_filter. object at 0x7f5b5a156920>, <test_file_filter. object at 0x7f5b5a156680>, <test_file_filter. object at 0x7f5b5a156860>, <test_file_filter. object at 0x7f5b5a1568f0>] = filter_ignored([<test_file_filter. object at 0x7f5b5a1570d0>, <test_file_filter. object at 0x7f5b5a156920>, <test_file_filter. object at 0x7f5b5a156680>, <test_file_filter. object at 0x7f5b5a156860>, <test_file_filter. object at 0x7f5b5a1568f0>])
    1033:  tests/unittest/test_file_filter.py:17: AssertionError
    ...
    
    1035:  tests/unittest/test_file_filter.py:44
    1036:  /app/tests/unittest/test_file_filter.py:44: DeprecationWarning: invalid escape sequence '\.'
    1037:  monkeypatch.setattr(global_settings.ignore, 'regex', ['^file[2-4]\..*$'])
    1038:  tests/unittest/test_file_filter.py:65
    1039:  /app/tests/unittest/test_file_filter.py:65: DeprecationWarning: invalid escape sequence '\.'
    1040:  monkeypatch.setattr(global_settings.ignore, 'regex', ['(((||', '^file[2-4]\..*$'])
    1041:  -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
    1042:  =========================== short test summary info ============================
    1043:  FAILED tests/unittest/test_file_filter.py::TestIgnoreFilter::test_no_ignores
    1044:  =================== 1 failed, 70 passed, 2 warnings in 2.34s ===================
    1045:  ##[error]Process completed with exit code 1.
    

    Contributor

    Persistent checks updated to latest commit 1bd3415

    @mrT23
    Collaborator Author

    mrT23 commented Feb 19, 2024

    /help

    Contributor

    PR Agent Walkthrough

    🤖 Welcome to the PR Agent, an AI-powered tool for automated pull request analysis, feedback, suggestions and more.

    Here is a list of tools you can use to interact with the PR Agent:

    • DESCRIBE: Generates PR description - title, type, summary, code walkthrough and labels
    • REVIEW: Adjustable feedback about the PR, possible issues, security concerns, review effort and more
    • IMPROVE: Code suggestions for improving the PR
    • ASK: Answering free-text questions about the PR
    • SIMILAR ISSUE: Automatically retrieves and presents similar issues
    • UPDATE CHANGELOG: Automatically updates the changelog
    • ADD DOCUMENTATION: Generates documentation for methods/functions/classes that changed in the PR
    • GENERATE CUSTOM LABELS: Generates custom labels for the PR, based on specific guidelines defined by the user
    • ANALYZE: Identifies code components that changed in the PR, and lets you interactively generate tests, docs, and code suggestions for each component
    • TEST: Generates unit tests for a selected component, based on the PR code change
    • CI FEEDBACK: Generates feedback and analysis for a failed CI job
    • CUSTOM SUGGESTIONS: Generates custom suggestions for improving the PR code, based on specific guidelines defined by the user

    Note that each tool can be invoked automatically when a new PR is opened, or called manually by commenting on a PR.

    @mrT23
    Collaborator Author

    mrT23 commented Feb 19, 2024

    /describe

    Contributor

    qodo-merge-pro bot commented Feb 19, 2024

    PR Description updated to latest commit (1bd3415)

    Contributor


    Changes walkthrough

    Bug fix
    test_file_filter.py (+1/-1)
    Fix Expected Result in Test for No Ignore Patterns             

    tests/unittest/test_file_filter.py

  • Modified the expected result in test_no_ignores from returning all
    files to returning None when no ignore patterns are given.
    @CodiumAI-Agent

    Persistent review updated to latest commit 1bd3415

    @CodiumAI-Agent

    PR Analysis

    • 🎯 Main theme: Fixing a unit test in the file filter functionality
    • 📝 PR summary: This PR corrects the expected outcome in the test_no_ignores unit test. The test now expects None to be returned when no ignore patterns are specified, instead of expecting all files to be returned.
    • 📌 Type of PR: Bug fix
    • 🧪 Relevant tests added: No
    • ⏱️ Estimated effort to review [1-5]: 1, because the PR contains a single line change in a unit test.
    • 🔒 Security concerns: No security concerns found

    PR Feedback

    💡 General suggestions: The PR is straightforward and focused on a specific issue in a unit test. However, it would be beneficial to add a new test case that verifies the new expected behavior when no ignore patterns are specified.

    🤖 Code feedback:
    Relevant file: tests/unittest/test_file_filter.py
    Suggestion:

    Consider adding a new test case to verify the new expected behavior when no ignore patterns are specified. This will ensure that the function behaves as expected in this scenario. [medium]

    Relevant line: assert filter_ignored(files) == None, "Expected all files to be returned when no ignore patterns are given."
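
    A minimal sketch of such an added test, reusing the monkeypatch pattern visible in the CI log above and assuming the same imports (filter_ignored, global_settings) as the existing test module; the method name and the expected return value are assumptions (the CI run indicates the current implementation returns the input list unchanged when no ignore patterns are configured):

    # hypothetical addition inside the existing TestIgnoreFilter class
    def test_no_ignores_with_empty_patterns(self, monkeypatch):
        # explicitly clear the regex ignore patterns, as the other tests do
        monkeypatch.setattr(global_settings.ignore, 'regex', [])
        files = [type('', (object,), {'filename': 'file1.py'})()]
        assert filter_ignored(files) == files, "Expected the input files back when no ignore patterns are configured."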


    ✨ Usage guide:

    Overview:
    The review tool scans the PR code changes, and generates a PR review. The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on any PR.
    When commenting, to edit configurations related to the review tool (pr_reviewer section), use the following template:

    /review --pr_reviewer.some_config1=... --pr_reviewer.some_config2=...
    

    With a configuration file, use the following template:

    [pr_reviewer]
    some_config1=...
    some_config2=...
    
    Utilizing extra instructions

    The review tool can be configured with extra instructions, which can be used to guide the model toward feedback tailored to the needs of your project.

    Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify the relevant sub-tool, and the relevant aspects of the PR that you want to emphasize.

    Examples for extra instructions:

    [pr_reviewer] # /review #
    extra_instructions="""
    In the code feedback section, emphasize the following:
    - Does the code logic cover relevant edge cases?
    - Is the code logic clear and easy to understand?
    - Is the code logic efficient?
    ...
    """
    

    Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.

    How to enable/disable automation
    • When you first install the PR-Agent app, the default mode for the review tool is:
    pr_commands = ["/review", ...]
    

    meaning the review tool will run automatically on every PR, with the default configuration.
    Edit this field to enable/disable the tool, or to change the configuration used.

    About the 'Code feedback' section

    The review tool provides several types of feedback, one of which is code suggestions.
    If you are interested only in the code suggestions, it is recommended to use the improve feature instead, since it is dedicated only to code suggestions and usually gives better results.
    Use the review tool if you want more comprehensive feedback, which includes code suggestions as well.

    Auto-labels

    The review tool can auto-generate two specific types of labels for a PR:

    • a possible security issue label, applied when a possible security issue is detected (enable_review_labels_security flag)
    • a Review effort [1-5]: x label, where x is the estimated effort to review the PR (enable_review_labels_effort flag)
    Extra sub-tools

    The review tool provides a collection of possible feedback options about a PR.
    It is recommended to review the possible options and choose the ones relevant to your use case.
    Some of the features that are disabled by default are quite useful and worth enabling. For example:
    require_score_review, require_soc2_review, enable_review_labels_effort, and more.

    More PR-Agent commands

    To invoke the PR-Agent, add a comment using one of the following commands:

    • /review: Request a review of your Pull Request.
    • /describe: Update the PR title and description based on the contents of the PR.
    • /improve [--extended]: Suggest code improvements. Extended mode provides higher-quality feedback.
    • /ask <QUESTION>: Ask a question about the PR.
    • /update_changelog: Update the changelog based on the PR's contents.
    • /add_docs 💎: Generate docstring for new components introduced in the PR.
    • /generate_labels 💎: Generate labels for the PR based on the PR's contents.
    • /analyze 💎: Automatically analyzes the PR, and presents changes walkthrough for each component.

    See the tools guide for more details.
    To list the possible configuration parameters, add a /config comment.

    See the review usage page for a comprehensive guide on using this tool.

    @CodiumAI-Agent

    CodiumAI-Agent commented Feb 20, 2024

    PR Code Suggestions

    Suggestions                                                                                                                                                     
    best practice
    Use is instead of == for comparison with None.

    Use assert filter_ignored(files) is None instead of == for comparing to None. This is the
    recommended way to check for None in Python as it is more readable and idiomatic.

    tests/unittest/test_file_filter.py [17]

    -assert filter_ignored(files) == None, "Expected all files to be returned when no ignore patterns are given."
    +assert filter_ignored(files) is None, "Expected all files to be returned when no ignore patterns are given."
     

    ✨ Improve tool usage guide:

    Overview:
    The improve tool scans the PR code changes, and automatically generates suggestions for improving the PR code. The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on a PR.
    When commenting, to edit configurations related to the improve tool (pr_code_suggestions section), use the following template:

    /improve --pr_code_suggestions.some_config1=... --pr_code_suggestions.some_config2=...
    

    With a configuration file, use the following template:

    [pr_code_suggestions]
    some_config1=...
    some_config2=...
    
    Enabling/disabling automation

    When you first install the app, the default mode for the improve tool is:

    pr_commands = ["/improve --pr_code_suggestions.summarize=true", ...]
    

    meaning the improve tool will run automatically on every PR, with summarization enabled. Delete this line to disable the tool from running automatically.

    Utilizing extra instructions

    Extra instructions are very important for the improve tool, since they let you guide the model toward suggestions that are more relevant to the specific needs of the project.

    Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify relevant aspects that you want the model to focus on.

    Examples for extra instructions:

    [pr_code_suggestions] # /improve #
    extra_instructions="""
    Emphasize the following aspects:
    - Does the code logic cover relevant edge cases?
    - Is the code logic clear and easy to understand?
    - Is the code logic efficient?
    ...
    """
    

    Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.

    A note on code suggestions quality
    • While the current AI for code is getting better and better (GPT-4), it's not flawless. Not all of the suggestions will be perfect, and a user should not accept all of them automatically.
    • Suggestions are not meant to be simplistic. Instead, they aim to give deep feedback and raise questions, ideas and thoughts to the user, who can then use their judgment, experience, and understanding of the code base.
    • It is recommended to use the 'extra_instructions' field to guide the model toward suggestions that are more relevant to the specific needs of the project, or to use the custom suggestions 💎 tool.
    • With large PRs, the best quality will be obtained by using 'improve --extended' mode.
    More PR-Agent commands

    To invoke the PR-Agent, add a comment using one of the following commands:

    • /review: Request a review of your Pull Request.
    • /describe: Update the PR title and description based on the contents of the PR.
    • /improve [--extended]: Suggest code improvements. Extended mode provides higher-quality feedback.
    • /ask <QUESTION>: Ask a question about the PR.
    • /update_changelog: Update the changelog based on the PR's contents.
    • /add_docs 💎: Generate docstring for new components introduced in the PR.
    • /generate_labels 💎: Generate labels for the PR based on the PR's contents.
    • /analyze 💎: Automatically analyzes the PR, and presents changes walkthrough for each component.

    See the tools guide for more details.
    To list the possible configuration parameters, add a /config comment.

    See the improve usage page for a more comprehensive guide on using this tool.

    @CodiumAI-Agent

    PR Code Suggestions

    💡 [best practice]

    Use is to compare with None instead of ==. In Python, is checks for identity while == checks for equality. When comparing with None, it's recommended to use is or is not.

    File: tests/unittest/test_file_filter.py (17-17)

    Example code:

    Existing code:

    assert filter_ignored(files) == None, "Expected all files to be returned when no ignore patterns are given."

    Improved code:

    assert filter_ignored(files) is None, "Expected all files to be returned when no ignore patterns are given."

    Contributor

    PR Description updated to latest commit (1bd3415)

    @mrT23 mrT23 closed this Feb 24, 2024
    @mrT23 mrT23 deleted the mrT23-patch-2 branch April 3, 2024 05:21