
Conversation

@menshibin
Contributor

Description

Please briefly describe the code changes in this pull request.

Checklist

Please check the items in the checklist if applicable.

  • Is the user manual updated?
  • Are the test cases passed and automated?
  • Is there no significant decrease in test coverage?

@menshibin menshibin requested a review from a team as a code owner October 28, 2025 03:48
@gemini-code-assist
Contributor

Summary of Changes

Hello @menshibin, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on improving the reliability and robustness of a specific benchmark test case. The core change migrates external command execution from the loosely controlled os.system call to the subprocess module's run function. This transition enables better process management, including output capture, timeout enforcement, and comprehensive error handling. Furthermore, the test is now executed repeatedly to improve its ability to detect transient failures or performance regressions.

Highlights

  • Benchmark Execution Refactor: The method for executing the taosBenchmark command within the test case has been updated from os.system to subprocess.run.
  • Enhanced Error Handling: Robust error handling has been implemented for benchmark execution, including capturing output, setting a timeout of 300 seconds, and catching specific subprocess exceptions like CalledProcessError and TimeoutExpired.
  • Increased Test Repetition: The benchmark test now runs 99 times in a loop, significantly increasing the chance of surfacing intermittent issues.
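The pattern the summary describes (capture output, enforce a timeout, raise on a non-zero exit code) can be sketched generically. This is an illustrative stand-in, not the PR's actual code: run_benchmark, the echo command, and the timeout value are assumptions for the example.

```python
import subprocess

def run_benchmark(cmd, timeout=300):
    """Run an external command with output capture, a timeout,
    and an exception on non-zero exit, mirroring the pattern above."""
    try:
        result = subprocess.run(
            cmd,
            capture_output=True,  # collect stdout/stderr instead of inheriting them
            text=True,            # decode output as str rather than bytes
            timeout=timeout,      # raise TimeoutExpired if the command hangs
            check=True,           # raise CalledProcessError on non-zero exit
        )
        return result.stdout
    except subprocess.CalledProcessError as e:
        print(f"command failed with return code {e.returncode}: {e.stderr}")
        raise
    except subprocess.TimeoutExpired:
        print(f"command timed out after {timeout}s")
        raise

print(run_benchmark(["echo", "hello"]))
```

Unlike os.system, this form never passes the command through a shell, and the captured stderr is available when diagnosing a failed run.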
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize your Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the benchmark test to use subprocess.run instead of os.system, which is a good improvement for security and process control. It also adds a loop to run the test multiple times. My review includes suggestions to improve the test's effectiveness by making each loop iteration unique, to enhance code consistency by translating Chinese comments and log messages to English, and to reduce log noise.

Comment on lines +111 to +112
self.executeAndCheck('stmt2_tag_order_1', 'stmt2', 1)
self.executeAndCheck('stmt2_tag_order_2', 'stmt2', 0)

high

These calls use hardcoded database names inside a loop. This means the exact same test is run 99 times, which is redundant. To make this stability test more effective, use the loop variable i to create unique database names for each iteration. This will better test for state leakage between runs.

Suggested change
self.executeAndCheck('stmt2_tag_order_1', 'stmt2', 1)
self.executeAndCheck('stmt2_tag_order_2', 'stmt2', 0)
self.executeAndCheck(f'stmt2_tag_order_1_{i}', 'stmt2', 1)
self.executeAndCheck(f'stmt2_tag_order_2_{i}', 'stmt2', 0)

Comment on lines 69 to 89
        try:
            result = subprocess.run(
                cmd,
                capture_output=True,    # 捕获输出
                text=True,              # 文本模式
                timeout=300,            # 设置超时
                check=True              # 非零返回码时抛异常
            )
            tdLog.info(f"Benchmark执行成功: {result.stdout}")
            self.checkDataCorrect(dbname, 1000, 100)
            tdSql.execute(f"drop database {dbname};")
        except subprocess.CalledProcessError as e:
            tdLog.error(f"Benchmark执行失败,返回码: {e.returncode}")
            tdLog.error(f"错误输出: {e.stderr}")
            raise
        except subprocess.TimeoutExpired:
            tdLog.error("Benchmark执行超时")
            raise
        except Exception as e:
            tdLog.error(f"执行过程中发生异常: {str(e)}")
            raise

medium

The inline comments and log messages in this try...except block are in Chinese, while the rest of the codebase is in English. To maintain consistency and improve readability for all contributors, please translate these to English.

        try:
            result = subprocess.run(
                cmd,
                capture_output=True,    # Capture output
                text=True,              # Text mode
                timeout=300,            # Set timeout
                check=True              # Raise exception on non-zero return code
            )
            tdLog.info(f"Benchmark executed successfully: {result.stdout}")
            self.checkDataCorrect(dbname, 1000, 100)
            tdSql.execute(f"drop database {dbname};")
        except subprocess.CalledProcessError as e:
            tdLog.error(f"Benchmark execution failed with return code: {e.returncode}")
            tdLog.error(f"Error output: {e.stderr}")
            raise
        except subprocess.TimeoutExpired:
            tdLog.error("Benchmark execution timed out")
            raise
        except Exception as e:
            tdLog.error(f"An exception occurred during execution: {str(e)}")
            raise

for i in range(1, 100):
    self.executeAndCheck('stmt2_tag_order_1', 'stmt2', 1)
    self.executeAndCheck('stmt2_tag_order_2', 'stmt2', 0)
    tdLog.success("Successfully executed")

medium

Logging a success message on every iteration creates a lot of log noise. It would be better to move this log message outside the loop to be printed only once after all iterations complete successfully. This makes the test logs cleaner and easier to read.
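One way to apply this suggestion, combined with the earlier one about unique database names, can be sketched as follows. This is a minimal stand-alone illustration: execute_and_check and the calls list are hypothetical stand-ins for self.executeAndCheck and the surrounding test class, and the log call uses print in place of tdLog.

```python
calls = []

def execute_and_check(dbname, mode, flag):
    # Stand-in for self.executeAndCheck; records each invocation
    # so the example is observable without the real test framework.
    calls.append(dbname)

for i in range(1, 100):
    # Unique database names per iteration, as the earlier suggestion recommends.
    execute_and_check(f"stmt2_tag_order_1_{i}", "stmt2", 1)
    execute_and_check(f"stmt2_tag_order_2_{i}", "stmt2", 0)

# Success is logged once, after the loop, instead of 99 times inside it.
print(f"Successfully executed {len(calls)} benchmark runs")
```

With the message outside the loop, the log records a single summary line rather than one line per iteration, and a failure inside the loop still surfaces through the raised exception.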

