refactor(pingcap/tidb): refactor pipeline paths and post steps for ghpr_check2 job
#3885
base: main
Conversation
I have already done a preliminary review for you, and I hope to help you do a better job.
Summary
This PR refactors the ghpr_check2 Jenkins pipeline job and related scripts for the pingcap/tidb repo. It centralizes path definitions using variables like fullRepo, branchAlias, and jobName, renames pipeline and pod files to standard names (pipeline.groovy and pod.yaml), and replaces hardcoded workspace directory names with REFS.repo. It also adds conditional parsing of Bazel test results, sends cloudevents on test completion, archives Bazel logs, relocates environment variable declarations, removes a debug stage, and re-enables parallelsAlwaysFailFast in a next-gen pipeline. The changes improve maintainability and pipeline flexibility. Overall, the refactor is clean and well-structured, but some improvements can be made on error handling, naming clarity, and minor organization.
Critical Issues
- Potential silent failures in shell scripts (pipelines/pingcap/tidb/latest/ghpr_check2/pipeline.groovy, lines ~130-160):
  The Bazel test log parsing and cloudevents sending scripts use `|| true` to ignore errors, which might hide failures that should be investigated.
  Suggestion: Add logging or notifications when these commands fail, e.g.:

  ```groovy
  sh label: "Parse flaky test case results", script: './scripts/plugins/analyze-go-test-from-bazel-output.sh tidb/bazel-test.log || echo "Warning: Bazel flaky test parsing failed"'
  ```

  Or capture the exit code and log it for transparency (see the sketch after this list).
- Hardcoded FILE_SERVER_URL environment variable (pipeline.groovy, line ~16):
  The URL is hardcoded as `'http://fileserver.pingcap.net'`, which may cause issues if it needs to change per environment or region.
  Suggestion: Consider parameterizing this URL via Jenkins parameters or config maps to increase flexibility.
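A minimal sketch of the exit-code approach mentioned in the first issue above, assuming the stock Jenkins `sh` step (`returnStatus: true` is part of its standard API) and the `REFS.repo` variable already used elsewhere in the pipeline:

```groovy
script {
    // returnStatus: true returns the exit code instead of failing the step,
    // so the build continues but the failure is still visible in the log.
    def rc = sh(
        label: 'Parse flaky test case results',
        returnStatus: true,
        script: "./scripts/plugins/analyze-go-test-from-bazel-output.sh ${REFS.repo}/bazel-test.log"
    )
    if (rc != 0) {
        echo "Warning: Bazel flaky test parsing exited with code ${rc}"
    }
}
```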
Code Improvements
- Use consistent workspace directory referencing with `REFS.repo` everywhere:
  The PR updates many `dir('tidb')` calls to `dir(REFS.repo)`, which is good. Ensure this is consistent across all stages and scripts for future-proofing and to avoid confusion.
- Improve naming clarity for the branch alias and job name:
  Variables like `BRANCH_ALIAS` and `JOB_BASE_NAME` appear in pipeline.groovy, but their values and origins are not shown in the diff. Make sure these are clearly documented or set explicitly near the top of the script for readability, e.g.:

  ```groovy
  final BRANCH_ALIAS = 'latest'
  final JOB_BASE_NAME = 'ghpr_check2'
  ```
- Avoid repeated shell script construction by extracting it into separate functions or shared libraries:
  The archive and cloudevents steps involve complex shell scripts embedded as multi-line strings. Moving these to shared script files or Jenkins shared libraries would improve maintainability and testing.
- Add a timeout or error handling around curl for cloudevents:
  Currently the curl command has a 10s timeout but ignores errors due to `|| true`. Consider retry logic (sketched below), or at least log failures for diagnostic purposes.
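A sketch of the retry idea using the built-in `retry` step; the endpoint variable and payload file are placeholders, not taken from the PR:

```groovy
script {
    try {
        retry(3) {
            // --fail makes curl exit non-zero on HTTP errors so retry can react.
            sh label: 'Send cloudevents report', script: '''
                curl --fail --max-time 10 \
                    -X POST -H "Content-Type: application/cloudevents+json" \
                    -d @event.json "$CLOUDEVENTS_SERVER_URL"
            '''
        }
    } catch (err) {
        // Surface the failure in the build log instead of hiding it with `|| true`.
        echo "Warning: cloudevents report was not delivered: ${err}"
    }
}
```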
Best Practices
- Add comments explaining key new environment variables and the purpose of the Bazel-specific logic:
  For example, the condition `if ("$SCRIPT_AND_ARGS".contains(" bazel_"))` controls important test parsing and archiving behavior. Add a brief comment on why and when this condition is met.
- Add Jenkins pipeline unit or integration tests for the new pipeline.groovy logic:
  If not already present, tests validating that the workspace directory switching, Bazel log parsing, and cloudevents sending work as intended would help prevent regressions (see the sketch after this list).
- Follow Groovy/Jenkins style for string interpolation:
  In some places double quotes are used without interpolation, or single quotes with interpolation (e.g., '${VAR}'). Prefer double quotes with `${VAR}` for clarity and consistency.
- Document the removal of the Debug info stage clearly:
  The debug stage was removed, presumably to clean up the pipeline. It is helpful to keep a note in the Git history or in the pipeline describing why (e.g., no longer needed in prod).
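A hedged sketch of what such a test could look like with the third-party JenkinsPipelineUnit library (`DeclarativePipelineTest`, `runScript`, `helper.registerAllowedMethod`, and `assertJobStatusSuccess` are that library's API; the stubbed step and the variable binding are illustrative):

```groovy
import com.lesfurets.jenkins.unit.declarative.DeclarativePipelineTest
import org.junit.Before
import org.junit.Test

class GhprCheck2PipelineTest extends DeclarativePipelineTest {
    @Before
    void setUp() {
        super.setUp()
        // Stub the sh step so the pipeline can execute outside a real Jenkins.
        helper.registerAllowedMethod('sh', [Map]) { m -> println "sh: ${m.label}" }
    }

    @Test
    void pipelineRunsWithoutErrors() {
        // Illustrative value that exercises the Bazel-specific branch.
        binding.setVariable('SCRIPT_AND_ARGS', 'run.sh bazel_coverage')
        runScript('pipelines/pingcap/tidb/latest/ghpr_check2/pipeline.groovy')
        assertJobStatusSuccess()
    }
}
```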
Summary of recommended changes:
- Add logging/error handling around shell scripts that currently ignore failures.
- Parameterize hardcoded URLs like `FILE_SERVER_URL`.
- Ensure consistent variable initialization and usage (`BRANCH_ALIAS`, `JOB_BASE_NAME`).
- Extract complex shell commands to shared libs or scripts.
- Add comments explaining Bazel-specific conditional logic.
- Add or improve tests covering the refactored pipelines.
- Use consistent string interpolation style.
- Document the rationale for removed stages or relocated environment variables for future maintainers.
Addressing these points will improve reliability, maintainability, and clarity of the pipeline refactor.
Summary of Changes
Hello @wuhuizuo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly refactors the Jenkins ghpr_check2 pipeline configuration for the pingcap/tidb repository. The changes aim to enhance the pipeline's structure and maintainability by centralizing path definitions, standardizing file naming conventions, and dynamically managing workspace directories. Furthermore, it introduces advanced post-build capabilities for Bazel tests, including conditional result parsing, CloudEvents integration, and improved log archiving, while also streamlining the pipeline by removing a debug stage and re-enabling a crucial fail-fast option.
Code Review
This pull request primarily refactors the Jenkins pipeline for the ghpr_check2 job to improve maintainability and flexibility. The changes include centralizing path definitions using variables, renaming files for consistency, and using dynamic repository references instead of hardcoded values. Additionally, a debug stage has been removed, and post-build steps for Bazel tests have been added. The changes are solid and align with the stated goals. I've identified a couple of minor issues where a hardcoded repository name was missed during the refactoring and have provided suggestions to address them.
```groovy
        }
        script {
            if ("$SCRIPT_AND_ARGS".contains(" bazel_")) {
                sh label: "Parse flaky test case results", script: './scripts/plugins/analyze-go-test-from-bazel-output.sh tidb/bazel-test.log || true'
```
The path to bazel-test.log is hardcoded as `tidb/bazel-test.log`. This is inconsistent with the refactoring's goal of eliminating hardcoded repository names. Please use the `REFS.repo` variable to construct the path dynamically:

```groovy
sh label: "Parse flaky test case results", script: "./scripts/plugins/analyze-go-test-from-bazel-output.sh ${REFS.repo}/bazel-test.log || true"
```
Pull Request Overview
This PR refactors the ghpr_check2 job pipeline to improve maintainability by centralizing path configurations and standardizing file naming conventions. The changes introduce parameterized variables for repository, branch, and job names, and migrate from hardcoded 'tidb' references to dynamic REFS.repo usage.
Key changes:
- Introduced `fullRepo`, `branchAlias`, and `jobName` variables to centralize configuration
- Renamed pipeline and pod files to the standardized `pipeline.groovy` and `pod.yaml` names
- Added conditional Bazel test parsing, cloudevents reporting, and enhanced log archiving for Bazel tests
- Removed the debug stage and relocated the FILE_SERVER_URL environment variable
- Re-enabled `parallelsAlwaysFailFast()` in the next-gen pipeline
Reviewed Changes
Copilot reviewed 3 out of 4 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| pipelines/pingcap/tidb/latest/pull_integration_realcluster_test_next_gen/pipeline.groovy | Re-enabled parallelsAlwaysFailFast option |
| pipelines/pingcap/tidb/latest/ghpr_check2/pod.yaml | Added new standardized pod configuration with bazel support |
| pipelines/pingcap/tidb/latest/ghpr_check2/pipeline.groovy | Refactored to use dynamic paths, removed debug stage, relocated env vars, added conditional bazel test processing |
| jobs/pingcap/tidb/latest/ghpr_check2.groovy | Introduced parameterized variables and updated scriptPath to use standardized structure |
Comments suppressed due to low confidence (3)
pipelines/pingcap/tidb/latest/ghpr_check2/pipeline.groovy:126
- The hardcoded path `tidb/bazel-test.log` is inconsistent with the refactoring goal of using `REFS.repo` instead of the hardcoded 'tidb'. This should be `${REFS.repo}/bazel-test.log` to align with the refactoring approach used elsewhere in the file (lines 104, 120, 143, 169).

pipelines/pingcap/tidb/latest/ghpr_check2/pipeline.groovy:160
- The hardcoded path `tidb/bazel-test.log` is inconsistent with the refactoring goal of using `REFS.repo` instead of the hardcoded 'tidb'. This should be `${REFS.repo}/bazel-test.log` to align with the refactoring approach used elsewhere in the file (lines 104, 120, 143, 169).

pipelines/pingcap/tidb/latest/ghpr_check2/pipeline.groovy:158
- The shell variable reference `\$SCRIPT_AND_ARGS` will not be expanded correctly in the shell context because `SCRIPT_AND_ARGS` is a Jenkins matrix axis variable, not a shell environment variable. The escaped dollar sign `\$` means the shell will try to expand it, but the variable won't exist there. This should use Groovy interpolation instead: `"${SCRIPT_AND_ARGS}"` (without the backslash). Compare with line 145, where `$SCRIPT_AND_ARGS` is correctly used for Groovy interpolation.
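A small illustration of the difference described in the last point (the `echo` commands are illustrative):

```groovy
// Groovy interpolation: Jenkins substitutes the matrix axis value before the
// command ever reaches the shell.
sh script: "echo running: ${SCRIPT_AND_ARGS}"

// Escaped form: the shell receives the literal text $SCRIPT_AND_ARGS and looks
// for a shell environment variable of that name, which does not exist here.
sh script: "echo running: \$SCRIPT_AND_ARGS"
```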
I have already done a preliminary review for you, and I hope to help you do a better job.
Summary
This PR refactors the ghpr_check2 Jenkins pipeline job for the pingcap/tidb repo by centralizing path variables, renaming pipeline and pod files for consistency, and improving maintainability. It replaces hardcoded workspace directory names with dynamic references, adds conditional Bazel test result parsing and cloudevents integration, archives Bazel logs, removes a debug stage, relocates environment variables, and re-enables a parallelism option in a next-gen pipeline. Overall, the changes improve clarity and flexibility with minimal code complexity. The refactoring is cleanly done, but a few points can be improved for robustness, readability, and maintainability.
Critical Issues
- Potential incorrect `grep` usage in Tekton trigger (tekton/v1/triggers/triggers/_/github-pr-labeled-with-lgtm.yaml, line ~69)
  The change from:

  ```sh
  gh pr view "$pr_url" --json labels --jq '.labels | map(.name)' | grep -E '^lgtm$'
  ```

  to

  ```sh
  gh pr view "$pr_url" --json labels --jq '.labels[].name' | grep -E '^lgtm$'
  ```

  might cause false negatives or unexpected behavior when multiple labels exist, because the latter outputs labels one per line and `grep -E '^lgtm$'` only matches exact lines. This is probably intended, but it is safer to use `grep -w 'lgtm'`, or to do the filtering entirely in `jq` to avoid shell parsing errors.
  Suggestion: Replace it with a quiet word-boundary match:

  ```sh
  if gh pr view "$pr_url" --json labels --jq '.labels[].name' | grep -qw 'lgtm'; then
  ```

  or do the check directly in jq:

  ```sh
  if gh pr view "$pr_url" --json labels --jq 'any(.labels[].name; . == "lgtm")'; then
  ```
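One caveat with the jq-only variant: `--jq 'any(...)'` prints `true` or `false` but does not change the command's exit status, so the shell `if` still needs an explicit comparison. A hedged sketch:

```sh
# gh exits 0 whether or not the label is present, so compare the printed boolean.
has_lgtm=$(gh pr view "$pr_url" --json labels --jq 'any(.labels[].name; . == "lgtm")')
if [ "$has_lgtm" = "true" ]; then
    echo "PR has the lgtm label"
fi
```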
Code Improvements
- Use of a hardcoded namespace string (pipelines/pingcap/tidb/latest/ghpr_check2/pipeline.groovy, line ~6)
  The `K8S_NAMESPACE` is hardcoded as `"jenkins-tidb"`. Consider making this configurable, or at least document its origin and expected usage for easier future changes.
- Repeated parsing of the `JOB_SPEC` JSON (pipeline.groovy, multiple locations)
  `readJSON(text: params.JOB_SPEC)` is called once and assigned to `REFS`. However, some fields like `REFS.repo` and `REFS.org` are used repeatedly. If the job spec might be large or costly to parse, caching `REFS` is good, but watch out for potential null/missing fields or malformed JSON.
  Suggestion: Add error handling or validation around the parsed JSON to avoid runtime failures (see the sketch after this list).
- Shell script multiline strings and quoting (pipeline.groovy, post stages lines ~130-160)
  The multiline shell scripts use a mix of double quotes and embedded variables, which can be error-prone. Example:

  ```groovy
  sh label: "archive log", script: """
      str="$SCRIPT_AND_ARGS"
      logs_dir="logs_\${str// /_}"
      ...
  """
  ```

  Ensure variable expansions behave as expected inside the Groovy strings and shell scripts. Consider breaking complex scripts into separate files, or use better quoting/escaping to avoid subtle bugs.
- Hardcoded 'tidb' directory fallback in the checkout stage (pipeline.groovy, line ~54)
  The checkout stage uses `dir('tidb')` instead of the `dir(REFS.repo)` used elsewhere. If `REFS.repo` can differ, this leads to inconsistency.
  Suggestion: Change it to `dir(REFS.repo) { ... }` for consistency and correctness.
- Repeated `dir(REFS.repo)` blocks in post conditions (pipeline.groovy, lines 130-160)
  Multiple `dir(REFS.repo)` blocks in the post conditions could be combined into a single block to reduce duplication and improve readability.
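A hedged sketch of the validation suggested in the `JOB_SPEC` item above (`readJSON` comes from the pipeline-utility-steps plugin the script already uses; the required fields and error messages are illustrative):

```groovy
script {
    def REFS
    try {
        REFS = readJSON(text: params.JOB_SPEC)
    } catch (err) {
        error "JOB_SPEC is not valid JSON: ${err}"
    }
    // Fail fast with a clear message when fields used throughout the pipeline are absent.
    if (!REFS?.org || !REFS?.repo) {
        error "JOB_SPEC is missing required fields: org=${REFS?.org}, repo=${REFS?.repo}"
    }
}
```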
Best Practices
- Missing or minimal comments in pipeline.groovy
  The new conditional Bazel parsing and cloudevents sending block would benefit from comments explaining its purpose and when it triggers, e.g.: `// If Bazel tests are run, parse flaky test results and send cloudevents report`
- Test coverage for the new Bazel features and archiving
  There is no mention of adding or updating tests for the Bazel log parsing, cloudevents sending, or archiving logic. These are complex enough to warrant unit or integration tests where feasible.
- Naming conventions: consistent variable casing
  Variables like `K8S_NAMESPACE`, `GIT_FULL_REPO_NAME`, `BRANCH_ALIAS`, and `POD_TEMPLATE_FILE` are all uppercase constants, which is good. However, `REFS` is uppercase yet appears to be a JSON object, which would typically be lowercase or camelCase.
  Suggestion: Rename `REFS` to `refs` to make clear it's a data object, or add a comment explaining the naming choice.
- Removal of the debug stage without replacement
  The debug stage with network commands (dig etc.) was removed. It might be useful to keep it as a disabled stage or a documented snippet for troubleshooting (see the sketch after this list).
Re-enabling
parallelsAlwaysFailFastin next-gen pipeline (pipeline.groovy in pull_integration_realcluster_test_next_gen folder)
The comment is helpful, but consider adding a link to issue/PR or rationale for re-enabling for future reference.
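If the troubleshooting value is worth keeping, a parameter-gated variant of the removed debug stage is one option; a sketch (the parameter name and the dig target are illustrative):

```groovy
// Kept in the pipeline but skipped unless explicitly requested at build time.
stage('Debug info') {
    when { expression { params.ENABLE_DEBUG } }
    steps {
        sh label: 'network debug', script: 'dig fileserver.pingcap.net || true'
    }
}
```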
Summary of Suggested Changes

```diff
diff --git a/pipelines/pingcap/tidb/latest/ghpr_check2.groovy b/pipelines/pingcap/tidb/latest/ghpr_check2.groovy
@@ -50,7 +50,7 @@ pipeline {
-        dir('tidb') {
+        dir(REFS.repo) {
@@ -130,21 +130,25 @@ pipeline {
-        if ("$SCRIPT_AND_ARGS".contains(" bazel_")) {
+        if ("$SCRIPT_AND_ARGS".contains(" bazel_")) {
+            // Parse flaky test case results and send cloudevents report
-        sh label: "Parse flaky test case results", script: './scripts/plugins/analyze-go-test-from-bazel-output.sh tidb/bazel-test.log || true'
+        sh label: "Parse flaky test case results", script: "./scripts/plugins/analyze-go-test-from-bazel-output.sh ${REFS.repo}/bazel-test.log || true"
@@ -160,7 +164,10 @@ pipeline {
-        environment {
-            FILE_SERVER_URL = 'http://fileserver.pingcap.net'
-        }
+        environment {
+            // URL to the file server used for artifacts
+            FILE_SERVER_URL = 'http://fileserver.pingcap.net'
+        }
diff --git a/tekton/v1/triggers/triggers/_/github-pr-labeled-with-lgtm.yaml b/tekton/v1/triggers/triggers/_/github-pr-labeled-with-lgtm.yaml
@@ -67,7 +67,11 @@ spec:
-        if gh pr view "$pr_url" --json labels --jq '.labels[].name' | grep -E '^lgtm$'; then
+        if gh pr view "$pr_url" --json labels --jq 'any(.labels[].name; . == "lgtm")'; then
```

Overall, this PR is a solid refactor with some attention needed on minor consistency, robustness, and commenting. The Bazel-related additions are the most complex part and could benefit from clearer documentation and testing. The Tekton trigger fix should be tested thoroughly to avoid mislabeling.
`ghpr_check2` job

Introduce fullRepo/branchAlias/jobName and centralize scriptPath and pod path. Rename pipeline and pod files to pipeline.groovy and pod.yaml. Use REFS.repo for workspace dirs instead of hardcoded 'tidb'. Add conditional Bazel test parsing, cloudevents send and archiving of bazel logs. Remove debug stage, relocate FILE_SERVER_URL env, and re-enable parallelsAlwaysFailFast in next-gen pipeline.
3588543 to 3fb7102
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
I have already done a preliminary review for you, and I hope to help you do a better job.
Summary:
This PR refactors the Jenkins pipeline for the ghpr_check2 job in the pingcap/tidb repo by centralizing path variables, renaming pipeline and pod files for consistency, and improving environment and post-build steps. It replaces hardcoded repo/workspace strings with variables, removes a debug stage, relocates environment variables, adds conditional Bazel test result parsing and archiving, and enables parallelsAlwaysFailFast in the next-gen pipeline. Overall, the changes improve maintainability and clarity but have a few areas that could be further improved for robustness and clarity.
Critical Issues
- Potential silent failure on Bazel log parsing and event sending (pipelines/pingcap/tidb/latest/ghpr_check2/pipeline.groovy, lines ~126-140):
  The scripts for analyzing Bazel test logs and sending Cloudevents are run with `|| true`, which swallows any errors silently.
  Why: This could hide issues in test result parsing or event sending, making debugging harder.
  Suggestion: Add logging or error handling to capture failures without breaking the pipeline, e.g.:

  ```groovy
  try {
      sh './scripts/plugins/analyze-go-test-from-bazel-output.sh tidb/bazel-test.log'
  } catch (err) {
      echo "Warning: Bazel test result parsing failed: ${err}"
  }
  ```

  Similarly for the curl command, consider redirecting verbose output to a log file for inspection.
Code Improvements
- Use a consistent and robust way to check for Bazel tests (pipeline.groovy, lines ~126 and ~153):
  The condition `"$SCRIPT_AND_ARGS".contains(" bazel_")` depends on a substring match with a leading space, which might be brittle.
  Suggestion: Define a helper function or regex to detect Bazel-related commands more reliably. For example:

  ```groovy
  def isBazelTest(scriptArgs) {
      return scriptArgs =~ /\bbazel_/ // word-boundary regex
  }
  ```

  Then use `if (isBazelTest(SCRIPT_AND_ARGS)) { ... }`.
- Avoid repeating `dir(REFS.repo)` in multiple stages:
  The pipeline frequently uses `dir(REFS.repo)` in post steps.
  Suggestion: Refactor to use a single wrapper stage or step block scoped to `dir(REFS.repo)` to reduce duplication and the chance of typos.
- Hardcoded FILE_SERVER_URL in the environment block (pipeline.groovy, line ~18):
  The URL is hardcoded and duplicated (moved from an earlier environment block).
  Suggestion: Consider defining this as a global constant or parameter, or reading it from a config file or environment variable for easier maintenance (see the sketch after this list).
- Inconsistent naming of variables (e.g., `GIT_FULL_REPO_NAME` vs `fullRepo`):
  The variable names differ in casing and format between the job DSL and pipeline scripts.
  Suggestion: Use a consistent naming convention (e.g., camelCase or UPPER_SNAKE_CASE) across all scripts for clarity.
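A sketch of the parameter route for `FILE_SERVER_URL` (the default mirrors the current hardcoded value; the wiring is illustrative):

```groovy
parameters {
    // The default preserves current behavior; override it per environment or region.
    string(
        name: 'FILE_SERVER_URL',
        defaultValue: 'http://fileserver.pingcap.net',
        description: 'Base URL of the artifact file server'
    )
}
environment {
    FILE_SERVER_URL = "${params.FILE_SERVER_URL}"
}
```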
Best Practices
- Add comments explaining the new Bazel-related post steps:
  The new Bazel flaky test parsing and Cloudevents send logic would benefit from short comments explaining their purpose and expected inputs/outputs.
  Example: `// Parse Bazel flaky test results and send test-case-run-report event to Cloudevents server.`
Example:// Parse Bazel flaky test results and send test-case-run-report event to Cloudevents server. -
Add tests or validation for new pipeline paths:
Since pipeline and pod files were renamed and paths centralized, it's important to ensure these paths are valid and tested in CI or staging environments. -
- Document the purpose of the `parallelsAlwaysFailFast()` option in pull_integration_realcluster_test_next_gen:
  The comment says "disable for debug" but the option is now enabled. Clarify why enabling it is safe or desired.
- Standardize the naming of pipeline and pod files:
  The PR renames pod files from pod-ghpr_check2.yaml to pod.yaml inside a job-named directory. This is clearer, but it should be documented as a convention.
Minor Suggestions
- In the `archiveArtifacts` calls for Bazel logs, consider specifying `allowEmptyArchive: true` consistently to avoid build failures when no artifacts exist.
- In the `sh` scripts that parse logs and move files, consider adding `set -euo pipefail` or equivalent to catch errors early (see the sketch below).
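A minimal illustration of that hardening inside an `sh` step (the file-moving commands are placeholders):

```groovy
sh label: 'archive bazel logs', script: '''
    # Abort on command errors, unset variables, and pipeline failures instead of
    # silently continuing with half-processed logs.
    set -euo pipefail

    mkdir -p logs
    mv bazel-*.log logs/
'''
```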
Overall, this PR meaningfully improves the maintainability and extendability of the ghpr_check2 pipeline. Addressing the above points will harden the pipeline, improve observability, and ease future maintenance.
I have already done a preliminary review for you, and I hope to help you do a better job.
Summary:
This PR refactors the ghpr_check2 Jenkins pipeline configuration for the PingCAP TiDB project by centralizing path definitions, renaming pipeline and pod files for consistency, and improving parameterization with variables like fullRepo, branchAlias, and jobName. It also introduces conditional Bazel test result parsing, cloudevents notifications, and better artifact archiving. The debug stage is removed, and environment variable placement is improved. Finally, the parallelsAlwaysFailFast option is re-enabled in the next-gen pipeline. The changes are well-structured and improve maintainability and extensibility.
Critical Issues
- None found. The logic improvements and centralization seem sound with no obvious bugs or security issues.
Code Improvements
- Use of `REFS.repo` for the workspace directory (e.g., lines 114 and post sections in pipeline.groovy):
  - This is a positive change but relies on `REFS.repo` always being set correctly. Consider adding a validation or fallback to avoid failures if `REFS.repo` is empty or null.
  - Example:

    ```groovy
    def repoDir = REFS?.repo ?: 'tidb'
    dir(repoDir) {
        // ...
    }
    ```
- Hardcoded URLs in environment variables (`FILE_SERVER_URL`):
  - The URL `http://fileserver.pingcap.net` is hardcoded in the environment block. If this URL could change, consider externalizing it to a config or parameter to make it flexible without code changes.
- Script path construction in `ghpr_check2.groovy`:
  - Uses `"pipelines/${fullRepo}/${branchAlias}/${jobName}/pipeline.groovy"`, which improves maintainability.
  - Consider extracting a helper function or constant if other jobs follow the same pattern, to reduce duplication (see the sketch after this list).
- Bazel test parsing and cloudevents sending (lines 130-150 in pipeline.groovy):
  - The use of `"$SCRIPT_AND_ARGS".contains(" bazel_")` is a simple heuristic. Verify that it reliably captures all relevant Bazel tests, or else make the condition more robust.
  - The `curl` command lacks error handling besides `|| true`. Consider logging errors or timeouts for better observability.
- Archiving Bazel logs (lines 148-165):
  - The `archiveArtifacts` pattern uses globs like `'*/bazel-*.log'`. If artifacts are nested deeper or naming conventions vary, some logs might be missed. Confirm this matches all expected files.
- Removal of the Debug Stage:
  - The debug stage was removed. Ensure the debug info is not needed for troubleshooting. If it is needed, consider toggling it via a parameter or flag.
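A hedged sketch of the helper suggested in the script-path item above (the function name and usage are illustrative, not from the PR):

```groovy
// Derive the conventional script path once, so every job DSL file builds it the
// same way instead of repeating the string template.
def pipelinePath(String fullRepo, String branchAlias, String jobName) {
    return "pipelines/${fullRepo}/${branchAlias}/${jobName}/pipeline.groovy"
}

// Usage in a job definition (illustrative):
// scriptPath(pipelinePath('pingcap/tidb', 'latest', 'ghpr_check2'))
```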
Best Practices
- Documentation and Comments:
  - Some new variables like `fullRepo`, `branchAlias`, and `jobName` are introduced at the top of `ghpr_check2.groovy`, but their purpose is only partially explained. Adding comments explaining their role and usage would help future maintainers.
  - The conditional Bazel test processing could be explained with a brief comment to clarify intent.
- Testing Coverage:
  - There is no indication of tests (unit or integration) for the new conditional logic or path refactoring. Consider adding tests, or at least documenting manual validation steps, to verify the new pipeline behavior, especially for the Bazel-related features.
- Naming Conventions:
  - The renaming of pod files and pipeline scripts to `pod.yaml` and `pipeline.groovy` respectively is consistent and clear.
  - The environment variable `FILE_SERVER_URL` is appropriately named.
- Code Duplication:
  - Similar repo and path variables appear in multiple places (`ghpr_check2.groovy` and `pipeline.groovy`). Consider centralizing these definitions in a shared config or Groovy file to DRY up the code.
- Style:
  - The pipeline script uses consistent indentation and spacing.
  - The removal of commented-out code (e.g., the disabled debug stage) is good for cleanliness.
Summary of actionable suggestions:
- Add validation or a fallback for `REFS.repo` usage in `dir()` calls to prevent potential failures.
- Externalize hardcoded URLs like `FILE_SERVER_URL` into parameters or config files.
- Add comments explaining new variables and the conditional Bazel handling logic.
- Improve error handling and logging for the cloudevents `curl` command.
- Ensure artifact archive globs cover all relevant files; verify paths.
- Consider re-adding debug info under a toggle or parameter.
- Centralize path and repo constants to avoid duplication across multiple scripts.
- Add or document tests/validation steps for Bazel and path refactor features.