
prow: handle the case of re-triggering an expired GitHub workflow #192

Open
MadhavJivrajani opened this issue Jun 14, 2024 · 2 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/contributor-experience Categorizes an issue or PR as relevant to SIG Contributor Experience. sig/testing Categorizes an issue or PR as relevant to SIG Testing.

Comments

@MadhavJivrajani
Contributor

Original issue here: kubernetes/test-infra#31645


kubernetes/test-infra#31132 added the ability to re-trigger only the failed jobs in a GitHub workflow. However, the trigger plugin does not handle the case where the failed workflow run has expired (this came up in discussion here: kubernetes/test-infra#30054 (comment), cc @Priyankasaggu11929).

Workflows, or individual jobs within them, can only be re-run within min(30 days, log retention period) (or at least that is my understanding based on https://docs.github.com/en/actions/managing-workflow-runs/re-running-workflows-and-jobs). Considering our repos have a 90-day log retention period, it seems we can only ever re-run jobs within a 30-day window.

Regardless, if a repo uses prow to trigger GitHub workflows, /retest on expired workflow runs will yield no result from prow. We should probably make the bot post a comment indicating that one or more workflow runs have expired.

Step 1: Modify the trigger logic here to include a comment if at least one of the runs is expired:

if err := c.GitHubClient.TriggerFailedGitHubWorkflow(org, repo, runID); err != nil {

The status code returned by an API call to re-run a failed, expired workflow run is 403 (unfortunately, GitHub does not document this), along with an error message. There are two issues with trying to rely on this returned 403:

  1. A 403 can indicate the need to retry the request. The Prow GitHub client will attempt a retry if a status code of 403 is returned, but only if the error returned by sending the request is nil:

prow/pkg/github/client.go

Lines 946 to 958 in 924d31f

		// Retry 404s a couple times. Sometimes GitHub is inconsistent in
		// the sense that they send us an event such as "PR opened" but an
		// immediate request to GET the PR returns 404. We don't want to
		// retry more than a couple times in this case, because a 404 may
		// be caused by a bad API call and we'll just burn through API
		// tokens.
		c.logger.WithField("backoff", backoff.String()).Debug("Retrying 404")
		c.time.Sleep(backoff)
		backoff *= 2
	} else if resp.StatusCode == 403 {
		if resp.Header.Get("X-RateLimit-Remaining") == "0" {
			// If we are out of API tokens, sleep first. The X-RateLimit-Reset
			// header tells us the time at which we can request again.

  2. The error returned by executing a request to re-run an expired workflow run is non-nil. To verify this, I ran the following command:
❯ curl -L -X POST -s -w "%{http_code}\n" \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer MY_GITHUB_TOKEN" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  https://api.github.com/repos/kubernetes/utils/actions/runs/5577562429/rerun-failed-jobs

Note: you may not be able to run the exact same command above since you may not have write access on kubernetes/utils.

The output:

{
  "message": "Unable to retry this workflow run because it was created over a month ago",
  "documentation_url": "https://docs.github.com/rest/actions/workflow-runs#re-run-failed-jobs-from-a-workflow-run"
}
403

The same error is propagated into the Prow GitHub client as well.

As a result, the way the request is handled in the Prow client means that if an error is returned after all retries are exhausted, it returns a status code of 0, indicating that the client could not communicate with the server over HTTP (which is clearly not true). I've demonstrated this with a unit test: MadhavJivrajani/test-infra@38c5f90

The only way (for now) to know whether we are trying to re-run an expired workflow run is to parse the error message itself. We can probably do:

strings.Contains(err.Error(), "Unable to retry this workflow run because it was created over a month ago")

If the above condition is true, post a comment saying there are expired workflow runs.
It might also be worth including what can be done to mitigate this; one popular workaround I see online is to rebase or force-push an empty commit.

Step 2: Add tests

There aren't any tests for triggering GitHub workflows in general. If we do this, we need to ensure we have test coverage.

Follow ups

It's worth looking into why a status code of 0 is returned here, considering this is a valid use case, but that is a follow-up.
Also note that we will end up retrying and backing off every time we try to re-run an expired workflow run. Maybe there's a way to short-circuit that.
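One possible short-circuit, sketched under the assumption that the 403 branch of the retry loop can inspect the response body (`shouldRetry403` is a hypothetical predicate, not prow's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// shouldRetry403 decides whether a 403 is worth retrying: back off and retry
// when we are rate limited, but bail out immediately when the body shows the
// workflow run has expired, since no number of retries will make it succeed.
func shouldRetry403(rateLimitRemaining, body string) bool {
	if strings.Contains(body, "Unable to retry this workflow run") {
		return false // expired run: fail fast instead of burning backoff cycles
	}
	// Out of API tokens: sleep until X-RateLimit-Reset allows us to try again.
	return rateLimitRemaining == "0"
}

func main() {
	fmt.Println(shouldRetry403("0", ""))   // true: rate limited, retry later
	fmt.Println(shouldRetry403("42", `{"message": "Unable to retry this workflow run because it was created over a month ago"}`)) // false: expired, short-circuit
}
```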

/sig contributor-experience testing

@k8s-ci-robot k8s-ci-robot added sig/contributor-experience Categorizes an issue or PR as relevant to SIG Contributor Experience. sig/testing Categorizes an issue or PR as relevant to SIG Testing. labels Jun 14, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 12, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 12, 2024