As brought up in a recent Slack conversation, a recent uptick in 500 errors on the GitLab side seems to be manifesting as an increased number of job failures from tap-gitlab. As proposed in that thread, this could be an opportunity to improve network handling and retries within tap-gitlab.

The SDK refactor (#34) might relate to this as well. While the SDK does have built-in retry capability with backoff, we'd need to make sure the correct errors are retried, based on the error codes GitLab returns.
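To illustrate the kind of handling being proposed (not the SDK's actual implementation), here's a minimal sketch of retrying transient server errors with exponential backoff while letting non-retriable client errors fail fast. The function name, the status-code set, and the injectable `opener` parameter are all hypothetical choices for this example:

```python
import time
import urllib.error
import urllib.request

# Status codes worth retrying; other 4xx client errors are not, since
# retrying a 404 or 401 will never succeed.
RETRIABLE_STATUS_CODES = {429, 500, 502, 503, 504}

def get_with_retries(url, max_tries=5, base_delay=1.0, opener=urllib.request.urlopen):
    """GET `url`, retrying transient server errors with exponential backoff.

    `opener` is injectable for testing; by default it is urllib's urlopen.
    """
    for attempt in range(max_tries):
        try:
            return opener(url, timeout=30)
        except urllib.error.HTTPError as err:
            if err.code not in RETRIABLE_STATUS_CODES:
                raise  # non-retriable client error: fail immediately
        except urllib.error.URLError:
            pass  # transient network failure: also worth retrying
        if attempt < max_tries - 1:
            # 1s, 2s, 4s, 8s, ... between attempts
            time.sleep(base_delay * 2 ** attempt)
    raise RuntimeError(f"giving up on {url} after {max_tries} attempts")
```

The key decision for tap-gitlab would be exactly which codes land in the retriable set, so that GitLab's intermittent 500s get retried while genuine request errors surface right away.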
In GitLab by @aaronsteers on Mar 12, 2021, 17:28