Bigtable server-side and client-side & Heroic-side behaviour analysis
I am a Heroic dev implementing shorter Bigtable timeouts,
who wants to know how Heroic will react to the resulting exceptions (in particular BigtableRetriesExhaustedException),
so that I can be confident that Heroic will not be negatively impacted by rolling out the shorter timeouts & retries.
Proposed Solution
Clone Adam’s fork of the java-bigtable client lib (see below) and use the integration test in this patch file to provoke a BigtableRetriesExhaustedException, then observe how Heroic responds to it.
Design & Implementation Notes
Note that the above test will need to be changed to better replicate a user query coming into the API, since we need to see the full impact of this exception rather than just its effect in an isolated test context (see the HTTP sketch after the emulator commands below).
Here are Adam’s instructions from Slack:
git clone https://github.com/AdamBSteele/google-cloud-go
cd google-cloud-go/bigtable/cmd/emulator
go run . --inject-latency="ReadRows:p50:100ms" --inject-latency="ReadRows:p99:5s"
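Before wiring anything into Heroic, it may be worth sanity-checking that the fork's latency injection actually surfaces as client-side errors. Below is a minimal Go sketch (using the same google-cloud-go client library the emulator ships with) that reads against the emulator with a deadline shorter than the injected 5s p99 latency. The project/instance/table names, the `localhost:9000` address, and the 500 ms deadline are placeholders, and this only exercises the Go client, not the bigtable-client-core path Heroic uses:

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"cloud.google.com/go/bigtable"
)

func main() {
	// The Go Bigtable client picks up the emulator address from this env var.
	// Adjust the port to whatever the emulator prints on startup.
	os.Setenv("BIGTABLE_EMULATOR_HOST", "localhost:9000")

	ctx := context.Background()

	// Create a table so ReadRows has something to hit (names are placeholders).
	admin, err := bigtable.NewAdminClient(ctx, "test-project", "test-instance")
	if err != nil {
		log.Fatalf("admin client: %v", err)
	}
	defer admin.Close()
	if err := admin.CreateTable(ctx, "metrics"); err != nil {
		log.Printf("create table (may already exist): %v", err)
	}

	client, err := bigtable.NewClient(ctx, "test-project", "test-instance")
	if err != nil {
		log.Fatalf("data client: %v", err)
	}
	defer client.Close()
	tbl := client.Open("metrics")

	// Deadline well below the injected 5s p99 latency, so a fraction of the
	// reads should fail with DeadlineExceeded.
	for i := 0; i < 20; i++ {
		readCtx, cancel := context.WithTimeout(ctx, 500*time.Millisecond)
		err := tbl.ReadRows(readCtx, bigtable.InfiniteRange(""), func(r bigtable.Row) bool { return true })
		cancel()
		log.Printf("attempt %d: err=%v", i, err)
	}
}
```

Seeing DeadlineExceeded here only confirms the emulator is injecting latency as configured; how Heroic’s Java client turns that into a BigtableRetriesExhaustedException still has to be observed through Heroic itself.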
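To address the note above about replicating a real user query, one option is to drive a query through Heroic’s public HTTP API while the emulator injects latency, instead of (or in addition to) the isolated integration test. This is a rough sketch only: the port, the /query/metrics path, and the request body are assumptions about the local Heroic deployment and its query schema, not verified values.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Placeholder query body — replace with a query that actually resolves to
	// series stored in the Bigtable-backed metric backend of the test setup.
	body := []byte(`{
	  "range": {"type": "relative", "unit": "HOURS", "value": 2},
	  "filter": ["key", "some-metric"]
	}`)

	// Generous client timeout so we observe Heroic's own error handling
	// rather than cutting the request off ourselves.
	client := &http.Client{Timeout: 60 * time.Second}
	resp, err := client.Post("http://localhost:8080/query/metrics", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatalf("query failed at the HTTP level: %v", err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	// The interesting part: when Bigtable retries are exhausted, does Heroic
	// return a 5xx, a partial result with errors in the body, or something else?
	fmt.Printf("status=%d\nbody=%s\n", resp.StatusCode, out)
}
```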
sming changed the title from "Discover how Heroic reacts to BigtableRetriesExhaustedException" to "Bigtable server-side and client-side & Heroic-side behaviour analysis" on Jan 21, 2021.
Adam says that 1.18.1 results in 65 * 2 retries and strongly recommends we upgrade to it from 1.12.1. It also has two other semi-critical bug fixes, so I think we should.