Right now, crl-updater in continuous mode sets up each shard to update every N minutes. But we don't want all shards to update immediately when crl-updater starts; we want to spread the work across the update period. So each shard runs at an offset that is randomly determined at startup.
When crl-updater is restarted, some shards may be delayed by more than `updatePeriod`. If a shard was about to be updated when crl-updater shut down, and the next instance of crl-updater randomly schedules that shard late in the cycle, that shard could go almost `2 * updatePeriod` without being updated. If crl-updater is restarted multiple times, the delay could grow arbitrarily long.
This is somewhat mitigated by having two crl-updaters running in different datacenters, with different sets of randomized offsets. If one datacenter's crl-updater goes through a series of restarts but the other remains running, all shards should still be updated every `updatePeriod`.
We should consider alternate approaches for scheduling shard updates, like sorting the shards from "most stale" to "least stale" based on the `thisUpdate` field in the `crlShards` table. Note that this approach would wind up with both datacenters updating most shards at the same time, duplicating work.