Otter is now available as a cache option in Gubernator V3 #15
Purpose
The result of the #7 benchmark for the WorkerPool and Cache implementations showed a significant increase in performance when using Otter (https://maypok86.github.io/otter/) over a standard LRU cache implementation. This PR gives users the option of using either the Mutex or Otter cache implementations.
Implementation
- Removed the `WorkerPool` implementation as that showed the worst performance.
- Added `CacheManager`, which takes a similar role to the `WorkerPool` and provides an abstraction point for possible future management of cache types.
- Renamed `LRUCacheCollector` to `CacheCollector`.
- `algorithms.go` functions now lock a rate limit before modifying the `CacheItem`. This avoids race conditions created when using a lock-free cache like Otter (see the first sketch after this list).
- Changed expired-item handling in `algorithms.go`. This reduces the garbage collection burden by no longer dropping expired cache items from the cache. Now, if an item is expired, it remains in the cache until the normal cache sweep clears it, or it's accessed again. If it's accessed again, the existing item is updated and gets a new expiration time (sketched below).
- Added a `rateContext` struct which encapsulates all the state that must pass between several functions in `algorithms.go` (illustrated in the retry sketch below).
- `algorithms.go` functions now call themselves recursively in order to retry when a race condition occurs. Race conditions can occur when using lock-free data structures like Otter. When this happens, we simply retry the method by calling it recursively. This is a common pattern, often used by Prometheus metrics (see the retry sketch below).
- Benchmarks now use `b.RunParallel()` when performing concurrent benchmarks (example below).
- Added `TestHighContentionFromStore()` to trigger race conditions in `algorithms.go`, which also increases code coverage.
- Added `GUBER_CACHE_PROVIDER`, which defaults to `otter` (usage sketched below).
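
The sketches below illustrate the ideas above; they are minimal stand-ins, not the actual Gubernator code. First, the per-item locking: with a lock-free cache like Otter, the cache itself no longer serializes writers, so each item carries its own lock and the algorithm locks the item before changing it. The `CacheItem` fields and the `applyHit` helper here are assumptions made for the example.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// CacheItem is a simplified stand-in for a rate-limit cache entry.
// With a lock-free cache the cache no longer serializes access, so
// each item carries its own mutex and every algorithm function locks
// the item before mutating it.
type CacheItem struct {
	mu        sync.Mutex
	Key       string
	Remaining int64
	ExpireAt  time.Time
}

// applyHit locks the item, then applies the requested hits. The lock
// prevents two concurrent requests for the same key from both reading
// Remaining and then both decrementing it.
func applyHit(item *CacheItem, hits int64) bool {
	item.mu.Lock()
	defer item.mu.Unlock()

	if item.Remaining < hits {
		return false // over the limit
	}
	item.Remaining -= hits
	return true
}

func main() {
	item := &CacheItem{Key: "client-1", Remaining: 10, ExpireAt: time.Now().Add(time.Minute)}

	var wg sync.WaitGroup
	for i := 0; i < 20; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			applyHit(item, 1)
		}()
	}
	wg.Wait()
	fmt.Println("remaining:", item.Remaining) // never goes negative
}
```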
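
Expired-item handling, sketched under the same caveat: instead of deleting an expired entry and allocating a new one, the existing entry is reset in place and given a new expiration, while entries that are never touched again are left for the periodic sweep. The map-based store and the `getOrRefresh` name are hypothetical.

```go
package main

import (
	"fmt"
	"time"
)

// entry is a hypothetical cache entry with its own expiration time.
type entry struct {
	remaining int64
	expireAt  time.Time
}

// getOrRefresh returns the entry for key. An expired entry is not
// deleted and re-allocated; it is reset in place and given a new
// expiration, which keeps garbage collection pressure down.
func getOrRefresh(store map[string]*entry, key string, limit int64, ttl time.Duration) *entry {
	now := time.Now()
	e, ok := store[key]
	if !ok {
		e = &entry{remaining: limit}
		store[key] = e
	} else if now.After(e.expireAt) {
		// Expired: reuse the existing allocation instead of dropping it.
		e.remaining = limit
	}
	e.expireAt = now.Add(ttl)
	return e
}

func main() {
	store := map[string]*entry{}
	e := getOrRefresh(store, "client-1", 10, time.Millisecond)
	e.remaining -= 10

	time.Sleep(5 * time.Millisecond) // let the entry expire

	// Touching the key again refreshes the same entry in place.
	e2 := getOrRefresh(store, "client-1", 10, time.Minute)
	fmt.Println("same entry:", e == e2, "remaining:", e2.remaining)
}
```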
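
The `rateContext` struct and the retry-by-recursion pattern fit naturally into one sketch. The fields on `rateContext` and the `checkRateLimit` function are invented for illustration; what comes from the PR is the idea of bundling shared state into one struct and retrying a function by calling it recursively after losing a race.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// rateContext is a hypothetical version of the struct described above:
// it bundles the state several algorithm functions need so it can be
// passed through recursive retries without long parameter lists.
type rateContext struct {
	Key     string
	Hits    int64
	Limit   int64
	Attempt int
}

// checkRateLimit decrements a shared counter. If another goroutine
// changed the counter between our read and our compare-and-swap, we
// lost the race and simply call ourselves again with fresh state,
// which is the retry-by-recursion pattern the list mentions.
func checkRateLimit(ctx rateContext, remaining *atomic.Int64) bool {
	current := remaining.Load()
	if current < ctx.Hits {
		return false // over the limit
	}
	if !remaining.CompareAndSwap(current, current-ctx.Hits) {
		ctx.Attempt++
		return checkRateLimit(ctx, remaining)
	}
	return true
}

func main() {
	var remaining atomic.Int64
	remaining.Store(100)

	var allowed atomic.Int64
	var wg sync.WaitGroup
	for i := 0; i < 200; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			if checkRateLimit(rateContext{Key: "client-1", Hits: 1, Limit: 100}, &remaining) {
				allowed.Add(1)
			}
		}()
	}
	wg.Wait()
	fmt.Println("allowed:", allowed.Load()) // exactly 100, despite the contention
}
```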
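
`b.RunParallel()` is part of Go's standard `testing` package, so the benchmark shape below is real; the mutex-protected counter it exercises is just a placeholder workload.

```go
package cache_test

import (
	"sync"
	"testing"
)

// BenchmarkConcurrentHits shows the b.RunParallel pattern used for the
// concurrent benchmarks: the testing runtime spawns GOMAXPROCS
// goroutines and each one pulls iterations from pb until b.N is spent.
func BenchmarkConcurrentHits(b *testing.B) {
	var mu sync.Mutex
	remaining := int64(1 << 40)

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			mu.Lock()
			if remaining > 0 {
				remaining--
			}
			mu.Unlock()
		}
	})
}
```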
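
Finally, a rough sketch of how `GUBER_CACHE_PROVIDER` might be consumed. The variable name and the `otter` default come from this PR; the selection helper and any alternative values are assumptions, so check the project configuration docs for the authoritative list.

```go
package main

import (
	"fmt"
	"os"
)

// chooseCacheProvider mirrors the kind of selection the new
// GUBER_CACHE_PROVIDER setting enables: pick a cache implementation by
// name, defaulting to "otter". The exact set of accepted values is an
// assumption here.
func chooseCacheProvider() string {
	provider := os.Getenv("GUBER_CACHE_PROVIDER")
	if provider == "" {
		provider = "otter"
	}
	return provider
}

func main() {
	fmt.Println("cache provider:", chooseCacheProvider())
}
```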