
Cache (v5.4.0 onwards)

Purpose

To provide results from cache where available.

Premise: 'Some proportion of requests may be similar'

The Polly CachePolicy is an implementation of the read-through cache pattern, also known as cache-aside. Providing results from cache where possible reduces overall call duration and can reduce network traffic.

Retrieving a result from an in-memory cache can eliminate a downstream call entirely. A distributed cache can be used to provide a shared cache across upstream nodes; to retrieve values from a network resource nearer than the underlying called system; or where caching requirements exceed what in-memory storage can hold.

Working with CacheProviders

Polly CachePolicy operates in conjunction with an ISyncCacheProvider or IAsyncCacheProvider implementation.

The following existing implementations are available via separate NuGet packages:

NuGet package | Documentation and GitHub repo | Description | Supported targets
Polly.Caching.MemoryCache | Polly.Caching.Memorycache | An in-memory cache implementation using the standard .NET Framework / .NET Core MemoryCache providers | .NET 4.0; .NET 4.5; .NET Standard 1.1 (supports .NET Core and Xamarin)
Polly.Caching.IDistributedCache | Polly.Caching.idistributedcache | Supports any implementation of Microsoft.Extensions.Caching.Distributed.IDistributedCache, including the Redis implementation and SQL-Server-based implementations that Microsoft provides | .NET Standard 1.1 (supports .NET Core and Xamarin)

New cache providers are also easy to implement against the ISyncCacheProvider or IAsyncCacheProvider interfaces.
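
As a minimal sketch, wiring up the Polly.Caching.MemoryCache provider might look as follows. This assumes the .NET Standard variant of that package exposes a MemoryCacheProvider wrapping Microsoft.Extensions.Caching.Memory.MemoryCache and implementing the cache provider interfaces; consult the provider package's own readme for the exact type and namespace names.

using System;
using Microsoft.Extensions.Caching.Memory;
using Polly;
using Polly.Caching;
using Polly.Caching.MemoryCache;

// Wrap an IMemoryCache instance in the Polly cache provider (type name assumed)...
IAsyncCacheProvider cacheProvider = new MemoryCacheProvider(new MemoryCache(new MemoryCacheOptions()));

// ...and use it to configure a cache policy with a five-minute ttl.
CachePolicy cachePolicy = Policy.CacheAsync(cacheProvider, TimeSpan.FromMinutes(5));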

Operation

  • The cache key to use is determined according to the supplied (or default) cache key strategy.
  • Where the cache holds a value under the corresponding key:
    • the delegate passed to .Execute(...) or similar is not called
    • the value from cache is returned instead.
  • Where the cache does not hold a result under the corresponding key:
    • the delegate passed to .Execute(...) or similar is called as usual
    • the retrieved value is put in the cache, using the configured time-to-live strategy
    • the retrieved value is returned.
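
For example, with a policy configured as in the Syntax section below (getFooAsync here is a hypothetical stand-in for the underlying call), the read-through behaviour looks like this:

// First execution for "FooKey": cache miss, so getFooAsync() is invoked and its result is cached for the configured ttl.
string foo = await cachePolicy.ExecuteAsync(async () => await getFooAsync(), new Context("FooKey"));

// Second execution for "FooKey" within the ttl: getFooAsync() is not invoked; the cached value is returned instead.
string fooAgain = await cachePolicy.ExecuteAsync(async () => await getFooAsync(), new Context("FooKey"));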

Syntax

CachePolicy cache = Policy
  .Cache(ISyncCacheProvider cacheProvider
           , TimeSpan ttl | ITtlStrategy ttlStrategy
          [, ICacheKeyStrategy cacheKeyStrategy]
              [, Action<Context, string, Exception> onCacheError]
                   |
              [, Action<Context, string> onCacheGet
               , Action<Context, string> onCacheMiss
               , Action<Context, string> onCachePut
               , Action<Context, string, Exception> onCacheGetError 
               , Action<Context, string, Exception> onCachePutError]
           );

CachePolicy cache = Policy
  .CacheAsync(IAsyncCacheProvider cacheProvider
           , TimeSpan ttl | ITtlStrategy ttlStrategy
          [, ICacheKeyStrategy cacheKeyStrategy]
              [, Action<Context, string, Exception> onCacheError]
                   |
              [, Action<Context, string> onCacheGet
               , Action<Context, string> onCacheMiss
               , Action<Context, string> onCachePut
               , Action<Context, string, Exception> onCacheGetError 
               , Action<Context, string, Exception> onCachePutError]
           );

Parameters

cacheProvider

cacheProvider: The underlying cache provider to use.

CachePolicy must be used in conjunction with an ISyncCacheProvider or IAsyncCacheProvider implementation: existing providers are available via Nuget (see above) or you may implement your own.

The same cacheProvider and CachePolicy instance may be used across multiple call sites.

ttl

TimeSpan ttl: Time-to-live (ttl) for the cache item, as a relative, non-sliding duration from the moment the item is put in the cache.

For example, if TimeSpan.FromMinutes(5) is passed, the cacheProvider should consider the item valid for 5 minutes.
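
For example (assuming a cacheProvider configured as described above):

// Each value placed in the cache by this policy is valid for 5 minutes from the moment it is cached.
CachePolicy cache = Policy.Cache(cacheProvider, TimeSpan.FromMinutes(5));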

ttlStrategy (alternative to ttl above)

ITtlStrategy ttlStrategy: offers ttl strategies beyond the simple TimeSpan ttl above.

RelativeTtl

RelativeTtl(TimeSpan ttl): equivalent to ttl above.

AbsoluteTtl

AbsoluteTtl(DateTimeOffset absoluteExpirationTime): indicates that the cacheProvider should make the cached item expire at the absolute time given.

SlidingTtl

SlidingTtl(TimeSpan slidingTtl): indicates that the cacheProvider should treat the cached item as having a sliding ttl of the specified timespan. For instance, if TimeSpan.FromMinutes(5) is passed, the cacheProvider should consider the item valid for a further 5 minutes, each time the cache item is touched.
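
Each of the above strategies can be passed where the ttl parameter would otherwise go. A minimal sketch, assuming the strategies are constructed directly as shown:

// Valid for 5 minutes from the moment of caching (equivalent to passing TimeSpan.FromMinutes(5) directly).
CachePolicy cacheRelative = Policy.Cache(cacheProvider, new RelativeTtl(TimeSpan.FromMinutes(5)));

// Valid until the given absolute time (here, midnight UTC tomorrow), regardless of when the item was cached.
CachePolicy cacheAbsolute = Policy.Cache(cacheProvider, new AbsoluteTtl(DateTimeOffset.UtcNow.Date.AddDays(1)));

// Valid for a further 5 minutes each time the cached item is touched.
CachePolicy cacheSliding = Policy.Cache(cacheProvider, new SlidingTtl(TimeSpan.FromMinutes(5)));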

ContextualTtl

ContextualTtl: specifies that the execution should take the ttl from a property on the Context passed to execution, context[ContextualTtl.TimeSpanKey].

This allows you to define a central cache policy that will use varying ttls in different call sites, by placing the desired ttl on Polly's execution context. For example:

context[ContextualTtl.TimeSpanKey] = new Ttl(TimeSpan.FromMinutes(5), slidingExpiration: true);
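
A fuller sketch, combining the configuration and a call site (this assumes ContextualTtl can be constructed directly; getFooAsync is a hypothetical stand-in for the underlying call):

// Configuration: one central policy whose ttl is taken from the execution Context.
CachePolicy cache = Policy.CacheAsync(cacheProvider, new ContextualTtl());

// Usage at a call site: this particular execution caches its result with a sliding 5-minute ttl.
Context policyExecutionContext = new Context("FooKey");
policyExecutionContext[ContextualTtl.TimeSpanKey] = new Ttl(TimeSpan.FromMinutes(5), slidingExpiration: true);
TResult result = await cache.ExecuteAsync(async () => await getFooAsync(), policyExecutionContext);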

Default cacheKeyStrategy (if omitted)

If no cacheKeyStrategy is specified, the cache key to use is taken as the ExecutionKey property on the execution Context, i.e. context.ExecutionKey. For example:

TResult result = await cache.ExecuteAsync(async () => await getFooAsync(), new Context("FooKey")); // "FooKey" is the cache key to use in this execution.

If context.ExecutionKey is not specified (no Context is passed to the execution, or context.ExecutionKey is not set), caching is bypassed, and the underlying delegate passed to .Execute(...) (or similar) is called as usual.

cacheKeyStrategy (optional)

Func<Context, string> cacheKeyStrategy: allows a custom strategy to be specified for generating the cache key used in the execution. For instance, to cache items obtained through the execution against some guid:

 // configuration
 CachePolicy cache = Policy.CacheAsync(cacheProvider, TimeSpan.FromMinutes(5), context => context.ExecutionKey + context["guid"]);

 // usage, elsewhere
 Guid guid = ... // from somewhere
 Context policyExecutionContext = new Context("GetResource-");
 policyExecutionContext["guid"] = guid.ToString();
 TResult result = await cache.ExecuteAsync(async () => await getResourceAsync(guid), policyExecutionContext); // "GetResource-SomeGuid" is the cache key used in this execution, if guid.ToString() == "SomeGuid".

ICacheKeyStrategy cacheKeyStrategy is available as a parameter in some overloads, for more complex cache key strategies.

Interacting with policy operation

onCacheGet

An optional onCacheGet delegate allows specific code to be executed (for example for logging), when a value is retrieved from cache.

onCacheMiss

An optional onCacheMiss delegate allows specific code to be executed (for example for logging), when a cache-miss occurs (a value is not found in the cache for the given key).

onCachePut

An optional onCachePut delegate allows specific code to be executed (for example for logging), after a value has been put to the cache.

onCacheError

An optional onCacheError delegate allows specific code to be executed (for example for logging), if any call to the underlying cacheProvider throws an exception.

The string passed is the cache key.

onCacheGetError

The alternative, optional onCacheGetError delegate is a more specific version of onCacheError, executed only if get calls to the underlying cacheProvider throw an exception.

onCachePutError

The alternative, optional onCachePutError delegate is a more specific version of onCacheError, executed only if put calls to the underlying cacheProvider throw an exception.
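
For example, these delegates can be used to log cache activity. A sketch following the syntax shown above (logger is an assumed ILogger-style logger; the exact overload available may differ):

CachePolicy cache = Policy.Cache(cacheProvider, TimeSpan.FromMinutes(5),
    (context, key) => logger.LogDebug("Cache hit for key {0}", key),                                // onCacheGet
    (context, key) => logger.LogDebug("Cache miss for key {0}", key),                               // onCacheMiss
    (context, key) => logger.LogDebug("Value placed in cache for key {0}", key),                    // onCachePut
    (context, key, exception) => logger.LogWarning(exception, "Cache get error for key {0}", key),  // onCacheGetError
    (context, key, exception) => logger.LogWarning(exception, "Cache put error for key {0}", key)); // onCachePutError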

Throws

No exceptions due to caching operations are thrown. If the underlying cacheProvider throws an exception during a cache operation:

  • the exception is passed to the relevant onCacheError, onCacheGetError or onCachePutError delegate, if configured.
  • the execution continues. For example, if the underlying cacheProvider throws while checking if the cache contained a value for the given key, the execution treats this as a cache-miss, and calls the delegate passed to .Execute(...).

Usage recommendations

Placement of cache policy within PolicyWrap

See guidance on ordering the available policy types in a wrap. CachePolicy should usually be placed outermost in a PolicyWrap, with only FallbackPolicy outside it.
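
For example (a sketch assuming fallback, retry and circuit-breaker policies have already been configured; policies are listed outermost first):

// A cache hit short-circuits the retry, circuit-breaker and underlying call; only the fallback sits outside the cache.
PolicyWrap policyWrap = Policy.Wrap(fallbackPolicy, cachePolicy, retryPolicy, breakerPolicy);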

void executions

If an execution returning void is placed through a CachePolicy, the caching operation is silently bypassed (there is no result to cache) rather than an exception being thrown. This allows a CachePolicy to be included in a PolicyWrap which is sometimes used for TResult-returning executions and sometimes for void-returning ones, without exceptions being thrown.

Thread safety and policy reuse

Thread safety

The internal operation of CachePolicy is thread-safe: multiple calls may safely be placed concurrently through a policy instance (assuming the configured cacheProvider implementation is also thread-safe).

Policy reuse

CachePolicy instances may be re-used across multiple call sites.

cacheProvider instances may be re-used across multiple CachePolicy instances and call sites.

When reusing policies, use differing ExecutionKeys to specify the cache key (if DefaultCacheKeyStrategy is used) and to distinguish different call-site usages within logging and metrics.
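
For example, a single CachePolicy instance can serve two call sites, with the ExecutionKey on each Context both selecting the cache key and identifying the call site (getFooAsync and getBarAsync are hypothetical):

// Same policy instance; different cache keys and call-site identities.
var foo = await cache.ExecuteAsync(async () => await getFooAsync(), new Context("GetFoo"));
var bar = await cache.ExecuteAsync(async () => await getBarAsync(), new Context("GetBar"));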

Implementing cache providers and serializers

Cache providers

See Implementing new cache providers

Creating and using serializers

See Implementing cache serializers
