Early caching unexpected behavior: Altered keys breaking other api calls #265
Comments
Here is how we set up the backends and how we are using `early`: we have a Redis backend plus an in-memory backend (prefix `mem:`), and we replaced the plain `@cache` decorator with `@cache.early`.
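Roughly, the setup looks like this (the backend URLs, key template, and function names below are placeholders rather than our exact code):

```python
from cashews import cache

# Two backends: Redis for everything, plus an in-memory backend that handles
# keys starting with "mem:".
cache.setup("redis://localhost:6379")
cache.setup("mem://", prefix="mem:")


# Before: plain caching.
@cache(ttl="1h", key="mem:{resource_id}")
async def get_resource(resource_id: str) -> dict:
    ...


# After: early (refresh-ahead) caching.
@cache.early(ttl="1h", early_ttl="40m", key="mem:{resource_id}")
async def get_resource_early(resource_id: str) -> dict:
    ...
```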
Update: It looks like it works if we include the new prefix in the key ourselves. This prefix appending means we have to remember to add the prefix everywhere. I don't know if this is intended behavior or not.
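Roughly, the workaround looks like this, assuming the `early:v2:` fragment from the keys we saw is what the decorator inserts (the key and backend below are placeholders):

```python
import asyncio
from cashews import cache

cache.setup("mem://")  # placeholder backend for this sketch


async def main() -> None:
    key = "xyz"  # placeholder for the application's own key
    # Every direct read of a value written by the @cache.early-decorated
    # function now has to remember to include the prefix the decorator added.
    value = await cache.get(f"early:v2:{key}")
    print(value)


asyncio.run(main())
```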
Seems like we can't use set methods alongside early caching. Here is a test example:

```python
foo_key = "foo:{id}"


@cache.early(ttl="1h", early_ttl="40m", key=foo_key, condition=NOT_NONE)
async def set_foo_by_id():
    ...


await set_foo_by_id()
```

Calling it throws an error.
Hello, thanks for reporting
In general, this is how it was intended, but now I understand that it is not very "user friendly", so I will probably fix it somehow. I did this because values for the regular cache and for the early cache are stored differently, which can lead to errors.

For now I can suggest removing this prefix by passing an empty string (`prefix=""`):

```python
@cache.early(ttl="1h", early_ttl="40m", key=f"mem:{key}", prefix="")
# or
@cache.early(ttl="1h", early_ttl="40m", key=key, prefix="")
```
@Krukov thanks for getting back to us. We already tried what you suggest, but even with `prefix=""` the `"v2:"` prefix is still added, so it doesn't fix the issue.
@Krukov is it possible to remove the "v2:"?
@sohang123 Nope, it is not possible for now - I've added it for some reason. I am going to remove it soon.
@Krukov the other thing we need is to be able to set keys with early caching. It doesn't work now because early caching stores some additional structure with the values.
Yes, it is an internal part of early caching: we need to store the time at which the cache entry will be considered prematurely invalid (the early TTL). By design, decorators provide the API as is; it is not expected that developers will mix the low-level API (like `get`, `set`, `incr`) with the high-level API (the decorators). Can you describe the case you are trying to solve?
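For example, roughly (the backend, keys, and function below are just placeholders to show the two levels):

```python
import asyncio
from cashews import cache

cache.setup("mem://")  # placeholder backend for this sketch


# High-level API: the decorator owns the key prefix and the stored structure,
# including the early-TTL bookkeeping, so its entries are not meant to be
# read or written through the low-level calls below.
@cache.early(ttl="1h", early_ttl="40m", key="foo:{id}")
async def get_foo(id: int) -> dict:
    return {"id": id}


async def main() -> None:
    # Low-level API: you control the exact key and the exact stored value.
    await cache.set("foo:1", {"id": 1}, expire=3600)
    print(await cache.get("foo:1"))
    await cache.incr("hits")

    # The decorated call stores its result under its own, prefixed key,
    # separate from the "foo:1" entry written above.
    print(await get_foo(1))


asyncio.run(main())
```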
We have both an API to fetch a resource by key and an API to fetch in batch. When we fetch a batch we want to store each individual value under its own key, so we loop over the batch fetched from the DB and call `set`.
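Roughly like this (the stub DB query, key template, and TTL below are placeholders rather than our exact code):

```python
import asyncio
from cashews import cache

cache.setup("mem://")  # placeholder backend for this sketch

foo_key = "foo:{id}"  # key template reused from the earlier test example


async def fetch_foos_from_db(ids: list[str]) -> dict[str, dict]:
    # Stand-in for the real database batch query.
    return {id_: {"id": id_} for id_ in ids}


async def get_foos_by_ids(ids: list[str]) -> dict[str, dict]:
    rows = await fetch_foos_from_db(ids)
    # Store each row under its own key so the fetch-by-key API can hit the cache.
    for id_, row in rows.items():
        await cache.set(foo_key.format(id=id_), row, expire=3600)
    return rows


asyncio.run(get_foos_by_ids(["a", "b"]))
```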
Discussed in #264
Originally posted by sohang123 August 20, 2024
We were originally using simple caching with both a Redis and a memory (prefix `mem:`) backend. After switching to `cache.early`:

1. The keys were altered, e.g. from "2024-08-19T19:26:04:xyz" to "2024-08-19T19:26:04:early:v2:xyz".
2. Nothing was put into the in-memory cache. Instead there was a duplicate in Redis: we saw both the "2024-08-19T19:26:04:early:v2:xyz" and "2024-08-19T19:26:04:early:v2:mem:xyz" keys.

Does anyone have tips on how to work with early caching so that we can keep the original keys and have both in-memory and Redis caching working?