Early caching unexpected behavior: Altered keys breaking other api calls #265

Open
sohang123 opened this issue Aug 20, 2024 · Discussed in #264 · 10 comments
Labels: bug (Something isn't working)

Discussed in #264

Originally posted by sohang123 August 20, 2024
We were originally using simple caching with both a Redis backend and an in-memory backend (prefix mem:). After switching to cache.early, the keys were altered, e.g. from "2024-08-19T19:26:04:xyz" to "2024-08-19T19:26:04:early:v2:xyz". Also, nothing was put into the in-memory cache; instead there was a duplicate in Redis. In Redis we saw both the "2024-08-19T19:26:04:early:v2:xyz" and "2024-08-19T19:26:04:early:v2:mem:xyz" keys. Does anyone have tips on how to work with early caching so that we keep the original keys and have both in-memory and Redis caching working?

sohang123 (Author) commented Aug 20, 2024

Here is how we set up the backends:

redis_backend = cache.setup(
    redis_url,
    client_name=None,
    suppress=False,
    client_side=False,
    middlewares=(add_prefix(f"{build_stamp}:"),),
)

mem_backend = cache.setup(
    "mem://?check_interval=10&size=100000",
    prefix="mem:",
    suppress=False,
    enable=enable_in_memory_cache,
)

This is how we are using early.

Replaced:

@cache(ttl="1h", key=f"mem:{key}")
@cache(ttl="1h", key=key)

With:

@cache.early(ttl="1h", early_ttl="40m", key=f"mem:{key}")
@cache.early(ttl="1h", early_ttl="40m", key=key)
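
For context, a minimal sketch of the key routing this thread describes. It uses placeholder URLs and key names, and assumes only the two behaviors reported above: cashews routes a key to the backend whose configured prefix it starts with, and @cache.early prepends its own "early:v2:" prefix to the key it stores.

# Illustrative sketch only -- placeholder URLs and keys, not the real app setup.
from cashews import cache

cache.setup("redis://localhost")        # default backend (Redis in this thread)
cache.setup("mem://", prefix="mem:")    # keys starting with "mem:" go to the in-memory backend

@cache(ttl="1h", key="mem:user:{user_id}")
async def get_user_simple(user_id: int) -> dict:
    # Stored under "mem:user:<user_id>", which starts with "mem:",
    # so it lands in the in-memory backend as intended.
    return {"id": user_id}

@cache.early(ttl="1h", early_ttl="40m", key="mem:user:{user_id}")
async def get_user_early(user_id: int) -> dict:
    # As reported above, the stored key becomes "early:v2:mem:user:<user_id>".
    # It no longer starts with "mem:", so it is routed to the default (Redis)
    # backend instead of the in-memory one.
    return {"id": user_id}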

sohang123 (Author) commented:

Update: Looks like if we do

mem_backend = cache.setup(
    "mem://?check_interval=10&size=100000",
    prefix="early:v2:mem:",
    suppress=False,
    enable=enable_in_memory_cache,
)

it works. Looks like this prefix appending means we have to remember to add this prefix everywhere. I don't know if this is intended behavior or not.

sohang123 changed the title from "Early caching unexpected behavior: Altered keys and skipping mem backend" to "Early caching unexpected behavior: Altered keys causing need to add prefix in all api calls" on Aug 20, 2024
sohang123 (Author) commented:

Seems like we can't use the set methods alongside early caching. Here is a test example:

foo_key = "foo:{id}"

async def set_foo_by_id():
    await cache.set_many(
        {("early:v2" + foo_key.format(id=id)): str(id) for id in [1, 2, 3]},
        expire="1h",
    )

@cache.early(ttl="1h", early_ttl="40m", key=foo_key, condition=NOT_NONE)
async def get_foo_by_id(id):
    return str(id)

await set_foo_by_id()
print(await get_foo_by_id(1))

Throws error:

/Users/sohanggandhi/fh-repos/fh-mono/prompt_server/.venv/lib/python3.11/site-packages/cashews/decorators/cache/early.py:88 in _wrap

ValueError: too many values to unpack (expected 2)

sohang123 changed the title from "Early caching unexpected behavior: Altered keys causing need to add prefix in all api calls" to "Early caching unexpected behavior: Altered keys breaking other api calls" on Aug 22, 2024
Krukov (Owner) commented Aug 22, 2024

Hello, thanks for reporting

Looks like this prefix appending means we have to remember to add this prefix everywhere. I don't know if this is intended behavior or not.

In general, this is how it was intended, but now I understand that it is not very "user friendly", so I will probably fix it somehow. I did this because values for the regular cache and for early caching are stored differently, which can lead to errors.

For now I can suggest removing this prefix by passing an empty string (prefix=""):

@cache.early(ttl="1h", early_ttl="40m", key=f"mem:{key}", prefix="")
@cache.early(ttl="1h", early_ttl="40m", key=key, prefix="")

Krukov self-assigned this on Aug 22, 2024
sohang123 (Author) commented:

@Krukov thanks for getting back to us. We already tried what you suggested, but even with prefix="" the "v2:" prefix is still added, so it doesn't fix the issue.

sohang123 (Author) commented:

@Krukov is it possible to remove the "v2:"?

Krukov (Owner) commented Sep 8, 2024

@sohang123 Nope, it is not possible for now - I added it for some reason. I am going to remove it soon.

Krukov added the bug (Something isn't working) label and removed the validation label on Sep 8, 2024
sohang123 (Author) commented:

@Krukov the other thing we need is to be able to set values for keys used with early caching. It doesn't work now because early caching stores some additional structure with the values.

Krukov (Owner) commented Sep 8, 2024

Yes, it is an internal part of early caching. We need to store somewhere the time when the cache entry will be considered prematurely invalid (the early ttl). The idea is that decorators provide the API as is; it is not expected that developers will mix the low-level API (like get, set, incr) with the high-level API (like decorators). So can you describe the case that you are trying to solve?
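
A rough illustration of why the two APIs clash, assuming only what is stated above: the early decorator keeps an early-expiry marker next to the value. The names and storage layout below are hypothetical, not cashews internals.

import time

# Hypothetical model of the mismatch -- not cashews internals.
store: dict[str, object] = {}

def early_style_set(key: str, value: object, early_ttl_seconds: float) -> None:
    # An early-refresh cache has to remember when to refresh ahead of the real
    # expiry, so it stores that marker alongside the value.
    store[key] = (time.time() + early_ttl_seconds, value)

def early_style_get(key: str) -> object:
    early_expire_at, value = store[key]   # expects exactly two parts
    return value

# A value written with the matching structure reads back fine:
early_style_set("foo:2", "some value", early_ttl_seconds=2400)
print(early_style_get("foo:2"))

# Writing a bare value the way a low-level set/set_many call does...
store["foo:1"] = "some value"
# ...and then reading it back through the early-style reader fails, which is
# the same shape of problem as the unpack error shown earlier in this thread.
try:
    early_style_get("foo:1")
except (ValueError, TypeError) as exc:
    print(f"mismatch: {exc}")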

sohang123 (Author) commented:

We have both an API to fetch a resource by key and one to fetch in batch. When we fetch a batch we want to store each individual value under its own key, so we loop over the batch fetched from the DB and call set.
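
A sketch of that batch-then-warm pattern with the plain @cache decorator, which is how it worked before the switch to early per the top of this thread. The fetch_* functions and backend URL are hypothetical stand-ins, and as discussed above the same low-level writes do not currently work with @cache.early.

from cashews import cache

cache.setup("mem://")                      # placeholder backend for the sketch

foo_key = "foo:{id}"

async def fetch_foo_from_db(id: int) -> str:
    return str(id)                         # stand-in for the single-row DB query

async def fetch_foos_from_db(ids: list[int]) -> dict[int, str]:
    return {id: str(id) for id in ids}     # stand-in for the batched DB query

@cache(ttl="1h", key=foo_key)
async def get_foo_by_id(id: int) -> str:
    return await fetch_foo_from_db(id)

async def get_foos_batch(ids: list[int]) -> dict[int, str]:
    rows = await fetch_foos_from_db(ids)   # one batched DB round trip
    # Warm the per-item entries so later get_foo_by_id(id) calls are cache hits.
    await cache.set_many(
        {foo_key.format(id=id): value for id, value in rows.items()},
        expire="1h",
    )
    return rows

# e.g. asyncio.run(get_foos_batch([1, 2, 3]))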
