
Forever growing index #219

Open
surkova opened this issue Sep 5, 2023 · 4 comments

Comments


surkova commented Sep 5, 2023

We have a use case where we have an endless stream of MinHashes that we continuously compare against the MinHashes we have seen before; if a MinHash is new, we add it to the index. We are using Redis as our backend, and from time to time we need to switch instances because they reach 32 GB and cannot grow any further (today this takes about four months for us, but we get more data day by day). For our use case it would be ideal if we could specify a last_seen key for a MinHash to implement an eviction policy, but as far as I understand this is not possible? Or is it?
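A minimal sketch of the streaming loop described above, with a toy in-memory index (and a simplistic MinHash built from SHA-1) standing in for the Redis-backed LSH; the 0.8 threshold, 64 permutations, and the `last_seen` bookkeeping are illustrative assumptions, not datasketch's API.

```python
import hashlib
import time

NUM_PERM = 64

def minhash(tokens):
    """Toy MinHash: for each of NUM_PERM hash functions, keep the minimum hash."""
    return tuple(
        min(int.from_bytes(hashlib.sha1(f"{i}:{t}".encode()).digest()[:8], "big")
            for t in tokens)
        for i in range(NUM_PERM))

def similarity(a, b):
    """Fraction of matching signature positions, an estimate of Jaccard similarity."""
    return sum(x == y for x, y in zip(a, b)) / NUM_PERM

class ToyIndex:
    """Stand-in for a MinHashLSH index, with a last_seen timestamp per key."""
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = {}  # key -> (signature, last_seen)

    def query(self, sig):
        now = time.time()
        hits = [k for k, (s, _) in self.entries.items()
                if similarity(sig, s) >= self.threshold]
        for k in hits:  # refresh last_seen on every match
            self.entries[k] = (self.entries[k][0], now)
        return hits

    def insert(self, key, sig):
        self.entries[key] = (sig, time.time())

    def evict_older_than(self, max_age):
        """The eviction policy the thread asks for: drop stale entries."""
        cutoff = time.time() - max_age
        for k in [k for k, (_, seen) in self.entries.items() if seen < cutoff]:
            del self.entries[k]

index = ToyIndex()
stream = [("a", {"x", "y", "z"}), ("b", {"x", "y", "z"}), ("c", {"p", "q"})]
for key, tokens in stream:
    sig = minhash(tokens)
    if not index.query(sig):  # unseen: add it to the index
        index.insert(key, sig)
```

Here "b" duplicates "a" and is not inserted, while a periodic call to `evict_older_than` would bound the index size.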


ekzhu commented Sep 5, 2023

That's a great scenario. The library currently doesn't handle eviction of keys. Is it possible for you to implement it around the library using the index's delete function? I know it would be great to use Redis's EXPIRE keys, but I'm not sure how to make that play well with the LSH index itself.


surkova commented Sep 6, 2023

Thanks for the swift reply. In order to delete something, we need to know when it was added to the index, so the only way I see is to alter the key used to add a MinHash to the LSH so that it contains a timestamp; as we work with the data, we would then be constantly deleting and re-inserting entries with updated keys. Not exactly an optimal or easy-to-work-with solution.
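The workaround described above could be sketched as follows. The `"name@ts"` key format is an illustrative choice, and `lsh` is assumed to be any index exposing `insert`/`remove` (datasketch's `MinHashLSH` has both); refreshing an entry means deleting it and re-inserting it under a fresh timestamped key.

```python
import time

def make_key(name, ts=None):
    """Embed a timestamp in the LSH key, e.g. 'doc1@1693900800'."""
    return f"{name}@{int(ts if ts is not None else time.time())}"

def split_key(key):
    name, _, ts = key.rpartition("@")
    return name, int(ts)

def refresh(lsh, minhash, old_key):
    """Move an entry to a fresh timestamped key so eviction sees it as recent."""
    name, _ = split_key(old_key)
    lsh.remove(old_key)
    lsh.insert(make_key(name), minhash)

def evict(lsh, keys, max_age):
    """Delete every entry whose embedded timestamp is older than max_age seconds."""
    cutoff = time.time() - max_age
    for key in list(keys):
        _, ts = split_key(key)
        if ts < cutoff:
            lsh.remove(key)
```

The cost the comment points out is real: every match pays a delete plus an insert just to update the timestamp.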


ekzhu commented Sep 6, 2023

What is the typical window, in terms of number of MinHashes? Is there a way to time-partition the data stream so you can expire whole partitions as they age?
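One way to realize the time-partitioning idea above: keep one index per time window, query the union of live windows, and drop the oldest window wholesale once it ages out. The daily granularity, the partition cap, and the dict standing in for an LSH index are all illustrative assumptions.

```python
from collections import OrderedDict
from datetime import date

class PartitionedIndex:
    """One index per day; expiring a day drops its whole partition at once."""
    def __init__(self, max_partitions=30, index_factory=dict):
        self.max_partitions = max_partitions
        self.index_factory = index_factory  # e.g. a MinHashLSH constructor
        self.partitions = OrderedDict()     # day -> index, oldest first

    def _partition_for(self, day):
        if day not in self.partitions:
            self.partitions[day] = self.index_factory()
            while len(self.partitions) > self.max_partitions:
                self.partitions.popitem(last=False)  # expire the oldest window
        return self.partitions[day]

    def insert(self, key, value, day=None):
        self._partition_for(day or date.today())[key] = value

    def query(self, key):
        # Search every live partition; with a real LSH this would be one
        # lsh.query(minhash) per partition, with the results merged.
        return [p[key] for p in self.partitions.values() if key in p]
```

Compared with per-key timestamps, this trades eviction precision for a much cheaper purge: dropping a partition is a single bulk delete.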

@dexterfichuk

The index is always increasing, but we would expire entries based on when we last see a value. We're using the LSH as a clustering mechanism right now: we do a search, and if anything comes back within the similarity threshold, the MinHash belongs to that cluster. If no values map to a cluster for x days, we would like to purge that cluster.

We're looking at building our own datastore on Redis sorted sets to keep tabs on when we last saw each cluster. It would be great if this could be coded into the existing structure, but looking at the code, I see the difficulties with it.
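A stdlib sketch of the sorted-set bookkeeping mentioned above. Against real Redis this would be `ZADD last_seen <ts> <cluster>` on every match and `ZRANGEBYSCORE last_seen -inf <cutoff>` to find stale clusters; a dict stands in here so the idea is runnable without a server, and all names are illustrative.

```python
import time

class LastSeenTracker:
    """Tracks the last-seen timestamp per cluster, mimicking a Redis sorted set."""
    def __init__(self):
        self.scores = {}  # cluster -> last-seen timestamp (the sorted-set score)

    def touch(self, cluster, ts=None):
        # Redis equivalent: ZADD last_seen <ts> <cluster>
        self.scores[cluster] = ts if ts is not None else time.time()

    def stale(self, max_age, now=None):
        # Redis equivalent: ZRANGEBYSCORE last_seen -inf <now - max_age>
        cutoff = (now if now is not None else time.time()) - max_age
        return sorted(c for c, ts in self.scores.items() if ts <= cutoff)

    def purge(self, clusters):
        # Redis equivalent: ZREM last_seen <cluster> ...; the caller would
        # also delete the cluster's keys from the LSH index itself.
        for c in clusters:
            self.scores.pop(c, None)
```

The tracker sits beside the LSH rather than inside it, which matches the comment's point: eviction can be layered around the library without changing its internals.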
