When we're adding new events into the linked chunk, we try to minimize storing back-pagination prev-batch tokens:

- when pushing to the tail, if all events have been duplicated, independently of ordering, we won't deduplicate/remove them, and we won't store the prev-batch token;
- when inserting a chunk of events from a recent back-pagination, if they've all been deduplicated, we won't deduplicate/remove them, and we won't store the prev-batch token.

Unfortunately, it's not as simple as that: we should also check the ordering of the events that have been deduplicated, in particular that they're ordered the same way as in the linked chunk representation. If that's not the case, then we should keep the prev-batch token and keep iterating until we get the same events in the same order.
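The ordering check could look something like the following minimal sketch. This is not the actual SDK code: `same_relative_order` and `should_store_prev_batch_token` are hypothetical helpers, and events are represented as plain event-ID strings for illustration. The idea is that the token can only be skipped when every new event is already in the linked chunk *and* appears there in the same relative order.

```rust
/// Returns `true` if every event in `deduplicated` exists in
/// `chunk_events` and they appear there in the same relative order.
fn same_relative_order(chunk_events: &[&str], deduplicated: &[&str]) -> bool {
    // Find the position of each deduplicated event in the linked chunk.
    let positions: Option<Vec<usize>> = deduplicated
        .iter()
        .map(|event| chunk_events.iter().position(|chunk_event| chunk_event == event))
        .collect();

    match positions {
        // All events were found: orders match iff positions are strictly increasing.
        Some(positions) => positions.windows(2).all(|pair| pair[0] < pair[1]),
        // At least one event is missing from the chunk, so they weren't all duplicated.
        None => false,
    }
}

/// Hypothetical decision helper: keep (store) the prev-batch token unless the
/// new events are all duplicates in the same relative order.
fn should_store_prev_batch_token(chunk_events: &[&str], new_events: &[&str]) -> bool {
    !same_relative_order(chunk_events, new_events)
}

fn main() {
    let chunk = ["$a", "$b", "$c", "$d"];

    // All duplicated, same relative order: the token can be skipped.
    assert!(!should_store_prev_batch_token(&chunk, &["$b", "$c"]));

    // All duplicated, but out of order: keep the token and keep paginating.
    assert!(should_store_prev_batch_token(&chunk, &["$c", "$b"]));

    println!("ok");
}
```

With this check, a back-pagination response containing the same events in a different order would still store its prev-batch token, so pagination continues until the orders agree.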
Related to #3280.
cc @Hywan