Failed blocks always after transfer to block storage #8317
LePau asked this question in Help and support
I am trying (but failing) to achieve data persistence for Mimir across pod restarts (i.e. regardless of whether uptime is 3 hours or 3 days).
I have tried both forcing the flush via `POST {{MIMIR_URL}}/ingester/flush?wait=true` and letting compaction happen automatically at around 3 hours; the results are the same either way.
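For completeness, the manual flush I issue is essentially this (`MIMIR_URL` is a placeholder for the Mimir HTTP endpoint in my cluster):

```bash
# Ask the ingester to flush its in-memory series to block storage and wait
# until the flush has completed. MIMIR_URL is a placeholder for my setup.
curl -s -X POST "${MIMIR_URL}/ingester/flush?wait=true"
```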
I have also tried both the defaults and custom values for `query_store_after`, `sync_interval`, and `cleanup_interval`; the end result is always the same.
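For reference, a minimal sketch of where those three settings sit in the config, shown with what I believe are the documented defaults (not necessarily the values I ended up using):

```yaml
# Sketch only: the three options I have been tuning, at what I believe are
# their default values.
querier:
  query_store_after: 12h      # only look at the store for samples older than this
blocks_storage:
  bucket_store:
    sync_interval: 15m        # how often store-gateways rescan the bucket
compactor:
  cleanup_interval: 15m       # how often blocks cleanup / bucket-index updates run
```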
I am running MinIO, Mimir, and Azurite within a minikube instance, and the MinIO/Azurite PVCs are backed by NFS (nfs-csi, ReadWriteMany).
Any ideas?
Here is the curl command and successful response before data has been sent to block storage:
Here is the curl command and response when it errors (after data has been sent to block storage):
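In both cases the query itself has the same shape; roughly the following, where the metric name, time range, step, and tenant ID are placeholders rather than my exact query:

```bash
# Range query against Mimir's Prometheus-compatible API (default /prometheus
# prefix). Metric name, time range, step, and tenant ID are placeholders.
curl -s -G \
  -H "X-Scope-OrgID: anonymous" \
  --data-urlencode 'query=up' \
  --data-urlencode 'start=2024-01-01T00:00:00Z' \
  --data-urlencode 'end=2024-01-01T06:00:00Z' \
  --data-urlencode 'step=60s' \
  "${MIMIR_URL}/prometheus/api/v1/query_range"
```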
Here is the error from the Mimir log (via kubectl):
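(Pulled with something along these lines; the deployment name and namespace are placeholders for my environment:)

```bash
# Grab recent Mimir logs from the pod; deployment name and namespace are
# placeholders.
kubectl logs -n monitoring deploy/mimir --since=1h
```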
Edit: I forgot to include something the first time around and lost the original block ID, but I reproduced the issue with a new failed block ID and am including the output here. This is the listing of the MinIO buckets (the PVC is mounted at /data). The point is to show that Mimir is indeed writing indexes and blocks into the MinIO storage; the error only comes later, when it tries to read them back out. I'm not sure if there is a way to validate the integrity of these files.
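The listing was taken directly on the MinIO PVC with something like this (namespace, pod, bucket, and tenant names are placeholders); as far as I can tell each block shows up under `<tenant>/<block-ulid>/` with a `meta.json`, an `index`, and a `chunks/` prefix:

```bash
# List the uploaded blocks directly on the MinIO PVC (mounted at /data inside
# the pod). Namespace, pod, bucket, and tenant names are placeholders.
kubectl exec -n monitoring minio-0 -- ls -lR /data/mimir-blocks/anonymous/
```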
Here is the Mimir config:
Here is the Kubernetes deployment YAML for Mimir: