Redis 3.0 and Zabbix monitoring #9
To fix this, I had to perform a FLUSHDB on the cluster master. This is less than ideal. I'll look through the code to find which keys this Zabbix Python script is using and see if I can narrow it down.
After some additional use, the error also occurs when a cluster slave fails over and becomes the new master. The error then occurs on both slaves. Even after performing a CLUSTER FAILOVER back to the original master, the two slaves continue to produce this error while the master is fine. We are no longer able to monitor the clustered slaves, as no new data makes it back to the Zabbix server.
OK, I may have found the culprit. The `client.keys('*')` call appears to be the issue. In a clustered (sharded) state, keys can live on another node. I commented out the following lines:

```python
134 #keys = client.keys('*')
135 #llensum = 0
136 #for key in keys:
137 #    if client.type(key) == 'list':
138 #        llensum += client.llen(key)
139 #a.append(Metric(redis_hostname, 'redis[llenall]', llensum))
```

and I'm no longer getting these errors; data is making its way back to the Zabbix server.
We're using the recently released Redis 3.0 with clustering enabled.
I have Zabbix monitoring configured via cron, pushing data to our Zabbix server.
Every so often, a command the Zabbix Python script sends to the localhost Redis node errors out:
This happens on a cluster slave; the IP in the error is the cluster master.