Changing resource partition count via Helix Rest does not work reliably #2793
Comments
@wmorgan6796 Did you use the API to create/delete the resource? For partition placement, changing the ResourceConfig does not work; the partition count is defined in the IdealState. You should update the IdealState with the right partition number.
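For illustration, a minimal sketch of that kind of update using the Java admin API (the ZooKeeper address, cluster name, resource name, partition count, and replica count below are placeholders, not values from this issue):

```java
import org.apache.helix.HelixAdmin;
import org.apache.helix.manager.zk.ZKHelixAdmin;
import org.apache.helix.model.IdealState;

public class UpdatePartitionCount {
  public static void main(String[] args) {
    // Placeholder connection details; replace with your own.
    HelixAdmin admin = new ZKHelixAdmin("localhost:2181");
    String cluster = "MyCluster";
    String resource = "MyResource";

    // Read the current IdealState, change the partition count, and write it back.
    IdealState idealState = admin.getResourceIdealState(cluster, resource);
    idealState.setNumPartitions(16);
    admin.setResourceIdealState(cluster, resource, idealState);

    // Trigger a rebalance so the new partitions get assigned to participants.
    admin.rebalance(cluster, resource, 3);

    admin.close();
  }
}
```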
I changed both the ideal state and the resource config.
@wmorgan6796 Is this cluster in a normal state, meaning: 1) it has a live controller? There are multiple cases that can lead to the partition count not changing. This requires some understanding and debugging with your controller log.
Any update on this? @wmorgan6796
Sorry, I've been on leave for a bit and haven't had time to come back to this. But to answer the question: the cluster was working.
Interesting. I cannot reproduce it locally. One possible situation is that you delete and re-add the same resource almost at the same time, while the controller is busy handling other notifications. Helix uses selective update for the metadata: even if your number of partitions has changed, from ZooKeeper's point of view it is not a data change and will not trigger a refresh of the data. In this case, you can either create the resource with a different name (for example, add a version suffix to differentiate resources), or add logic to make sure participants finish dropping the old partitions before you create the new ones with the same name.
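A minimal sketch of the second option (drop the resource, wait for participants to drop its partitions, then recreate it with the new count) using the Java admin API; the connection details, names, partition count, replica count, and state model below are placeholders:

```java
import org.apache.helix.HelixAdmin;
import org.apache.helix.manager.zk.ZKHelixAdmin;
import org.apache.helix.model.ExternalView;

public class RecreateResourceWithFewerPartitions {
  public static void main(String[] args) throws InterruptedException {
    HelixAdmin admin = new ZKHelixAdmin("localhost:2181"); // placeholder ZK address
    String cluster = "MyCluster";
    String resource = "MyResource";

    // 1. Drop the existing resource.
    admin.dropResource(cluster, resource);

    // 2. Wait until the controller has removed the old partitions from the
    //    participants (the external view disappears or becomes empty).
    ExternalView ev;
    do {
      Thread.sleep(1000);
      ev = admin.getResourceExternalView(cluster, resource);
    } while (ev != null && !ev.getPartitionSet().isEmpty());

    // 3. Re-create the resource with the new (smaller) partition count
    //    and rebalance so the partitions are assigned.
    admin.addResource(cluster, resource, 8, "MasterSlave");
    admin.rebalance(cluster, resource, 3);

    admin.close();
  }
}
```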
Any update?
Feel free to reopen it if you have further updates.
Describe the bug
When updating the resource configuration and ideal state via the REST API, specifically the number of partitions for a resource, I find that it does not reliably create and assign the new partitions, requiring a full re-creation of the resource (with the same name as the outgoing resource). In addition, when scaling down the number of partitions by recreating the resource, the new resource correctly shows the scaled-down number of partitions, but Helix still attempts to assign the original, larger number of partitions even though the resource was completely recreated.
The Helix configuration for the cluster and resource is attached.
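One way to observe the mismatch described above is to compare the partition count in the IdealState with what the controller has actually assigned in the ExternalView. A minimal diagnostic sketch using the Java admin API (the ZooKeeper address, cluster name, and resource name are placeholders):

```java
import org.apache.helix.HelixAdmin;
import org.apache.helix.manager.zk.ZKHelixAdmin;
import org.apache.helix.model.ExternalView;
import org.apache.helix.model.IdealState;

public class ComparePartitionCounts {
  public static void main(String[] args) {
    HelixAdmin admin = new ZKHelixAdmin("localhost:2181"); // placeholder ZK address
    String cluster = "MyCluster";
    String resource = "MyResource";

    // Partition count the resource is configured with.
    IdealState idealState = admin.getResourceIdealState(cluster, resource);
    int configured = idealState.getNumPartitions();

    // Partitions the controller has actually assigned to participants.
    ExternalView ev = admin.getResourceExternalView(cluster, resource);
    int assigned = (ev == null) ? 0 : ev.getPartitionSet().size();

    System.out.printf("configured=%d assigned=%d%n", configured, assigned);
    admin.close();
  }
}
```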
To Reproduce
Expected behavior
When I edit the resource configuration, Helix should automatically handle removing the dropped partitions from participants and remove them entirely from the cluster. Also, if I recreate a resource that is exactly the same as one I just deleted, just with a smaller number of partitions, the cluster should assign the correct number of partitions, not the older, incorrect number.
Additional context
Configuration:
Helix-Config.txt