rewrite service.sls to be able to use it with a cluster manager like Pacemaker #48
I could create a PR if you want.
Looks a bit convoluted to me, and doesn't seem to be taking systemd into account?
service.(running|dead|enabled): also works with systemd. Regarding "convoluted": assign the values to variables instead of that long salt['pillar.get'](...) call and it doesn't look that bad anymore :).
What do you think? I could first assign all the service-related settings to variables, and then it would look similar.
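A minimal sketch of that variables approach (the pillar keys and defaults here are hypothetical, not taken from the formula):

```sls
{%- set service_name = salt['pillar.get']('haproxy:service_name', 'haproxy') %}
{%- set service_enable = salt['pillar.get']('haproxy:enable', True) %}

haproxy.service:
  service.running:
    - name: {{ service_name }}
    - enable: {{ service_enable }}
```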
Also, service.dead doesn't support the reload option. I would fix that before creating a PR.
It's the whole file.replace deal I was targeting. Anyway, the best way would be:
Then service can be used to always reload, set autostart to the value of service:autostart, and start or kill the service depending on service:run. This way the terminology is somewhat more explicit, which is important since we are deviating from the normal process (which is: let the OS decide and start+enable by default). I think I'll patch map.jinja and defaults.yaml so this can be overridden in a more standardised way using the lookup sub-pillar.
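Something like this in defaults.yaml / the pillar (the key names follow the terminology above; the exact layout is an assumption):

```sls
# hypothetical lookup sub-pillar layout
haproxy:
  lookup:
    service:
      autostart: True   # enable/disable the service at boot
      run: True         # True: keep it running, False: kill it
```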
These are the two rendered SLS snippets which would make 99% of people happy. Non-cluster (already working):
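The original snippet didn't survive in this thread; as a sketch, the non-cluster rendering presumably looks something like:

```sls
haproxy.service:
  service.running:
    - name: haproxy
    - enable: True
    - reload: True
    - watch:
      - file: /etc/haproxy/haproxy.cfg
```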
Cluster (not possible yet):
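This snippet is missing as well; assuming Pacemaker owns start/stop, the cluster rendering would plausibly be something like:

```sls
# Salt only keeps the service out of the boot sequence and reloads it
# on config changes; starting/stopping is left to the cluster manager
haproxy.service:
  service.disabled:
    - name: haproxy

haproxy.reload:
  module.wait:
    - name: service.reload
    - m_name: haproxy
    - watch:
      - file: /etc/haproxy/haproxy.cfg
```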
The changes to the file.replace /etc/default/haproxy part were only done to make the regex more failsafe (in case somebody puts blanks/tabs before/after ENABLED).
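For example, a whitespace-tolerant pattern along these lines (a sketch, not the exact change):

```sls
/etc/default/haproxy:
  file.replace:
    - pattern: '^\s*ENABLED\s*=.*$'
    - repl: 'ENABLED=1'
```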
@johnkeates: Will you prepare something on that? Remark:
Good point, I guess it stays ;-)
See this commit and comment:
hoonetorg@cc242d7
Very briefly, the current situation (see the sketch below):
enable: True -> haproxy starts at boot and runs
enable: False -> haproxy doesn't start at boot and will be stopped on the next Salt run if it is running.
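In state terms that maps roughly to this (the pillar key name is hypothetical):

```sls
{%- if salt['pillar.get']('haproxy:enable', True) %}
haproxy:
  service.running:
    - enable: True
{%- else %}
haproxy:
  service.dead:
    - enable: False
{%- endif %}
```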
The cluster manager needs: