Changing the containerd runtime configuration on Kubernetes looks weird #598
Comments
Further findings are below.
Then, should I always restart the container toolkit after a Kubernetes update?
@SeungminHeo yes, if …
Thank you for your explanation 👍 Our team will follow your instructions.
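As a hedged sketch of the restart discussed above (the namespace and daemonset name are assumptions; they vary by GPU Operator version and install options):

```sh
# Hedged sketch: re-trigger the toolkit so it re-writes the nvidia runtime
# entries into /etc/containerd/config.toml.
# The namespace and daemonset name below are assumptions; confirm them first:
kubectl get daemonsets -A | grep -i toolkit

# Then restart the toolkit daemonset (use the name/namespace found above):
kubectl -n gpu-operator rollout restart daemonset/nvidia-container-toolkit-daemonset
```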
1. Quick Debug Information
2. Issue or feature description
I deployed the GPU Operator on a Kubernetes cluster created by kubespray, with containerd as the container runtime.
I did not pre-install any driver or container toolkit; I deployed them with the GPU Operator.
What I want to know is whether updates to /etc/containerd/config.toml can affect running pods and newly provisioned pods.
After updating my Kubernetes settings (in particular, the docker registry options), all of the contents of /etc/containerd/config.toml were overridden: everything related to the nvidia container runtime disappeared. Fortunately, running pods were not affected, and newly provisioned pods were not affected either.
If I restart the nvidia container toolkit daemonset, the nvidia container runtime entries are written back, but I can't tell whether this is expected behavior. Is it normal and okay? Can I leave config.toml as it is, or should I restore it by restarting the container toolkit?
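For context, a hedged sketch of how one might check whether the toolkit-managed entries are still present on the node; the plugin section paths and the BinaryName location shown in the comments are typical values and may differ by containerd config version and GPU Operator release:

```sh
# Hedged sketch: inspect the node's containerd config for the entries the
# nvidia-container-toolkit daemonset typically writes (paths/names are
# illustrative and may differ by version).
grep -n -A3 'runtimes.nvidia' /etc/containerd/config.toml

# Typical output when the toolkit's changes are in place (illustrative only):
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia]
#     runtime_type = "io.containerd.runc.v2"
#     [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia.options]
#       BinaryName = "/usr/local/nvidia/toolkit/nvidia-container-runtime"
#
# The toolkit also usually sets default_runtime_name = "nvidia".
# If these lines are gone after a kubespray config rewrite, the toolkit's
# changes were lost and need to be re-applied.
```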
3. Steps to reproduce the issue
4. Information to attach (optional if deemed irrelevant)
Before
After (overridden)