Docs: fix default value of limits (#1072)
xiaogaozi authored Aug 5, 2024
1 parent ce4e96a commit 1750af0
Showing 2 changed files with 6 additions and 4 deletions.
docs/en/guide/resource-optimization.md: 3 changes (2 additions & 1 deletion)
@@ -9,10 +9,11 @@ Kubernetes allows much easier and efficient resource utilization, in JuiceFS CSI

Every application pod that uses a JuiceFS PV requires a running mount pod (reused among pods that use the same PV), so configuring proper resource definitions for the mount pod can effectively optimize resource usage. Read [Resource Management for Pods and Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers) to learn about pod resource requests and limits.
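
For reference, requests and limits are declared per container in the pod spec. Below is a minimal illustrative sketch of the stanza corresponding to the default values discussed in the next paragraph; the rest of the mount pod spec is omitted and this is not a file you need to create yourself:

```yaml
# Illustrative sketch only: the container-level resource stanza matching the
# defaults described below (1 CPU / 1GiB requests, 5 CPU / 5GiB limits).
resources:
  requests:
    cpu: "1"
    memory: 1Gi
  limits:
    cpu: "5"
    memory: 5Gi
```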

Under the default settings, JuiceFS mount pod resource `requests` is 1 CPU and 1GiB memory, and resource `limits` is 2 CPU and 5GiB memory. This might not be the ideal setup for you, since JuiceFS is used in so many different scenarios; you should make adjustments to fit the actual resource usage:
Under the default settings, JuiceFS mount pod resource `requests` is 1 CPU and 1GiB memory, and resource `limits` is 5 CPU and 5GiB memory. This might not be the ideal setup for you, since JuiceFS is used in so many different scenarios; you should make adjustments to fit the actual resource usage:

* If actual usage is lower, e.g. the mount pod uses only 0.1 CPU and 100MiB memory, then you should match the resource `requests` to the actual usage to avoid wasting resources, or worse, the mount pod failing to schedule due to overly large resource `requests`; this might also cause pod preemptions, which must absolutely be avoided in a production environment. For resource `limits`, you should also configure a reasonably larger value, so that the mount pod can handle temporary load increases.
* If actual usage is higher, e.g. 2 CPU and 2GiB memory, then even though the default `requests` allows the pod to be scheduled, things are risky because the mount pod is using more resources than it declares. This is called overcommitment, and constant overcommitment can cause all sorts of stability issues, such as CPU throttling and OOM. In this case, you should also adjust `requests` and `limits` according to the actual usage.
* If high performance is required in your actual scenario but `limits` is set too small, performance will suffer significantly (a configuration sketch follows this list).
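
As a sketch of how such adjustments are commonly made with the JuiceFS CSI Driver, mount pod resources can be overridden through StorageClass parameters. The `juicefs/mount-*` parameter keys, secret names, and values below are illustrative assumptions; verify the exact parameter names against the CSI Driver documentation before use:

```yaml
# Sketch, not a drop-in manifest: overriding mount pod resources via
# StorageClass parameters. Parameter keys and secret names are assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: juicefs-sc
provisioner: csi.juicefs.com
parameters:
  csi.storage.k8s.io/provisioner-secret-name: juicefs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-publish-secret-name: juicefs-secret
  csi.storage.k8s.io/node-publish-secret-namespace: default
  # Keep requests close to observed usage, and give limits reasonable headroom
  juicefs/mount-cpu-request: "500m"
  juicefs/mount-memory-request: "512Mi"
  juicefs/mount-cpu-limit: "2"
  juicefs/mount-memory-limit: "2Gi"
```

Whichever mechanism your deployment uses, the principle from the list above is the same: set `requests` near the observed usage and keep `limits` comfortably above it.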

If you already have [Kubernetes Metrics Server](https://github.com/kubernetes-sigs/metrics-server) installed, use commands like these to conveniently check actual resource usage for CSI Driver components:
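
The commands themselves sit in the collapsed portion of the file. Purely as an illustration of the pattern (the namespace and label selectors below are assumptions; adjust them to match your installation):

```shell
# Illustration only: namespace and label selectors are assumptions.
# CSI node plugin pods
kubectl top pod -n kube-system -l app=juicefs-csi-node
# Mount pods created by the CSI Driver
kubectl top pod -n kube-system -l app.kubernetes.io/name=juicefs-mount
```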

docs/zh_cn/guide/resource-optimization.md: 7 changes (4 additions & 3 deletions)
@@ -9,10 +9,11 @@ Kubernetes 的一大好处就是促进资源充分利用，在 JuiceFS CSI 驱

每一个使用着 JuiceFS PV 的容器,都对应着一个 Mount Pod(会智能匹配和复用),因此为 Mount Pod 配置合理的资源声明,将是最有效的优化资源占用的手段。关于配置资源请求(`request`)和约束(`limit`),可以详读 [Kubernetes 官方文档](https://kubernetes.io/zh-cn/docs/concepts/configuration/manage-resources-containers),此处不赘述。

JuiceFS Mount Pod 的 `requests` 默认为 1 CPU 和 1GiB Memory,`limits` 默认为 2 CPU 和 5GiB Memory。考虑到 JuiceFS 的使用场景多种多样,1C1G 的资源请求可能不一定适合你的集群,比方说:
JuiceFS Mount Pod 的 `requests` 默认为 1 CPU 和 1GiB Memory,`limits` 默认为 5 CPU 和 5GiB Memory。考虑到 JuiceFS 的使用场景多种多样,1C1G 的资源请求可能不一定适合你的集群,比方说:

* 实际场景下用量极低,比如 Mount Pod 只使用了 0.1 CPU、100MiB Memory,那么你应该尊重实际监控数据,将资源请求调整为 0.1 CPU,100MiB Memory,避免过大的 `requests` 造成资源闲置,甚至导致容器拒绝启动,或者抢占其他应用容器(Preemption)。对于 `limits`,你也可以根据实际监控数据,调整为一个大于 `requests` 的数值,允许突发瞬时的资源占用上升。
* 实际场景下用量更高，比方说 2 CPU、2GiB 内存，此时虽然 1C1G 的默认 `requests` 允许容器调度到节点上，但实际资源占用高于 `requests`，这便是「资源超售」（Overcommitment），严重的超售会影响集群稳定性，让节点出现各种资源挤占的问题，比如 CPU Throttle、OOM。因此这种情况下，你也应该根据实际用量，调整 `requests` 和 `limits`。
* 实际场景下用量极低,比如 Mount Pod 只使用了 0.1 CPU、100MiB Memory,那么你应该尊重实际监控数据,将资源请求调整为 0.1 CPU,100MiB Memory,避免过大的 `requests` 造成资源闲置,甚至导致容器拒绝启动,或者抢占其他应用容器(Preemption)。对于 `limits`,你也可以根据实际监控数据,调整为一个大于 `requests` 的数值,允许突发瞬时的资源占用上升;
* 实际场景下用量更高，比方说 2 CPU、2GiB 内存，此时虽然 1C1G 的默认 `requests` 允许容器调度到节点上，但实际资源占用高于 `requests`，这便是「资源超售」（Overcommitment），严重的超售会影响集群稳定性，让节点出现各种资源挤占的问题，比如 CPU Throttle、OOM。因此这种情况下，你也应该根据实际用量，调整 `requests` 和 `limits`；
* 如果实际场景中需要很高的性能,但是 `limits` 设置得太小,会对性能产生很大的负面影响。

如果你安装了 [Kubernetes Metrics Server](https://github.com/kubernetes-sigs/metrics-server),可以方便地用类似下方命令查看 CSI 驱动组件的实际资源占用:

