docs/content/guides/huge-pages.md
When you enable Huge Pages in your Kube cluster, it is important to keep a few things in mind:

2. How many pages were preallocated? Are there any other applications or processes that will be using these pages?
3. Which nodes have Huge Pages enabled? Is it possible that more nodes will be added to the cluster? If so, will they also have Huge Pages enabled?
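To answer those questions for a given node, you can inspect `/proc/meminfo` on the node itself. This is a generic Linux check, not specific to this guide:

```shell
# On a node, list the kernel's Huge Pages facts:
#   HugePages_Total / HugePages_Free -- preallocated pages and how many are unused
#   Hugepagesize                     -- the system default huge page size
grep -i '^huge' /proc/meminfo
```

In a Kubernetes context, `kubectl describe node <node>` also reports huge pages under the node's `Capacity` and `Allocatable` sections.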
Once Huge Pages are enabled on one or more nodes in your Kubernetes cluster, you can tell Postgres to start using them by adding some configuration to your PostgresCluster spec:

{{% notice warning %}}
Warning: setting/changing this setting will cause your database to restart.
{{% /notice %}}
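As a sketch of what that configuration can look like, the fragment below requests huge pages in an instance's resource limits. The instance name and sizes are placeholders; `hugepages-2Mi` is the standard Kubernetes resource name for 2Mi pages, and this assumes your nodes preallocate pages of that size:

```yaml
spec:
  instances:
    - name: instance1
      resources:
        limits:
          # Illustrative values -- match these to what your nodes preallocate.
          hugepages-2Mi: 16Mi
          memory: 4Gi
```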
This is where it is important to know the size and the number of Huge Pages available.

Note: In the `instances.#.resources` spec, there are `limits` and `requests`. If a request value is not specified (like in the example above), it is presumed to be equal to the limit value. For Huge Pages, the request value must always be equal to the limit value; it is therefore perfectly acceptable to specify it only in the `limits` section.

Note: Postgres uses the system default size by default. This means that if there are multiple sizes of Huge Pages available on the node(s) and you attempt to use a size in your PostgresCluster that is not the system default, it will fail. To use a non-default size, you will need to tell Postgres the size to use with the `huge_page_size` variable, which can be set via dynamic configuration:

{{% notice warning %}}
Warning: setting/changing this parameter will cause your database to restart.
{{% /notice %}}
```yaml
patroni:
  dynamicConfiguration:
    postgresql:
      parameters:
        # Example size; use a size your nodes actually provide.
        huge_page_size: 1GB
```
The only dilemma that remains is for those whose PostgresClusters are not using Huge Pages but are running on nodes that have them enabled. There are a few options:

1. Use Huge Pages! You're already running your Postgres containers on nodes that have Huge Pages enabled, why not use them in Postgres?
2. Create nodes in your Kubernetes cluster that don't have Huge Pages enabled, and put your Postgres containers on those nodes.
3. If for some reason you cannot use Huge Pages in Postgres, but you must run your Postgres containers on nodes that have Huge Pages enabled, you can manually set the `shared_buffers` parameter back to a good setting using dynamic configuration:

{{% notice warning %}}
Warning: setting/changing this parameter will cause your database to restart.
{{% /notice %}}
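A sketch of that dynamic configuration, following the same `patroni` shape used earlier in this guide (the `shared_buffers` value is a placeholder; choose one appropriate for your workload and memory budget):

```yaml
patroni:
  dynamicConfiguration:
    postgresql:
      parameters:
        # Placeholder value; tune to your available memory.
        shared_buffers: 1GB
```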