diff --git a/docs/cloud/features/04_automatic_scaling/01_overview.md b/docs/cloud/features/04_automatic_scaling/01_overview.md
new file mode 100644
index 00000000000..9ff93f26eca
--- /dev/null
+++ b/docs/cloud/features/04_automatic_scaling/01_overview.md
@@ -0,0 +1,50 @@
+---
+sidebar_position: 1
+sidebar_label: 'Overview'
+slug: /manage/scaling
+description: 'Overview of automatic scaling in ClickHouse Cloud'
+keywords: ['autoscaling', 'auto scaling', 'scaling', 'horizontal', 'vertical', 'bursts']
+title: 'Automatic scaling'
+doc_type: 'guide'
+---
+
+import ScalePlanFeatureBadge from '@theme/badges/ScalePlanFeatureBadge'
+
+# Automatic scaling
+
+Scaling is the ability to adjust available resources to meet client demands. Scale and Enterprise (with the standard 1:4 profile) tier services can be scaled horizontally, either programmatically through an API or by changing settings in the UI. These services can also be **autoscaled** vertically to meet application demands.
+
+
+
+:::note
+Scale and Enterprise tiers support both single and multi-replica services, whereas the Basic tier supports only single replica services. Single replica services are fixed in size and don't allow vertical or horizontal scaling. You can upgrade to the Scale or Enterprise tier to scale your services.
+:::
+
+## How scaling works in ClickHouse Cloud {#how-scaling-works-in-clickhouse-cloud}
+
+Currently, ClickHouse Cloud supports vertical autoscaling and manual horizontal scaling for Scale tier services.
+
+For Enterprise tier services, scaling works as follows:
+
+- **Horizontal scaling**: Manual horizontal scaling is available across all standard and custom profiles on the Enterprise tier.
+- **Vertical scaling**:
+  - Standard profiles (1:4) support vertical autoscaling.
+  - Custom profiles (`highMemory` and `highCPU`) don't support vertical autoscaling or manual vertical scaling. However, these services can be scaled vertically by contacting support.
+
+:::note
+Scaling in ClickHouse Cloud happens in what we call a ["Make Before Break" (MBB)](/cloud/features/mbb) approach.
+This adds one or more replicas of the new size before removing the old replicas, preventing any loss of capacity during scaling operations.
+By eliminating the gap between removing existing replicas and adding new ones, MBB creates a more seamless and less disruptive scaling process.
+It is especially beneficial in scale-up scenarios, where high resource utilization triggers the need for additional capacity, since removing replicas prematurely would only exacerbate the resource constraints.
+As part of this approach, we wait up to an hour to let any existing queries complete on the older replicas before removing them.
+This balances the need for existing queries to complete, while at the same time ensuring that older replicas don't linger around for too long.
+:::
+
+## Learn more {#learn-more}
+
+- [Vertical autoscaling](/cloud/features/autoscaling/vertical) — Automatic CPU and memory scaling based on usage
+- [Horizontal scaling](/cloud/features/autoscaling/horizontal) — Manual replica scaling via API or UI
+- [Make Before Break (MBB)](/cloud/features/mbb) — How ClickHouse Cloud performs seamless scaling operations
+- [Automatic idling](/cloud/features/autoscaling/idling) — Cost savings through automatic service suspension
+- [Scaling recommendations](/cloud/features/autoscaling/scaling-recommendations) — Understanding scaling recommendations
+- [Scheduled scaling](/cloud/features/autoscaling/scheduled-scaling) — Define exactly when your service should scale up or down, independent of real-time metrics
diff --git a/docs/cloud/features/04_automatic_scaling/02_vertical_autoscaling.md b/docs/cloud/features/04_automatic_scaling/02_vertical_autoscaling.md
new file mode 100644
index 00000000000..daaad4faebc
--- /dev/null
+++ b/docs/cloud/features/04_automatic_scaling/02_vertical_autoscaling.md
@@ -0,0 +1,68 @@
+---
+sidebar_position: 2
+sidebar_label: 'Vertical autoscaling'
+slug: /cloud/features/autoscaling/vertical
+description: 'Configuring vertical autoscaling in ClickHouse Cloud'
+keywords: ['autoscaling', 'auto scaling', 'vertical', 'scaling', 'CPU', 'memory']
+title: 'Vertical autoscaling'
+doc_type: 'guide'
+---
+
+import Image from '@theme/IdealImage';
+import auto_scaling from '@site/static/images/cloud/manage/AutoScaling.png';
+import ScalePlanFeatureBadge from '@theme/badges/ScalePlanFeatureBadge'
+
+## Vertical auto scaling {#vertical-auto-scaling}
+
+
+
+Scale and Enterprise tier services support autoscaling based on CPU and memory usage. Service usage is constantly monitored over a lookback window to make scaling decisions. If the usage rises above or falls below certain thresholds, the service is scaled appropriately to match the demand.
+
+## CPU-based scaling {#cpu-based-scaling}
+
+CPU scaling uses target tracking, which calculates the exact CPU allocation needed to keep utilization at a target level. A scaling action is only triggered if current CPU utilization falls outside a defined band:
+
+| Parameter | Value | Meaning |
+|---|---|---|
+| Target utilization | 53% | The utilization level ClickHouse aims to maintain |
+| High watermark | 75% | Triggers scale-up when CPU exceeds this threshold |
+| Low watermark | 37.5% | Triggers scale-down when CPU falls below this threshold |
+
+The recommender evaluates CPU utilization based on historical usage, and determines a recommended CPU size using this formula:
+```text
+recommended_cpu = max_cpu_usage / target_utilization
+```
+
+If CPU utilization is between 37.5% and 75% of allocated capacity, no scaling action is taken. Outside that band, the recommender computes the exact size needed to return to 53% utilization, and the service is scaled accordingly.
+
+### Example {#cpu-scaling-example}
+
+A service allocated 4 vCPU experiences a spike to 3.8 vCPU usage (~95% utilization), crossing the 75% high watermark. The recommender calculates `3.8 / 0.53 ≈ 7.2 vCPU` and rounds up to the next available size (8 vCPU). Once load subsides and usage drops below 37.5% of the new allocation (3 vCPU), the recommender scales back down proportionally.
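+
+The band check and the target-tracking formula can be sketched in a few lines. This is an illustrative sketch of the logic described above, not ClickHouse Cloud's implementation; the constants are taken from the table:
+
+```python
+# Target-tracking CPU recommendation (illustrative sketch).
+TARGET_UTILIZATION = 0.53
+HIGH_WATERMARK = 0.75
+LOW_WATERMARK = 0.375
+
+def recommend_cpu(allocated_vcpu: float, max_cpu_usage: float) -> float:
+    """Return a recommended vCPU allocation, or the current allocation
+    if utilization stays inside the 37.5%-75% band."""
+    utilization = max_cpu_usage / allocated_vcpu
+    if LOW_WATERMARK <= utilization <= HIGH_WATERMARK:
+        return allocated_vcpu                      # inside the band: no action
+    return max_cpu_usage / TARGET_UTILIZATION      # land back at ~53%
+
+# Worked example from above: 4 vCPU allocated, spike to 3.8 vCPU (~95%).
+print(recommend_cpu(4, 3.8))  # ~7.2, rounded up to the next size (8 vCPU)
+```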
+
+## Memory-based scaling {#memory-based-scaling}
+
+Memory-based auto-scaling scales the cluster to 125% of the maximum memory usage, or up to 150% if OOM (out of memory) errors are encountered.
+
+## Scaling decision {#scaling-decision}
+
+The larger of the CPU or memory recommendation is picked, and CPU and memory allocated to the service are scaled in lockstep increments of 1 CPU and 4 GiB memory.
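+
+One way to picture how the two recommendations combine is the sketch below. The 1 vCPU : 4 GiB lockstep and the 125%/150% memory factors come from the sections above; rounding each recommendation up to a whole increment is an assumption for illustration:
+
+```python
+# Illustrative sketch of combining CPU and memory recommendations
+# into lockstep increments of 1 vCPU and 4 GiB memory.
+import math
+
+def memory_recommendation(max_mem_gib: float, saw_oom: bool) -> float:
+    # 125% of max memory usage, or 150% if OOM errors were encountered.
+    return max_mem_gib * (1.5 if saw_oom else 1.25)
+
+def final_size(cpu_rec: float, mem_rec_gib: float) -> tuple[int, int]:
+    # Express both recommendations in increments and take the larger.
+    increments = max(math.ceil(cpu_rec), math.ceil(mem_rec_gib / 4))
+    return increments, 4 * increments   # (vCPU, GiB)
+
+print(final_size(7.2, 24.0))  # (8, 32): the CPU recommendation dominates
+```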
+
+## Configuring vertical auto scaling {#configuring-vertical-auto-scaling}
+
+The scaling of ClickHouse Cloud Scale or Enterprise services can be adjusted by organization members with the **Admin** role. To configure vertical autoscaling, go to the **Settings** tab for your service and adjust the minimum and maximum memory, along with CPU settings as shown below.
+
+:::note
+Single replica services can't be scaled, regardless of tier.
+:::
+
+
+
+Set the **Maximum memory** for your replicas at a higher value than the **Minimum memory**. The service will then scale as needed within those bounds. These settings are also available during the initial service creation flow. Each replica in your service will be allocated the same memory and CPU resources.
+
+You can also set both values the same, essentially "pinning" the service to a specific configuration. Doing so immediately forces scaling to the size you picked.
+
+Note that this disables autoscaling on the cluster, so your service won't be protected against increases in CPU or memory usage beyond these settings.
+
+:::note
+For Enterprise tier services, standard 1:4 profiles support vertical autoscaling. Custom profiles don't support vertical autoscaling or manual vertical scaling. However, these services can be scaled vertically by contacting support.
+:::
diff --git a/docs/cloud/features/04_automatic_scaling/03_horizontal_scaling.md b/docs/cloud/features/04_automatic_scaling/03_horizontal_scaling.md
new file mode 100644
index 00000000000..ed0fcf65499
--- /dev/null
+++ b/docs/cloud/features/04_automatic_scaling/03_horizontal_scaling.md
@@ -0,0 +1,77 @@
+---
+sidebar_position: 3
+sidebar_label: 'Horizontal scaling'
+slug: /cloud/features/autoscaling/horizontal
+description: 'Manual horizontal scaling in ClickHouse Cloud'
+keywords: ['horizontal scaling', 'scaling', 'replicas', 'manual scaling', 'spikes', 'bursts']
+title: 'Horizontal scaling'
+doc_type: 'guide'
+---
+
+import Image from '@theme/IdealImage';
+import scaling_patch_request from '@site/static/images/cloud/manage/scaling-patch-request.png';
+import scaling_patch_response from '@site/static/images/cloud/manage/scaling-patch-response.png';
+import scaling_configure from '@site/static/images/cloud/manage/scaling-configure.png';
+import scaling_memory_allocation from '@site/static/images/cloud/manage/scaling-memory-allocation.png';
+import ScalePlanFeatureBadge from '@theme/badges/ScalePlanFeatureBadge'
+
+## Manual horizontal scaling {#manual-horizontal-scaling}
+
+
+
+You can use the ClickHouse Cloud [public APIs](https://clickhouse.com/docs/cloud/manage/api/swagger#/paths/~1v1~1organizations~1:organizationId~1services~1:serviceId~1scaling/patch) to scale your service by updating its scaling settings, or adjust the number of replicas from the cloud console.
+
+**Scale** and **Enterprise** tiers also support single-replica services. Services, once scaled out, can be scaled back in to a minimum of a single replica. Note that single replica services have reduced availability and aren't recommended for production usage.
+
+:::note
+Services can scale horizontally to a maximum of 20 replicas. If you need additional replicas, please contact our support team.
+:::
+
+### Horizontal scaling via API {#horizontal-scaling-via-api}
+
+To horizontally scale a cluster, issue a `PATCH` request via the API to adjust the number of replicas. The screenshots below show an API call to scale out a `3` replica cluster to `6` replicas, and the corresponding response.
+
+
+
+*`PATCH` request to update `numReplicas`*
+
+
+
+*Response from `PATCH` request*
+
+If you issue a new scaling request (or multiple requests in succession) while one is already in progress, the scaling service ignores the intermediate states and converges on the final replica count.
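+
+The request shown in the screenshots can be sketched with the Python standard library. The endpoint path follows the API reference linked above; the organization ID, service ID, and the `Authorization` value are placeholders you must fill in with your own credentials:
+
+```python
+# Illustrative sketch of the scaling PATCH request (placeholders throughout).
+import json
+import urllib.request
+
+ORG_ID = "your-organization-id"   # placeholder
+SERVICE_ID = "your-service-id"    # placeholder
+
+def build_scaling_request(num_replicas: int) -> urllib.request.Request:
+    url = (f"https://api.clickhouse.cloud/v1/organizations/{ORG_ID}"
+           f"/services/{SERVICE_ID}/scaling")
+    body = json.dumps({"numReplicas": num_replicas}).encode()
+    request = urllib.request.Request(url, data=body, method="PATCH")
+    request.add_header("Content-Type", "application/json")
+    request.add_header("Authorization", "Basic <base64(key_id:key_secret)>")
+    return request
+
+request = build_scaling_request(6)  # scale out from 3 to 6 replicas
+# urllib.request.urlopen(request) would send it; omitted here.
+```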
+
+### Horizontal scaling via UI {#horizontal-scaling-via-ui}
+
+To scale a service horizontally from the UI, you can adjust the number of replicas for the service on the **Settings** page.
+
+
+
+*Service scaling settings from the ClickHouse Cloud console*
+
+Once the service has scaled, the metrics dashboard in the cloud console should show the correct allocation to the service. The screenshot below shows the cluster having scaled to total memory of `96 GiB`, which is `6` replicas, each with `16 GiB` memory allocation.
+
+
+
+## Handling spikes in workload {#handling-bursty-workloads}
+
+If you have an upcoming expected spike in your workload, you can use the
+[ClickHouse Cloud API](/cloud/manage/api/api-overview) to
+preemptively scale up your service to handle the spike and scale it down once
+the demand subsides.
+
+To understand the current CPU cores and memory in use for
+each of your replicas, you can run the query below:
+
+```sql
+SELECT *
+FROM clusterAllReplicas('default', view(
+ SELECT
+ hostname() AS server,
+ anyIf(value, metric = 'CGroupMaxCPU') AS cpu_cores,
+ formatReadableSize(anyIf(value, metric = 'CGroupMemoryTotal')) AS memory
+ FROM system.asynchronous_metrics
+))
+ORDER BY server ASC
+SETTINGS skip_unavailable_shards = 1
+```
diff --git a/docs/cloud/features/04_infrastructure/automatic_scaling/02_make_before_break.md b/docs/cloud/features/04_automatic_scaling/04_make_before_break.md
similarity index 100%
rename from docs/cloud/features/04_infrastructure/automatic_scaling/02_make_before_break.md
rename to docs/cloud/features/04_automatic_scaling/04_make_before_break.md
diff --git a/docs/cloud/features/04_automatic_scaling/05_automatic_idling.md b/docs/cloud/features/04_automatic_scaling/05_automatic_idling.md
new file mode 100644
index 00000000000..aca8723c3d6
--- /dev/null
+++ b/docs/cloud/features/04_automatic_scaling/05_automatic_idling.md
@@ -0,0 +1,30 @@
+---
+sidebar_position: 5
+sidebar_label: 'Automatic idling'
+slug: /cloud/features/autoscaling/idling
+description: 'Automatic idling and adaptive idling in ClickHouse Cloud'
+keywords: ['idling', 'automatic idling', 'adaptive idling', 'cost savings', 'pause']
+title: 'Automatic idling'
+doc_type: 'guide'
+---
+
+## Automatic idling {#automatic-idling}
+In the **Settings** page, you can also choose whether or not to allow automatic idling of your service when it is inactive for a certain duration (i.e. when the service isn't executing any user-submitted queries). Automatic idling reduces the cost of your service, as you're not billed for compute resources when the service is paused.
+
+### Adaptive idling {#adaptive-idling}
+ClickHouse Cloud implements adaptive idling to prevent disruptions while optimizing cost savings. The system evaluates several conditions before transitioning a service to idle. Adaptive idling overrides the configured idling duration when any of the following conditions are met:
+- When the number of parts exceeds the maximum idle parts threshold (default: 10,000), the service isn't idled so that background maintenance can continue
+- When there are ongoing merge operations, the service isn't idled until those merges complete to avoid interrupting critical data consolidation
+- Additionally, the service also adapts idle timeouts based on server initialization time:
+ - If server initialization time is less than 15 minutes, no adaptive timeout is applied and the customer-configured default idle timeout is used
+ - If server initialization time is between 15 and 30 minutes, the idle timeout is set to 15 minutes
+  - If server initialization time is between 30 and 60 minutes, the idle timeout is set to 30 minutes
+ - If server initialization time is more than 60 minutes, the idle timeout is set to 1 hour
+
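+The initialization-time tiers above amount to a simple lookup, sketched here for illustration (the boundary behavior at exactly 15, 30, and 60 minutes is an assumption):
+
+```python
+# Illustrative sketch of the adaptive idle-timeout tiers listed above.
+def adaptive_idle_timeout(init_minutes: float, configured_minutes: float) -> float:
+    """Return the effective idle timeout in minutes."""
+    if init_minutes < 15:
+        return configured_minutes  # no adaptive override applied
+    if init_minutes < 30:
+        return 15
+    if init_minutes < 60:
+        return 30
+    return 60
+```
+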
+:::note
+The service may enter an idle state where it suspends refreshes of [refreshable materialized views](/materialized-view/refreshable-materialized-view), consumption from [S3Queue](/engines/table-engines/integrations/s3queue), and scheduling of new merges. Existing merge operations will complete before the service transitions to the idle state. To ensure continuous operation of refreshable materialized views and S3Queue consumption, disable the idle state functionality.
+:::
+
+:::danger When not to use automatic idling
+Use automatic idling only if your use case can handle a delay before responding to queries, because when a service is paused, connections to the service will time out. Automatic idling is ideal for services that are used infrequently and where a delay can be tolerated. It isn't recommended for services that power customer-facing features that are used frequently.
+:::
diff --git a/docs/cloud/features/04_automatic_scaling/06_scaling_recommendations.md b/docs/cloud/features/04_automatic_scaling/06_scaling_recommendations.md
new file mode 100644
index 00000000000..2105f192d14
--- /dev/null
+++ b/docs/cloud/features/04_automatic_scaling/06_scaling_recommendations.md
@@ -0,0 +1,21 @@
+---
+sidebar_position: 6
+sidebar_label: 'Scaling recommendations'
+slug: /cloud/features/autoscaling/scaling-recommendations
+description: 'Understanding scaling recommendations in ClickHouse Cloud'
+keywords: ['scaling recommendations', 'recommender', '2-window', 'autoscaling', 'optimization']
+title: 'Scaling recommendations'
+doc_type: 'guide'
+---
+
+ClickHouse Cloud automatically adjusts CPU and memory resources for each service based on real-time usage, ensuring stable performance while minimizing wasted resources. To balance responsiveness with stability, we use a two-window recommender system that monitors utilization over both a short 3-hour window and a longer 30-hour window. This allows us to react quickly to changes while also making decisions based on longer-term trends.
+
+When usage increases, the system references the long window so it can scale up in a single, decisive step to the highest observed load within the past 30 hours. This approach minimizes repeated scale events. Conversely, when traffic declines, the short window guides a quick scale-down within about three hours, conserving resources.
+
+By integrating these two perspectives, the recommender intelligently balances responsiveness with stability.
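+
+The asymmetry between the two windows can be sketched as follows. This is a simplified illustration of the policy described above, not the actual recommender (which works on CPU and memory utilization, not abstract sizes):
+
+```python
+# Illustrative sketch of the two-window scaling policy.
+def recommend_size(current_size: float, peak_3h: float, peak_30h: float) -> float:
+    if peak_3h > current_size:
+        # Demand exceeds capacity: scale up in one decisive step
+        # to the highest load observed over the past 30 hours.
+        return max(peak_30h, peak_3h)
+    # Demand has declined: the short 3-hour window guides the scale-down.
+    return peak_3h
+```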
+
+## Benefits {#benefits}
+
+- **Cost optimization:** Right-size your services to avoid paying for unused resources while maintaining performance.
+- **Proactive scaling:** Get ahead of potential performance issues before they impact your workloads.
+- **Balanced approach:** The 2-window design prevents over-provisioning from transient spikes while still ensuring adequate headroom for real demand.
\ No newline at end of file
diff --git a/docs/cloud/features/04_automatic_scaling/07_scheduled_scaling.md b/docs/cloud/features/04_automatic_scaling/07_scheduled_scaling.md
new file mode 100644
index 00000000000..210af9cf597
--- /dev/null
+++ b/docs/cloud/features/04_automatic_scaling/07_scheduled_scaling.md
@@ -0,0 +1,54 @@
+---
+sidebar_position: 7
+sidebar_label: 'Scheduled scaling'
+slug: /cloud/features/autoscaling/scheduled-scaling
+description: 'Article discussing the Scheduled Scaling feature in ClickHouse Cloud'
+keywords: ['scheduled scaling']
+title: 'Scheduled scaling'
+doc_type: 'guide'
+---
+
+import PrivatePreviewBadge from '@theme/badges/PrivatePreviewBadge';
+import Image from '@theme/IdealImage';
+import scheduled_scaling_1 from '@site/static/images/cloud/features/autoscaling/scheduled-scaling-1.png';
+import scheduled_scaling_2 from '@site/static/images/cloud/features/autoscaling/scheduled-scaling-2.png';
+
+
+
+ClickHouse Cloud services automatically scale based on CPU and memory utilization, but many workloads follow predictable patterns — daily ingestion spikes, batch jobs that run overnight, or traffic that drops sharply on weekends. For these use cases, Scheduled Scaling lets you define exactly when your service should scale up or down, independent of real-time metrics.
+
+With Scheduled Scaling, you configure a set of time-based rules directly in the ClickHouse Cloud console. Each rule specifies a time, a recurrence (daily, weekly, or custom), and the target size — either the number of replicas (horizontal) or the memory tier (vertical). At the scheduled time, ClickHouse Cloud automatically applies the change, so your service is sized appropriately before demand arrives rather than reacting after the fact.
+
+This is distinct from metric-based autoscaling, which responds dynamically to CPU and memory pressure. Scheduled Scaling is deterministic: you know exactly when the scaling will happen and to what size. The two approaches are complementary — a service can have a baseline scaling schedule and still benefit from autoscaling within that window if workloads fluctuate unexpectedly.
+
+Scheduled Scaling is currently available in **Private Preview**. To enable it for your organization, contact the ClickHouse support team.
+
+## Setting up a scaling schedule {#setting-up-a-scaling-schedule}
+
+To configure a schedule, navigate to your service in the ClickHouse Cloud console and open the **Settings** page. From there, select **Schedule Override** and add a new rule.
+
+
+
+
+
+Each rule requires:
+
+- **Time:** When the scaling action should occur (in your local timezone)
+- **Recurrence:** How often the rule repeats (e.g. every weekday, every Sunday)
+- **Target size:** The number of replicas or memory allocation to scale to
+
+Multiple rules can be combined to form a full weekly schedule. For example, you might scale out to 5 replicas every weekday at 6 AM and scale back to 2 replicas at 8 PM.
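+
+The weekday example above can be modeled as a small rule table. Scheduled Scaling itself is configured in the console, so this is purely an illustration of how such rules resolve to a replica count; the rule shape and the baseline are assumptions:
+
+```python
+# Illustrative model of a weekly schedule: weekdays scale out to 5
+# replicas at 06:00 and back in to 2 replicas at 20:00.
+import datetime
+
+RULES = [
+    # (weekdays as Monday=0..Sunday=6, hour, target replicas),
+    # listed in time order so the last match is the most recent rule.
+    ({0, 1, 2, 3, 4}, 6, 5),
+    ({0, 1, 2, 3, 4}, 20, 2),
+]
+
+def replicas_at(ts: datetime.datetime, baseline: int = 2) -> int:
+    """Return the replica count implied by the most recent matching rule."""
+    target = baseline
+    for days, hour, replicas in RULES:
+        if ts.weekday() in days and ts.hour >= hour:
+            target = replicas
+    return target
+
+print(replicas_at(datetime.datetime(2025, 1, 6, 12)))  # Monday noon: 5
+```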
+
+## Use cases {#use-cases}
+
+**Batch and ETL workloads:** Scale up before a nightly ingest job runs and scale back down once it completes, avoiding over-provisioning during idle daytime hours.
+
+**Predictable traffic patterns:** Services with consistent peak hours (e.g. business-hours query traffic) can be pre-scaled to handle load before it arrives, rather than waiting for autoscaling to react.
+
+**Weekend scale-down:** Reduce replica count or memory tier over weekends when demand is lower, then restore capacity before the Monday morning surge.
+
+**Cost control:** For teams managing ClickHouse Cloud spend, scheduled scale-downs during known low-utilization periods can meaningfully reduce resource consumption without any manual intervention.
+
+:::note
+A scheduled scaling action and a concurrent autoscaling recommendation may interact — the schedule takes precedence at its trigger time.
+:::
diff --git a/docs/cloud/features/04_infrastructure/automatic_scaling/_category_.json b/docs/cloud/features/04_automatic_scaling/_category_.json
similarity index 97%
rename from docs/cloud/features/04_infrastructure/automatic_scaling/_category_.json
rename to docs/cloud/features/04_automatic_scaling/_category_.json
index 8766f31a1a3..6ab54ef17e5 100644
--- a/docs/cloud/features/04_infrastructure/automatic_scaling/_category_.json
+++ b/docs/cloud/features/04_automatic_scaling/_category_.json
@@ -2,4 +2,4 @@
"label": "Automatic Scaling",
"collapsible": true,
"collapsed": true
-}
\ No newline at end of file
+}
diff --git a/docs/cloud/features/04_infrastructure/automatic_scaling/01_auto_scaling.md b/docs/cloud/features/04_infrastructure/automatic_scaling/01_auto_scaling.md
deleted file mode 100644
index 793450346eb..00000000000
--- a/docs/cloud/features/04_infrastructure/automatic_scaling/01_auto_scaling.md
+++ /dev/null
@@ -1,165 +0,0 @@
----
-sidebar_position: 1
-sidebar_label: 'Automatic scaling'
-slug: /manage/scaling
-description: 'Configuring automatic scaling in ClickHouse Cloud'
-keywords: ['autoscaling', 'auto scaling', 'scaling', 'horizontal', 'vertical', 'bursts']
-title: 'Automatic scaling'
-doc_type: 'guide'
----
-
-import Image from '@theme/IdealImage';
-import auto_scaling from '@site/static/images/cloud/manage/AutoScaling.png';
-import scaling_patch_request from '@site/static/images/cloud/manage/scaling-patch-request.png';
-import scaling_patch_response from '@site/static/images/cloud/manage/scaling-patch-response.png';
-import scaling_configure from '@site/static/images/cloud/manage/scaling-configure.png';
-import scaling_memory_allocation from '@site/static/images/cloud/manage/scaling-memory-allocation.png';
-import ScalePlanFeatureBadge from '@theme/badges/ScalePlanFeatureBadge'
-
-# Automatic scaling
-
-Scaling is the ability to adjust available resources to meet client demands. Scale and Enterprise (with standard 1:4 profile) tier services can be scaled horizontally by calling an API programmatically, or changing settings on the UI to adjust system resources. These services can also be **autoscaled** vertically to meet application demands.
-
-
-
-:::note
-Scale and Enterprise tiers supports both single and multi-replica services, whereas, Basic tier supports only single replica services. Single replica services are meant to be fixed in size and don't allow vertical or horizontal scaling. You can upgrade to the Scale or Enterprise tier to scale your services.
-:::
-
-## How scaling works in ClickHouse Cloud {#how-scaling-works-in-clickhouse-cloud}
-
-Currently, ClickHouse Cloud supports vertical autoscaling and manual horizontal scaling for Scale tier services.
-
-For Enterprise tier services scaling works as follows:
-
-- **Horizontal scaling**: Manual horizontal scaling will be available across all standard and custom profiles on the enterprise tier.
-- **Vertical scaling**:
- - Standard profiles (1:4) will support vertical autoscaling.
- - Custom profiles (`highMemory`) don't support vertical autoscaling or manual vertical scaling. However, these services can be scaled vertically by contacting support.
-
-:::note
-Scaling in ClickHouse Cloud happens in what we call a ["Make Before Break" (MBB)](/cloud/features/mbb) approach.
-This adds one or more replicas of the new size before removing the old replicas, preventing any loss of capacity during scaling operations.
-By eliminating the gap between removing existing replicas and adding new ones, MBB creates a more seamless and less disruptive scaling process.
-It is especially beneficial in scale-up scenarios, where high resource utilization triggers the need for additional capacity, since removing replicas prematurely would only exacerbate the resource constraints.
-As part of this approach, we wait up to an hour to let any existing queries complete on the older replicas before removing them.
-This balances the need for existing queries to complete, while at the same time ensuring that older replicas don't linger around for too long.
-:::
-
-### Vertical auto scaling {#vertical-auto-scaling}
-
-
-
-Scale and Enterprise services support autoscaling based on CPU and memory usage. We constantly monitor the historical usage of a service over a lookback window (spanning the past 30 hours) to make scaling decisions. If the usage rises above or falls below certain thresholds, we scale the service appropriately to match the demand.
-
-For non-MBB services, CPU-based autoscaling kicks in when CPU usage crosses an upper threshold in the range of 50-75% (actual threshold depends on the size of the cluster). At this point, CPU allocation to the cluster is doubled. If CPU usage falls below half of the upper threshold (for instance, to 25% in case of a 50% upper threshold), CPU allocation is halved.
-
-For services already utilizing the MBB scaling approach, scaling up happens at a CPU threshold of 75%, and scale down happens at half of that threshold, or 37.5%.
-
-Memory-based auto-scaling scales the cluster to 125% of the maximum memory usage, or up to 150% if OOM (out of memory) errors are encountered.
-
-The **larger** of the CPU or memory recommendation is picked, and CPU and memory allocated to the service are scaled in lockstep increments of `1` CPU and `4 GiB` memory.
-
-### Configuring vertical auto scaling {#configuring-vertical-auto-scaling}
-
-The scaling of ClickHouse Cloud Scale or Enterprise services can be adjusted by organization members with the **Admin** role. To configure vertical autoscaling, go to the **Settings** tab for your service and adjust the minimum and maximum memory, along with CPU settings as shown below.
-
-:::note
-Single replica services can't be scaled for all tiers.
-:::
-
-
-
-Set the **Maximum memory** for your replicas at a higher value than the **Minimum memory**. The service will then scale as needed within those bounds. These settings are also available during the initial service creation flow. Each replica in your service will be allocated the same memory and CPU resources.
-
-You can also choose to set these values the same, essentially "pinning" the service to a specific configuration. Doing so will immediately force scaling to the desired size you picked.
-
-It's important to note that this will disable any auto scaling on the cluster, and your service won't be protected against increases in CPU or memory usage beyond these settings.
-
-:::note
-For Enterprise tier services, standard 1:4 profiles will support vertical autoscaling.
-Custom profiles won't support vertical autoscaling or manual vertical scaling at launch.
-However, these services can be scaled vertically by contacting support.
-:::
-
-## Manual horizontal scaling {#manual-horizontal-scaling}
-
-
-
-You can use ClickHouse Cloud [public APIs](https://clickhouse.com/docs/cloud/manage/api/swagger#/paths/~1v1~1organizations~1:organizationId~1services~1:serviceId~1scaling/patch) to scale your service by updating the scaling settings for the service or adjust the number of replicas from the cloud console.
-
-**Scale** and **Enterprise** tiers also support single-replica services. Services once scaled out, can be scaled back in to a minimum of a single replica. Note that single replica services have reduced availability and aren't recommended for production usage.
-
-:::note
-Services can scale horizontally to a maximum of 20 replicas. If you need additional replicas, please contact our support team.
-:::
-
-### Horizontal scaling via API {#horizontal-scaling-via-api}
-
-To horizontally scale a cluster, issue a `PATCH` request via the API to adjust the number of replicas. The screenshots below show an API call to scale out a `3` replica cluster to `6` replicas, and the corresponding response.
-
-
-
-*`PATCH` request to update `numReplicas`*
-
-
-
-*Response from `PATCH` request*
-
-If you issue a new scaling request (or multiple requests in succession) while one is already in progress, the scaling service ignores intermediate states and converges on the final replica count.
-
-### Horizontal scaling via UI {#horizontal-scaling-via-ui}
-
-To scale a service horizontally from the UI, you can adjust the number of replicas for the service on the **Settings** page.
-
-
-
-*Service scaling settings from the ClickHouse Cloud console*
-
-Once the service has scaled, the metrics dashboard in the cloud console should show the correct allocation to the service. The screenshot below shows the cluster having scaled to total memory of `96 GiB`, which is `6` replicas, each with `16 GiB` memory allocation.
-
-
-
-## Automatic idling {#automatic-idling}
-In the **Settings** page, you can also choose whether or not to allow automatic idling of your service when it is inactive for a certain duration (i.e. when the service isn't executing any user-submitted queries). Automatic idling reduces the cost of your service, as you're not billed for compute resources when the service is paused.
-
-### Adaptive Idling {#adaptive-idling}
-ClickHouse Cloud implements adaptive idling to prevent disruptions while optimizing cost savings. The system evaluates several conditions before transitioning a service to idle. Adaptive idling overrides the idling duration setting when any of the below listed conditions are met:
-- When the number of parts exceeds the maximum idle parts threshold (default: 10,000), the service isn't idled so that background maintenance can continue
-- When there are ongoing merge operations, the service isn't idled until those merges complete to avoid interrupting critical data consolidation
-- Additionally, the service also adapts idle timeouts based on server initialization time:
- - If server initialization time is less than 15 minutes, no adaptive timeout is applied and the customer-configured default idle timeout is used
- - If server initialization time is between 15 and 30 minutes, the idle timeout is set to 15 minutes
- - If server initialization time is between 30 and 60 minutes, the idle timeout is set to 30 minutes.
- - If server initialization time is more than 60 minutes, the idle timeout is set to 1 hour
-
-:::note
-The service may enter an idle state where it suspends refreshes of [refreshable materialized views](/materialized-view/refreshable-materialized-view), consumption from [S3Queue](/engines/table-engines/integrations/s3queue), and scheduling of new merges. Existing merge operations will complete before the service transitions to the idle state. To ensure continuous operation of refreshable materialized views and S3Queue consumption, disable the idle state functionality.
-:::
-
-:::danger When not to use automatic idling
-Use automatic idling only if your use case can handle a delay before responding to queries, because when a service is paused, connections to the service will time out. Automatic idling is ideal for services that are used infrequently and where a delay can be tolerated. It isn't recommended for services that power customer-facing features that are used frequently.
-:::
-
-## Handling spikes in workload {#handling-bursty-workloads}
-
-If you expect a spike in your workload, you can use the
-[ClickHouse Cloud API](/cloud/manage/api/api-overview) to
-preemptively scale up your service to handle the spike and scale it down once
-the demand subsides.
-
-To understand the current CPU cores and memory in use for
-each of your replicas, you can run the query below:
-
-```sql
-SELECT *
-FROM clusterAllReplicas('default', view(
- SELECT
- hostname() AS server,
- anyIf(value, metric = 'CGroupMaxCPU') AS cpu_cores,
- formatReadableSize(anyIf(value, metric = 'CGroupMemoryTotal')) AS memory
- FROM system.asynchronous_metrics
-))
-ORDER BY server ASC
-SETTINGS skip_unavailable_shards = 1
-```
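The preemptive scale-up, then scale-down flow can be sketched against the Cloud API's service scaling endpoint. The endpoint path, the `numReplicas` field, and the basic-auth scheme below are assumptions drawn from the API overview; verify them against the current OpenAPI spec before use:

```python
import base64
import json
import urllib.request

API_BASE = "https://api.clickhouse.cloud/v1"

def scaling_request(org_id: str, service_id: str, num_replicas: int):
    """Build (url, body) for a PATCH to the service scaling endpoint.

    Assumed shape: PATCH /organizations/{org}/services/{service}/scaling
    with a `numReplicas` field. Confirm against the current API spec.
    """
    url = f"{API_BASE}/organizations/{org_id}/services/{service_id}/scaling"
    return url, {"numReplicas": num_replicas}

def patch_scaling(org_id, service_id, key_id, key_secret, num_replicas):
    """Send the request; the API authenticates via HTTP basic auth
    using an API key ID and secret."""
    url, body = scaling_request(org_id, service_id, num_replicas)
    req = urllib.request.Request(
        url, data=json.dumps(body).encode(), method="PATCH"
    )
    req.add_header("Content-Type", "application/json")
    token = base64.b64encode(f"{key_id}:{key_secret}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Before an expected spike: patch_scaling(org, svc, key_id, key_secret, 6)
# Once demand subsides:     patch_scaling(org, svc, key_id, key_secret, 3)
```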
diff --git a/docs/cloud/features/04_infrastructure/_category_.json b/docs/cloud/features/05_infrastructure/_category_.json
similarity index 100%
rename from docs/cloud/features/04_infrastructure/_category_.json
rename to docs/cloud/features/05_infrastructure/_category_.json
diff --git a/docs/cloud/features/04_infrastructure/deployment-options.md b/docs/cloud/features/05_infrastructure/deployment-options.md
similarity index 100%
rename from docs/cloud/features/04_infrastructure/deployment-options.md
rename to docs/cloud/features/05_infrastructure/deployment-options.md
diff --git a/docs/cloud/features/04_infrastructure/replica-aware-routing.md b/docs/cloud/features/05_infrastructure/replica-aware-routing.md
similarity index 100%
rename from docs/cloud/features/04_infrastructure/replica-aware-routing.md
rename to docs/cloud/features/05_infrastructure/replica-aware-routing.md
diff --git a/docs/cloud/features/04_infrastructure/shared-catalog.md b/docs/cloud/features/05_infrastructure/shared-catalog.md
similarity index 100%
rename from docs/cloud/features/04_infrastructure/shared-catalog.md
rename to docs/cloud/features/05_infrastructure/shared-catalog.md
diff --git a/docs/cloud/features/04_infrastructure/shared-merge-tree.md b/docs/cloud/features/05_infrastructure/shared-merge-tree.md
similarity index 100%
rename from docs/cloud/features/04_infrastructure/shared-merge-tree.md
rename to docs/cloud/features/05_infrastructure/shared-merge-tree.md
diff --git a/docs/cloud/features/04_infrastructure/warehouses.md b/docs/cloud/features/05_infrastructure/warehouses.md
similarity index 100%
rename from docs/cloud/features/04_infrastructure/warehouses.md
rename to docs/cloud/features/05_infrastructure/warehouses.md
diff --git a/docs/cloud/features/05_admin_features/_category_.json b/docs/cloud/features/06_admin_features/_category_.json
similarity index 100%
rename from docs/cloud/features/05_admin_features/_category_.json
rename to docs/cloud/features/06_admin_features/_category_.json
diff --git a/docs/cloud/features/05_admin_features/api/api-overview.md b/docs/cloud/features/06_admin_features/api/api-overview.md
similarity index 100%
rename from docs/cloud/features/05_admin_features/api/api-overview.md
rename to docs/cloud/features/06_admin_features/api/api-overview.md
diff --git a/docs/cloud/features/05_admin_features/api/index.md b/docs/cloud/features/06_admin_features/api/index.md
similarity index 100%
rename from docs/cloud/features/05_admin_features/api/index.md
rename to docs/cloud/features/06_admin_features/api/index.md
diff --git a/docs/cloud/features/05_admin_features/api/openapi.md b/docs/cloud/features/06_admin_features/api/openapi.md
similarity index 100%
rename from docs/cloud/features/05_admin_features/api/openapi.md
rename to docs/cloud/features/06_admin_features/api/openapi.md
diff --git a/docs/cloud/features/05_admin_features/api/postman.md b/docs/cloud/features/06_admin_features/api/postman.md
similarity index 100%
rename from docs/cloud/features/05_admin_features/api/postman.md
rename to docs/cloud/features/06_admin_features/api/postman.md
diff --git a/docs/cloud/features/05_admin_features/upgrades.md b/docs/cloud/features/06_admin_features/upgrades.md
similarity index 100%
rename from docs/cloud/features/05_admin_features/upgrades.md
rename to docs/cloud/features/06_admin_features/upgrades.md
diff --git a/docs/cloud/features/06_security.md b/docs/cloud/features/07_security.md
similarity index 100%
rename from docs/cloud/features/06_security.md
rename to docs/cloud/features/07_security.md
diff --git a/docs/cloud/features/07_monitoring/_category_.json b/docs/cloud/features/08_monitoring/_category_.json
similarity index 100%
rename from docs/cloud/features/07_monitoring/_category_.json
rename to docs/cloud/features/08_monitoring/_category_.json
diff --git a/docs/cloud/features/07_monitoring/advanced_dashboard.md b/docs/cloud/features/08_monitoring/advanced_dashboard.md
similarity index 100%
rename from docs/cloud/features/07_monitoring/advanced_dashboard.md
rename to docs/cloud/features/08_monitoring/advanced_dashboard.md
diff --git a/docs/cloud/features/07_monitoring/cloud-console.md b/docs/cloud/features/08_monitoring/cloud-console.md
similarity index 100%
rename from docs/cloud/features/07_monitoring/cloud-console.md
rename to docs/cloud/features/08_monitoring/cloud-console.md
diff --git a/docs/cloud/features/07_monitoring/cloud-notifications.md b/docs/cloud/features/08_monitoring/cloud-notifications.md
similarity index 100%
rename from docs/cloud/features/07_monitoring/cloud-notifications.md
rename to docs/cloud/features/08_monitoring/cloud-notifications.md
diff --git a/docs/cloud/features/07_monitoring/integrations.md b/docs/cloud/features/08_monitoring/integrations.md
similarity index 100%
rename from docs/cloud/features/07_monitoring/integrations.md
rename to docs/cloud/features/08_monitoring/integrations.md
diff --git a/docs/cloud/features/07_monitoring/overview.md b/docs/cloud/features/08_monitoring/overview.md
similarity index 100%
rename from docs/cloud/features/07_monitoring/overview.md
rename to docs/cloud/features/08_monitoring/overview.md
diff --git a/docs/cloud/features/07_monitoring/prometheus.md b/docs/cloud/features/08_monitoring/prometheus.md
similarity index 100%
rename from docs/cloud/features/07_monitoring/prometheus.md
rename to docs/cloud/features/08_monitoring/prometheus.md
diff --git a/docs/cloud/features/07_monitoring/system-tables.md b/docs/cloud/features/08_monitoring/system-tables.md
similarity index 100%
rename from docs/cloud/features/07_monitoring/system-tables.md
rename to docs/cloud/features/08_monitoring/system-tables.md
diff --git a/docs/cloud/features/08_backups.md b/docs/cloud/features/09_backups.md
similarity index 100%
rename from docs/cloud/features/08_backups.md
rename to docs/cloud/features/09_backups.md
diff --git a/docs/cloud/features/09_AI_ML/AI_chat_overview.md b/docs/cloud/features/10_AI_ML/AI_chat_overview.md
similarity index 100%
rename from docs/cloud/features/09_AI_ML/AI_chat_overview.md
rename to docs/cloud/features/10_AI_ML/AI_chat_overview.md
diff --git a/docs/cloud/features/09_AI_ML/_category_.json b/docs/cloud/features/10_AI_ML/_category_.json
similarity index 100%
rename from docs/cloud/features/09_AI_ML/_category_.json
rename to docs/cloud/features/10_AI_ML/_category_.json
diff --git a/docs/cloud/features/09_AI_ML/index.md b/docs/cloud/features/10_AI_ML/index.md
similarity index 100%
rename from docs/cloud/features/09_AI_ML/index.md
rename to docs/cloud/features/10_AI_ML/index.md
diff --git a/docs/cloud/features/09_AI_ML/langfuse.md b/docs/cloud/features/10_AI_ML/langfuse.md
similarity index 100%
rename from docs/cloud/features/09_AI_ML/langfuse.md
rename to docs/cloud/features/10_AI_ML/langfuse.md
diff --git a/docs/cloud/features/09_AI_ML/remote_mcp_overview.md b/docs/cloud/features/10_AI_ML/remote_mcp_overview.md
similarity index 100%
rename from docs/cloud/features/09_AI_ML/remote_mcp_overview.md
rename to docs/cloud/features/10_AI_ML/remote_mcp_overview.md
diff --git a/docs/cloud/features/10_support.md b/docs/cloud/features/11_support.md
similarity index 100%
rename from docs/cloud/features/10_support.md
rename to docs/cloud/features/11_support.md
diff --git a/docs/cloud/onboard/02_migrate/01_migration_guides/03_bigquery/01_overview.md b/docs/cloud/onboard/02_migrate/01_migration_guides/03_bigquery/01_overview.md
index 4372e5b8b08..a8d1a0f6821 100644
--- a/docs/cloud/onboard/02_migrate/01_migration_guides/03_bigquery/01_overview.md
+++ b/docs/cloud/onboard/02_migrate/01_migration_guides/03_bigquery/01_overview.md
@@ -41,7 +41,7 @@ ClickHouse Cloud currently has no concept equivalent to BigQuery folders.
### BigQuery Slot reservations and Quotas {#bigquery-slot-reservations-and-quotas}
-Like BigQuery slot reservations, you can [configure vertical and horizontal autoscaling](/manage/scaling#configuring-vertical-auto-scaling) in ClickHouse Cloud. For vertical autoscaling, you can set the minimum and maximum size for the memory and CPU cores of the compute nodes for a service. The service will then scale as needed within those bounds. These settings are also available during the initial service creation flow. Each compute node in the service has the same size. You can change the number of compute nodes within a service with [horizontal scaling](/manage/scaling#manual-horizontal-scaling).
+Like BigQuery slot reservations, you can [configure vertical and horizontal autoscaling](/cloud/features/autoscaling/vertical#configuring-vertical-auto-scaling) in ClickHouse Cloud. For vertical autoscaling, you can set the minimum and maximum size for the memory and CPU cores of the compute nodes for a service. The service will then scale as needed within those bounds. These settings are also available during the initial service creation flow. Each compute node in the service has the same size. You can change the number of compute nodes within a service with [horizontal scaling](/cloud/features/autoscaling/horizontal#manual-horizontal-scaling).
Furthermore, similar to BigQuery quotas, ClickHouse Cloud offers concurrency control, memory usage limits, and I/O scheduling, enabling you to isolate queries into workload classes. By setting limits on shared resources (CPU cores, DRAM, disk and network I/O) for specific workload classes, it ensures these queries don't affect other critical business queries. Concurrency control prevents thread oversubscription in scenarios with a high number of concurrent queries.
diff --git a/docs/cloud/reference/01_changelog/01_changelog.md b/docs/cloud/reference/01_changelog/01_changelog.md
index fc687a6aa33..6818ab44e9c 100644
--- a/docs/cloud/reference/01_changelog/01_changelog.md
+++ b/docs/cloud/reference/01_changelog/01_changelog.md
@@ -373,7 +373,7 @@ We are introducing a new vertical scaling mechanism for compute replicas, which
### Horizontal scaling (GA) {#horizontal-scaling-ga}
-Horizontal scaling is now Generally Available. You can add additional replicas to scale out their service through the APIs and the cloud console. Please refer to the [documentation](/manage/scaling#manual-horizontal-scaling) for information.
+Horizontal scaling is now Generally Available. You can add additional replicas to scale out your service through the APIs and the cloud console. Please refer to the [documentation](/cloud/features/autoscaling/horizontal#manual-horizontal-scaling) for more information.
### Configurable backups {#configurable-backups}
@@ -1163,7 +1163,7 @@ This release brings the public release of the ClickHouse Cloud Programmatic API
- S3 access using IAM roles. You can now leverage IAM roles to securely access your private Amazon Simple Storage Service (S3) buckets (please contact support to set it up)
### Scaling changes {#scaling-changes}
-- [Horizontal scaling](/manage/scaling#manual-horizontal-scaling). Workloads that require more parallelization can now be configured with up to 10 replicas (please contact support to set it up)
+- [Horizontal scaling](/cloud/features/autoscaling/horizontal#manual-horizontal-scaling). Workloads that require more parallelization can now be configured with up to 10 replicas (please contact support to set it up)
- [CPU based autoscaling](/manage/scaling). CPU-bound workloads can now benefit from additional triggers for autoscaling policies
### Console changes {#console-changes-17}
diff --git a/docs/cloud/reference/02_architecture.md b/docs/cloud/reference/02_architecture.md
index 5387dcb2da8..62175fcbbf1 100644
--- a/docs/cloud/reference/02_architecture.md
+++ b/docs/cloud/reference/02_architecture.md
@@ -53,4 +53,4 @@ For GCP and Azure, services have object storage isolation (all services have the
There is no limit to the number of queries per second (QPS) in your ClickHouse Cloud service. There is, however, a limit of 1000 concurrent queries per replica. QPS is ultimately a function of your average query execution time and the number of replicas in your service.
-A major benefit of ClickHouse Cloud compared to a self-managed ClickHouse instance or other databases/data warehouses is that you can easily increase concurrency by [adding more replicas (horizontal scaling)](/manage/scaling#manual-horizontal-scaling).
+A major benefit of ClickHouse Cloud compared to a self-managed ClickHouse instance or other databases/data warehouses is that you can easily increase concurrency by [adding more replicas (horizontal scaling)](/cloud/features/autoscaling/horizontal#manual-horizontal-scaling).
diff --git a/docs/getting-started/quick-start/cloud.mdx b/docs/getting-started/quick-start/cloud.mdx
index 9345f34e042..cacc0e70590 100644
--- a/docs/getting-started/quick-start/cloud.mdx
+++ b/docs/getting-started/quick-start/cloud.mdx
@@ -47,7 +47,7 @@ Once you're logged in, ClickHouse Cloud starts the onboarding wizard which walks
-By default, new organizations are put on the Scale tier and will create 3 replicas each with 4 VCPUs and 16 GiB RAM. [Vertical autoscaling](/manage/scaling#vertical-auto-scaling) will be enabled by default in the Scale tier. You can change your organization tier later on the 'Plans' page.
+By default, new organizations are put on the Scale tier and will create 3 replicas each with 4 VCPUs and 16 GiB RAM. [Vertical autoscaling](/cloud/features/autoscaling/vertical#vertical-auto-scaling) will be enabled by default in the Scale tier. You can change your organization tier later on the 'Plans' page.
Customize the service resources if needed by specifying a minimum and maximum size for replicas to scale between. When ready, select `Create service`.
diff --git a/docs/integrations/data-ingestion/clickpipes/object-storage/amazon-s3/01_overview.md b/docs/integrations/data-ingestion/clickpipes/object-storage/amazon-s3/01_overview.md
index 312b0a920d2..ced9946b54f 100644
--- a/docs/integrations/data-ingestion/clickpipes/object-storage/amazon-s3/01_overview.md
+++ b/docs/integrations/data-ingestion/clickpipes/object-storage/amazon-s3/01_overview.md
@@ -206,7 +206,7 @@ ClickPipes provides sensible defaults that cover the requirements of most use ca
### Scaling {#scaling}
-Object Storage ClickPipes are scaled based on the minimum ClickHouse service size determined by the [configured vertical autoscaling settings](/manage/scaling#configuring-vertical-auto-scaling). The size of the ClickPipe is determined when the pipe is created. Subsequent changes to the ClickHouse service settings won't affect the ClickPipe size.
+Object Storage ClickPipes are scaled based on the minimum ClickHouse service size determined by the [configured vertical autoscaling settings](/cloud/features/autoscaling/vertical#configuring-vertical-auto-scaling). The size of the ClickPipe is determined when the pipe is created. Subsequent changes to the ClickHouse service settings won't affect the ClickPipe size.
To increase the throughput on large ingest jobs, we recommend scaling the ClickHouse service before creating the ClickPipe.
diff --git a/docs/integrations/data-ingestion/clickpipes/object-storage/azure-blob-storage/01_overview.md b/docs/integrations/data-ingestion/clickpipes/object-storage/azure-blob-storage/01_overview.md
index da2e1a71dd6..dd3e2d70f58 100644
--- a/docs/integrations/data-ingestion/clickpipes/object-storage/azure-blob-storage/01_overview.md
+++ b/docs/integrations/data-ingestion/clickpipes/object-storage/azure-blob-storage/01_overview.md
@@ -150,7 +150,7 @@ ClickPipes provides sensible defaults that cover the requirements of most use ca
### Scaling {#scaling}
-Object Storage ClickPipes are scaled based on the minimum ClickHouse service size determined by the [configured vertical autoscaling settings](/manage/scaling#configuring-vertical-auto-scaling). The size of the ClickPipe is determined when the pipe is created. Subsequent changes to the ClickHouse service settings won't affect the ClickPipe size.
+Object Storage ClickPipes are scaled based on the minimum ClickHouse service size determined by the [configured vertical autoscaling settings](/cloud/features/autoscaling/vertical#configuring-vertical-auto-scaling). The size of the ClickPipe is determined when the pipe is created. Subsequent changes to the ClickHouse service settings won't affect the ClickPipe size.
To increase the throughput on large ingest jobs, we recommend scaling the ClickHouse service before creating the ClickPipe.
diff --git a/docs/integrations/data-ingestion/clickpipes/object-storage/google-cloud-storage/01_overview.md b/docs/integrations/data-ingestion/clickpipes/object-storage/google-cloud-storage/01_overview.md
index fc1333400f2..bdcd5a7f9ec 100644
--- a/docs/integrations/data-ingestion/clickpipes/object-storage/google-cloud-storage/01_overview.md
+++ b/docs/integrations/data-ingestion/clickpipes/object-storage/google-cloud-storage/01_overview.md
@@ -162,7 +162,7 @@ ClickPipes provides sensible defaults that cover the requirements of most use ca
### Scaling {#scaling}
-Object Storage ClickPipes are scaled based on the minimum ClickHouse service size determined by the [configured vertical autoscaling settings](/manage/scaling#configuring-vertical-auto-scaling). The size of the ClickPipe is determined when the pipe is created. Subsequent changes to the ClickHouse service settings won't affect the ClickPipe size.
+Object Storage ClickPipes are scaled based on the minimum ClickHouse service size determined by the [configured vertical autoscaling settings](/cloud/features/autoscaling/vertical#configuring-vertical-auto-scaling). The size of the ClickPipe is determined when the pipe is created. Subsequent changes to the ClickHouse service settings won't affect the ClickPipe size.
To increase the throughput on large ingest jobs, we recommend scaling the ClickHouse service before creating the ClickPipe.
diff --git a/docs/use-cases/observability/clickstack/deployment/_snippets/_select_observability_resources.md b/docs/use-cases/observability/clickstack/deployment/_snippets/_select_observability_resources.md
index ca255987b05..9e807e17b05 100644
--- a/docs/use-cases/observability/clickstack/deployment/_snippets/_select_observability_resources.md
+++ b/docs/use-cases/observability/clickstack/deployment/_snippets/_select_observability_resources.md
@@ -9,7 +9,7 @@ This should be a rough estimate of the amount of data you have, either logs or t
-This estimate will be used to size the compute supporting your Managed ClickStack service. By default, new organizations are put on the [Scale tier](/cloud/manage/cloud-tiers). [Vertical autoscaling](/manage/scaling#vertical-auto-scaling) will be enabled by default in the Scale tier. You can change your organization tier later on the 'Plans' page.
+This estimate will be used to size the compute supporting your Managed ClickStack service. By default, new organizations are put on the [Scale tier](/cloud/manage/cloud-tiers). [Vertical autoscaling](/cloud/features/autoscaling/vertical#vertical-auto-scaling) will be enabled by default in the Scale tier. You can change your organization tier later on the 'Plans' page.
Advanced users with an understanding of their requirements can alternatively specify the exact resources provisioned, as well as any enterprise features, by selecting 'Custom Configuration' from the 'Memory and Scaling' dropdown.
diff --git a/scripts/aspell-ignore/en/aspell-dict.txt b/scripts/aspell-ignore/en/aspell-dict.txt
index b19bacdbe96..5428b1e8a3f 100644
--- a/scripts/aspell-ignore/en/aspell-dict.txt
+++ b/scripts/aspell-ignore/en/aspell-dict.txt
@@ -3313,6 +3313,7 @@ recompress
recompressed
recompressing
recompression
+recommender
reconfiguring
reconnection
recurse
diff --git a/static/images/cloud/features/autoscaling/scheduled-scaling-1.png b/static/images/cloud/features/autoscaling/scheduled-scaling-1.png
new file mode 100644
index 00000000000..e7996a3c6b8
Binary files /dev/null and b/static/images/cloud/features/autoscaling/scheduled-scaling-1.png differ
diff --git a/static/images/cloud/features/autoscaling/scheduled-scaling-2.png b/static/images/cloud/features/autoscaling/scheduled-scaling-2.png
new file mode 100644
index 00000000000..bccc436a939
Binary files /dev/null and b/static/images/cloud/features/autoscaling/scheduled-scaling-2.png differ