
Commit 66ef987

Updated broken webinar links, added internal links, etc.
1 parent 421e6d2 commit 66ef987

File tree: 5 files changed (+41 / -22 lines)

content/en/altinity-kb-schema-design/materialized-views/_index.md

Lines changed: 13 additions & 4 deletions

@@ -1,15 +1,24 @@
 ---
-title: "MATERIALIZED VIEWS"
+title: "ClickHouse® MATERIALIZED VIEWS"
 linkTitle: "MATERIALIZED VIEWS"
 description: >
-  MATERIALIZED VIEWS
+  Making the most of this powerful ClickHouse® feature
+keywords:
+  - clickhouse materialized view
+  - create materialized view clickhouse
 ---
 
-MATERIALIZED VIEWs in ClickHouse® behave like AFTER INSERT TRIGGER to the left-most table listed in their SELECT statement and never read data from disk. Only rows that are placed to the RAM buffer by INSERT are read.
+ClickHouse® MATERIALIZED VIEWs behave like AFTER INSERT TRIGGER to the left-most table listed in their SELECT statement and never read data from disk. Only rows that are placed to the RAM buffer by INSERT are read.
 
 ## Useful links
 
-* ClickHouse and the magic of materialized views. Basics explained with examples: [webinar recording](https://altinity.com/webinarspage/2019/6/26/clickhouse-and-the-magic-of-materialized-views)
+* ClickHouse Materialized Views Illuminated, Part 1:
+  * [Blog post](https://altinity.com/blog/clickhouse-materialized-views-illuminated-part-1)
+  * [Webinar recording](https://altinity.com/blog/clickhouse-materialized-views-illuminated-part-1)
+  * [Slides](https://altinity.com/blog/clickhouse-materialized-views-illuminated-part-1)
+* ClickHouse Materialized Views Illuminated, Part 2:
+  * [Blog post](https://altinity.com/blog/clickhouse-materialized-views-illuminated-part-2)
+  * [Webinar recording](https://www.youtube.com/watch?v=THDk625DGsQ)
 * Everything you should know about materialized views - [annotated presentation](https://den-crane.github.io/Everything_you_should_know_about_materialized_views_commented.pdf)
 * Very detailed information about internals: [video](https://youtu.be/ckChUkC3Pns?t=9353)
 * One more [presentation](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup47/materialized_views.pdf)
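
To illustrate the trigger-like behavior described in this page (an editorial sketch, not part of the diff; all table and view names are hypothetical):

```sql
-- Raw events land here; every INSERT into this table fires the MV below.
CREATE TABLE events
(
    ts      DateTime,
    user_id UInt64,
    value   Float64
)
ENGINE = MergeTree
ORDER BY ts;

-- Target table that stores the pre-aggregated state.
CREATE TABLE events_daily
(
    day     Date,
    user_id UInt64,
    total   AggregateFunction(sum, Float64)
)
ENGINE = AggregatingMergeTree
ORDER BY (day, user_id);

-- The MV only sees the rows of each incoming INSERT block (the RAM buffer);
-- it never scans the historical data already on disk.
CREATE MATERIALIZED VIEW events_daily_mv TO events_daily AS
SELECT
    toDate(ts)      AS day,
    user_id,
    sumState(value) AS total
FROM events
GROUP BY day, user_id;
```

Reading from the target table then requires finalizing the state, e.g. `SELECT day, user_id, sumMerge(total) FROM events_daily GROUP BY day, user_id`.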

content/en/altinity-kb-setup-and-maintenance/altinity-kb-check-replication-ddl-queue.md

Lines changed: 11 additions & 6 deletions

@@ -1,13 +1,18 @@
 ---
-title: "Replication and DDL queue problems"
+title: "ClickHouse® Replication and DDL queue problems"
 linkTitle: "Replication and DDL queue problems"
 description: >
-  This article describes how to detect possible problems in the `replication_queue` and `distributed_ddl_queue` and how to troubleshoot.
+  Finding and troubleshooting problems in the `replication_queue` and `distributed_ddl_queue`
+keywords:
+  - clickhouse replication
+  - clickhouse ddl
+  - clickhouse check replication status
+  - clickhouse replication queue
 ---
 
-# How to check replication problems:
+# How to check ClickHouse® replication problems:
 
-1. check `system.replicas` first, cluster-wide. It allows to check if the problem is local to some replica or global, and allows to see the exception.
+1. Check `system.replicas` first, cluster-wide. It allows to check if the problem is local to some replica or global, and allows to see the exception.
    allows to answer the following questions:
    - Are there any ReadOnly replicas?
    - Is there the connection to zookeeper active?
@@ -90,7 +95,7 @@ FORMAT TSVRaw;
 
 Sometimes due to crashes, zookeeper split brain problem or other reasons some of the tables can be in Read-Only mode. This allows SELECTS but not INSERTS. So we need to do DROP / RESTORE replica procedure.
 
-Just to be clear, this procedure **will not delete any data**, it will just re-create the metadata in zookeeper with the current state of the ClickHouse replica.
+Just to be clear, this procedure **will not delete any data**, it will just re-create the metadata in zookeeper with the current state of the [ClickHouse replica](/altinity-kb-setup-and-maintenance/altinity-kb-data-migration/add_remove_replica/).
 
 ```sql
 DETACH TABLE table_name; -- Required for DROP REPLICA
@@ -171,7 +176,7 @@ restore_replica "$@"
 
 ### Stuck DDL tasks in the distributed_ddl_queue
 
-Sometimes DDL tasks (the ones that use ON CLUSTER) can get stuck in the `distributed_ddl_queue` because the replicas can overload if multiple DDLs (thousands of CREATE/DROP/ALTER) are executed at the same time. This is very normal in heavy ETL jobs.This can be detected by checking the `distributed_ddl_queue` table and see if there are tasks that are not moving or are stuck for a long time.
+Sometimes [DDL tasks](/altinity-kb-setup-and-maintenance/altinity-kb-ddlworker/) (the ones that use ON CLUSTER) can get stuck in the `distributed_ddl_queue` because the replicas can overload if multiple DDLs (thousands of CREATE/DROP/ALTER) are executed at the same time. This is very normal in heavy ETL jobs.This can be detected by checking the `distributed_ddl_queue` table and see if there are tasks that are not moving or are stuck for a long time.
 
 If these DDLs completed in some replicas but failed in others, the simplest way to solve this is to execute the failed command in the missed replicas without ON CLUSTER. If most of the DDLs failed then check the number of unfinished records in `distributed_ddl_queue` on the other nodes, because most probably it will be as high as thousands.
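
As a quick illustration of the checks this article recommends (an editorial sketch, not part of the diff; the thresholds are arbitrary examples):

```sql
-- Replicas that are read-only, lagging, or reporting a ZooKeeper exception.
-- Wrap in clusterAllReplicas('<cluster>', system.replicas) to run it cluster-wide.
SELECT database, table, is_readonly, absolute_delay, queue_size, zookeeper_exception
FROM system.replicas
WHERE is_readonly OR absolute_delay > 300 OR queue_size > 100;

-- ON CLUSTER tasks that are still sitting in the DDL queue on this node.
SELECT entry, cluster, status, query
FROM system.distributed_ddl_queue
WHERE status != 'Finished'
ORDER BY entry;
```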

content/en/altinity-kb-setup-and-maintenance/altinity-kb-data-migration/add_remove_replica.md

Lines changed: 8 additions & 4 deletions

@@ -2,12 +2,16 @@
 title: "Add/Remove a new replica to a ClickHouse® cluster"
 linkTitle: "add_remove_replica"
 description: >
-  How to add/remove a new replica manually and using clickhouse-backup
+  How to add/remove a new ClickHouse replica manually and using `clickhouse-backup`
+keywords:
+  - clickhouse replica
+  - clickhouse add replica
+  - clickhouse remove replica
 ---
 
 ## ADD nodes/replicas to a ClickHouse® cluster
 
-To add some replicas to an existing cluster if -30TB then better to use replication:
+To add some ClickHouse® replicas to an existing cluster if -30TB then better to use replication:
 
 - don’t add the `remote_servers.xml` until replication is done.
 - Add these files and restart to limit bandwidth and avoid saturation (70% total bandwidth):
@@ -94,7 +98,7 @@ clickhouse-client --host localhost --port 9000 -mn < schema.sql
 
 ### Using `clickhouse-backup`
 
-- Using `clickhouse-backup` to copy the schema of a replica to another is also convenient and moreover if using Atomic database with `{uuid}` macros in ReplicatedMergeTree engines:
+- Using `clickhouse-backup` to copy the schema of a replica to another is also convenient and moreover if [using Atomic database](/engines/altinity-kb-atomic-database-engine/) with `{uuid}` macros in [ReplicatedMergeTree engines](https://www.youtube.com/watch?v=oHwhXc0re6k):
 
 ```bash
 sudo -u clickhouse clickhouse-backup --schema --rbac create_remote full-replica
@@ -139,7 +143,7 @@ already exists. (REPLICA_ALREADY_EXISTS) (version 23.5.3.24 (official build)). (
 (query: CREATE TABLE IF NOT EXISTS xxxx.yyyy UUID '3c3503c3-ed3c-443b-9cb3-ef41b3aed0a8'
 ```
 
-The DDLs have been executed and some tables have been created and after that dropped but some left overs are left in ZK:
+[The DDLs](/altinity-kb-setup-and-maintenance/altinity-kb-check-replication-ddl-queue/) have been executed and some tables have been created and after that dropped but some left overs are left in ZK:
 - If databases can be dropped then use `DROP DATABASE xxxxx SYNC`
 - If databases cannot be dropped use `SYSTEM DROP REPLICA ‘replica_name’ FROM db.table`
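
For the leftover-metadata case mentioned at the end of this file, a hedged sketch of the two options (database, table, and replica names are placeholders):

```sql
-- Option 1: the database is disposable, so drop it together with its ZooKeeper metadata.
DROP DATABASE db_name SYNC;

-- Option 2: keep the data and only remove the stale replica's metadata from
-- ZooKeeper/Keeper. Note this must name a remote/dead replica, not the local one.
SYSTEM DROP REPLICA 'old_replica' FROM TABLE db_name.table_name;
SYSTEM DROP REPLICA 'old_replica' FROM DATABASE db_name;
```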

content/en/altinity-kb-setup-and-maintenance/altinity-kb-server-config-files.md

Lines changed: 6 additions & 5 deletions

@@ -1,11 +1,12 @@
 ---
-title: "Server config files"
+title: "Server configuration files"
 linkTitle: "Server config files"
 description: >
-  How to manage server config files in ClickHouse®
+  How to organize configuration files in ClickHouse® and how to manage changes
 keywords:
   - clickhouse config.xml
   - clickhouse configuration
+weight: 105
 ---
 
 ## Сonfig management (recommended structure)
@@ -16,7 +17,7 @@ By default they are stored in the folder **/etc/clickhouse-server/** in two file
 
 We suggest never change vendor config files and place your changes into separate .xml files in sub-folders. This way is easier to maintain and ease ClickHouse upgrades.
 
-**/etc/clickhouse-server/users.d** – sub-folder for user settings (derived from `users.xml` filename).
+**/etc/clickhouse-server/users.d** – sub-folder for [user settings](/altinity-kb-setup-and-maintenance/rbac/) (derived from `users.xml` filename).
 
 **/etc/clickhouse-server/config.d** – sub-folder for server settings (derived from `config.xml` filename).
 
@@ -84,7 +85,7 @@ cat /etc/clickhouse-server/users.d/memory_usage.xml
 </clickhouse>
 ```
 
-BTW, you can define any macro in your configuration and use them in Zookeeper paths
+BTW, you can define any macro in your configuration and use them in [Zookeeper](https://docs.altinity.com/operationsguide/clickhouse-zookeeper/zookeeper-installation/) paths
 
 ```xml
 ReplicatedMergeTree('/clickhouse/{cluster}/tables/my_table','{replica}')
@@ -182,7 +183,7 @@ The list of user setting which require server restart:
 
 See also `select * from system.settings where description ilike '%start%'`
 
-Also there are several 'long-running' user sessions which are almost never restarted and can keep the setting from the server start (it's DDLWorker, Kafka, and some other service things).
+Also there are several 'long-running' user sessions which are almost never restarted and can keep the setting from the server start (it's DDLWorker, [Kafka](https://altinity.com/blog/kafka-engine-the-story-continues), and some other service things).
 
 ## Dictionaries
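
Once overrides live in `config.d`/`users.d`, it is easy to verify what the merged configuration resolved to (an editorial sketch, not part of the diff):

```sql
-- Macros picked up from the merged configuration (usable in ReplicatedMergeTree paths).
SELECT * FROM system.macros;

-- User-level settings whose values differ from the defaults, i.e. were overridden somewhere.
SELECT name, value
FROM system.settings
WHERE changed;
```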

content/en/altinity-kb-setup-and-maintenance/suspiciously-many-broken-parts.md

Lines changed: 3 additions & 3 deletions

@@ -2,10 +2,10 @@
 title: "Suspiciously many broken parts"
 linkTitle: "Suspiciously many broken parts"
 description: >
-  Suspiciously many broken parts error during the server startup.
+  Debugging a common error message
 keywords:
   - clickhouse broken parts
-  - clickhouse too many parts
+  - clickhouse too many broken parts
 ---
 
 ## Symptom:
@@ -28,7 +28,7 @@ Why data could be corrupted?
 
 ## Action:
 
-1. If you are ok to accept the data loss: set up `force_restore_data` flag and clickhouse will move the parts to detached. Data loss is possible if the issue is a result of misconfiguration (i.e. someone accidentally has fixed xml configs with incorrect shard/replica macros, data will be moved to detached folder and can be recovered).
+1. If you are ok to accept the [data loss](/altinity-kb-setup-and-maintenance/recovery-after-complete-data-loss/): set up `force_restore_data` flag and clickhouse will move the parts to detached. Data loss is possible if the issue is a result of misconfiguration (i.e. someone accidentally has fixed xml configs with incorrect [shard/replica macros](https://altinity.com/webinarspage/deep-dive-on-clickhouse-sharding-and-replication), data will be moved to detached folder and can be recovered).
 
 ```bash
 sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data
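
After `force_restore_data`, the suspect parts end up under `detached/`; a hedged query to see what was set aside and why (an editorial sketch, not part of the diff):

```sql
-- Detached parts grouped by reason; 'broken*' reasons are the parts that
-- failed consistency checks at startup.
SELECT database, table, reason, count() AS parts
FROM system.detached_parts
GROUP BY database, table, reason
ORDER BY parts DESC;
```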
