Cannot use k8s external IP address in the remote_servers.xml configuration? #1551
Comments
It's 2 different problems.
hi @UnamedRus, thank you for sharing the suggestion. I updated my yaml to configure the interserver_http_host parameter, but I tried several ways and replication still does not work. This is the new part of my yaml; the server log shows: 2024.11.04 10:43:21.593349 [ 764 ] {} HTTP-Session: 3bf65257-0169-4431-a753-17d4c7c79aad Logout, user_id: 78dfa8ab-fce4-cf99-3aa4-eee47478eda1
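For context, a minimal sketch of what an interserver_http_host override could look like as a config.d fragment (the hostname value here is an assumption; it must be an address the other replicas can actually reach, and interserver_http_host / interserver_http_port are standard ClickHouse server settings):

```xml
<!-- config.d/interserver.xml: sketch only; the hostname shown is a
     placeholder and must be reachable from the other replicas -->
<clickhouse>
    <interserver_http_host>replica-0-0.example.com</interserver_http_host>
    <interserver_http_port>9009</interserver_http_port>
</clickhouse>
```

Each replica advertises this host to its peers for part fetches, so if it resolves to an unreachable address, ReplicatedMergeTree replication stalls even when DDL works.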
@chaos827 , why do you need external IPs for replication? Are you sure it is routable at all? One thing to try is to use FQDN for replicas, maybe it will help, but what you are doing sounds strange in general
Yes, it is weird, but I want to set up ClickHouse cross-region (across AKS clusters). The pod IP is dynamic, so I have to create a LoadBalancer service for each replica.
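A per-replica LoadBalancer service of the kind described above could be sketched like this (the service name is hypothetical; the pod-name selector label is the one Kubernetes sets automatically on StatefulSet pods, and the pod name follows the clickhouse-operator naming seen in this thread):

```yaml
# Sketch: one LoadBalancer Service pinned to a single replica pod.
apiVersion: v1
kind: Service
metadata:
  name: chi-p1-testcluster-0-0-lb   # hypothetical name
  namespace: clickhouse1
spec:
  type: LoadBalancer
  selector:
    # label added by Kubernetes to every StatefulSet pod
    statefulset.kubernetes.io/pod-name: chi-p1-testcluster-0-0-0
  ports:
    - name: native        # ClickHouse native protocol
      port: 9000
      targetPort: 9000
    - name: interserver   # replica-to-replica part fetches
      port: 9009
      targetPort: 9009
```

Note that exposing port 9000 alone is not enough for replication: replicas fetch parts from each other over the interserver HTTP port (9009 by default), so that port must be reachable through the external address as well.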
I created a 2 × 2 ClickHouse cluster (2 shards, each shard with 2 replicas) using clickhouse-operator on Azure Kubernetes Service (AKS), and I created a LoadBalancer service for each replica (each pod has its own LoadBalancer service with a unique external IP). This works well.
After that I updated the remote_servers XML to use the external IPs instead of hostnames. With that change, distributed queries (e.g. CREATE DATABASE TestDB ON CLUSTER '{cluster}' ENGINE = Atomic) do not work on the pods configured with an external IP, and ReplicatedMergeTree does not sync data on those pods either, but the pods work well when I use the hostname or pod IP. Below is the remote_servers configuration from my yaml:
config.d/remote_servers.xml:
<remote_servers>
    <testcluster>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>chi-p1-testcluster-0-0-0.chi-p1-testcluster-0-0.clickhouse1.svc.cluster.local</host>
                <port>9000</port>
                <user>test</user>
                <password>test123</password>
                <secure>0</secure>
            </replica>
            <replica>
                <host>chi-p1-testcluster-0-1-0.chi-p1-testcluster-0-1.clickhouse1.svc.cluster.local</host>
                <port>9000</port>
                <user>test</user>
                <password>test123</password>
                <secure>0</secure>
            </replica>
        </shard>
    </testcluster>
</remote_servers>
I did this test because I want to set up ClickHouse across different data centers (replica1 in the primary region and replica2 in a geo-redundant region), so I have to split ClickHouse across two AKS clusters and use external IPs to communicate. I do not understand why my yaml does not work; does anyone know the root cause?
many thanks!
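One way to see what each node actually resolved from remote_servers, and whether replication is progressing, is to query the system tables; this is generic ClickHouse SQL, not specific to this setup:

```sql
-- How each shard/replica was parsed from remote_servers
SELECT cluster, shard_num, replica_num, host_name, host_address, port
FROM system.clusters;

-- Replication health per ReplicatedMergeTree table: a growing queue_size
-- or is_readonly = 1 points at broken interserver connectivity
SELECT database, table, is_readonly, absolute_delay, queue_size
FROM system.replicas;
```

If host_address in system.clusters does not match the external IP you configured, the config file is not being picked up at all, which separates a config-loading problem from a connectivity one.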