Merge conflicts from 2.2.0
subkanthi committed May 27, 2024
2 parents b2ad682 + 53ac6bc commit 0ed0f50
Showing 89 changed files with 4,364 additions and 543 deletions.
68 changes: 67 additions & 1 deletion .github/workflows/testflows-sink-connector-lightweight.yml
@@ -93,7 +93,7 @@ jobs:

- name: Upload artifacts to Altinity Test Reports S3 bucket
if: ${{ github.event.pull_request.head.repo.full_name != 'Altinity/clickhouse-sink-connector' && github.event_name != 'workflow_dispatch' }}
working-directory: sink-connector/tests/integration/logs
working-directory: sink-connector-lightweight/tests/integration/logs
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@@ -109,3 +109,69 @@ jobs:
sink-connector-lightweight/tests/integration/env/auto/configs/*.yml
if-no-files-found: error
retention-days: 60
testflows-lightweight-replicated:
runs-on: [self-hosted, on-demand, type-cpx51, image-x86-app-docker-ce]

steps:
- uses: actions/checkout@v2

- uses: actions/download-artifact@v3
if: ${{ github.event.pull_request.head.repo.full_name != 'Altinity/clickhouse-sink-connector' && github.event_name != 'workflow_dispatch' }}
with:
name: clickhouse-sink-connector_${{ github.event.number }}-${{ github.sha }}-lt.tar.gz

- name: Load Docker image
if: ${{ github.event.pull_request.head.repo.full_name != 'Altinity/clickhouse-sink-connector' && github.event_name != 'workflow_dispatch' }}
run: |
docker load < clickhouse-sink-connector_${{ github.event.number }}-${{ github.sha }}-lt.tar.gz
docker image ls
- name: Runner ssh command
working-directory: sink-connector/tests/integration
run: echo "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@$(hostname -I | cut -d ' ' -f 1)"

- name: Install all dependencies
working-directory: sink-connector-lightweight/tests/integration
run: pip3 install -r requirements.txt

- name: Get current date
id: date
run: echo "date=$(date +'%Y-%m-%d_%H%M%S')" >> $GITHUB_OUTPUT

- name: Add ~./local/bin to the PATH
if: always()
working-directory: sink-connector-lightweight/tests/integration
run: echo ~/.local/bin >> $GITHUB_PATH

- name: Run testflows tests
working-directory: sink-connector-lightweight/tests/integration
run: python3 -u regression.py --only "/mysql to clickhouse replication/auto replicated table creation/${{ inputs.extra_args != '' && inputs.extra_args || '*' }}" --clickhouse-binary-path="${{inputs.package}}" --test-to-end -o classic --collect-service-logs --attr project="${GITHUB_REPOSITORY}" project.id="$GITHUB_RUN_NUMBER" user.name="$GITHUB_ACTOR" github_actions_run="$GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID" sink_version="registry.gitlab.com/altinity-public/container-images/clickhouse_debezium_embedded:latest" s3_url="https://altinity-test-reports.s3.amazonaws.com/index.html#altinity-sink-connector/testflows/${{ steps.date.outputs.date }}_${{github.run.number}}/" --log logs/raw.log

- name: Create tfs results report
if: always()
working-directory: sink-connector-lightweight/tests/integration/logs
run: cat raw.log | tfs report results | tfs document convert > report.html

- name: Create tfs coverage report
if: always()
working-directory: sink-connector-lightweight/tests/integration/logs
run: cat raw.log | tfs report coverage ../requirements/requirements.py | tfs document convert > coverage.html

- name: Upload artifacts to Altinity Test Reports S3 bucket
if: ${{ github.event.pull_request.head.repo.full_name != 'Altinity/clickhouse-sink-connector' && github.event_name != 'workflow_dispatch' }}
working-directory: sink-connector-lightweight/tests/integration/logs
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: 'eu-west-2'
run: aws s3 cp . s3://altinity-test-reports/altinity-sink-connector/testflows/${{ steps.date.outputs.date }}_sink_lw/ --recursive --exclude "*" --include "*.log" --include "*.html"

- uses: actions/upload-artifact@v3
if: always()
with:
name: testflows-sink-connector-lightweight-replicated-artefacts
path: |
sink-connector-lightweight/tests/integration/logs/*.log
sink-connector-lightweight/tests/integration/env/auto_replicated/configs/*.yml
if-no-files-found: error
retention-days: 60
1 change: 1 addition & 0 deletions README.md
@@ -62,6 +62,7 @@ First two are good tutorials on MySQL and PostgreSQL respectively.

### Development

* [Development](doc/development.md)
* [Testing](doc/TESTING.md)

## Roadmap
24 changes: 15 additions & 9 deletions doc/Monitoring.md
@@ -76,24 +76,30 @@ record_insert_seq:

```select event_time, database, table, rows, duration_ms, size_in_bytes from system.part_log where table='table' and event_type='NewPart' and event_time > now() - interval 30 minute and database='db';```


## Sink Connector (Kafka) monitoring

Sink Connector Config
OpenJDK 11.0.14.1

-Xms256M, -Xmx2G,

## Grafana Dashboard
JMX metrics of the sink connector are exposed through a dedicated port.

The JMX_exporter docker image scrapes the JMX metrics from the sink connector.
The metrics can be read through the following URL:
http://localhost:9072/metrics
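A quick way to confirm the exporter is reachable (a sketch, assuming the default port above and a stack started via docker-compose):
```bash
# Print the first few exported metrics in Prometheus text format
curl -s http://localhost:9072/metrics | head -n 20
```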

A Grafana dashboard is included to view the JMX metrics.
The docker-compose setup launches the Grafana application, which can be accessed at **http://localhost:3000**.
The default username/password is `admin/admin`.
![](img/Grafana_dashboard.png)
![](img/Grafana_dashboard_2.png)


**Memory**

Sink Connector Config
OpenJDK 11.0.14.1

-Xms256M, -Xmx2G,



Throughput
**Throughput**
Increase the `fetch.min.bytes` property to increase the size of each batch of messages
consumed \
[1] https://strimzi.io/blog/2021/01/07/consumer-tuning/
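For the Kafka-based sink connector this is a consumer-level setting. A minimal sketch of how it might be overridden in the Kafka Connect worker properties; the `consumer.` prefix is the standard Kafka Connect consumer-override mechanism, and the values are illustrative assumptions rather than settings shipped with this repository:
```properties
# Illustrative consumer overrides in the Kafka Connect worker properties
consumer.fetch.min.bytes=524288
consumer.fetch.max.wait.ms=500
```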
36 changes: 36 additions & 0 deletions doc/development.md
@@ -0,0 +1,36 @@
### Build the Sink Connector from sources

Requirements
- Java JDK 17 (https://openjdk.java.net/projects/jdk/17/)
- Maven (mvn) (https://maven.apache.org/download.cgi)
- Docker and Docker Compose

Install JDK (on macOS):
```
brew install openjdk@17
export JAVA_HOME=/opt/homebrew/Cellar/openjdk@17/17.0.11/libexec/openjdk.jdk/Contents/Home/
mvn -v
# verify that OpenJDK 17 is actually being used, then continue with the steps below
```

1. Clone the ClickHouse Sink connector repository:
```bash
git clone [email protected]:Altinity/clickhouse-sink-connector.git
```

2. Build the ClickHouse Sink Connector library:
This builds the dependency required by the lightweight sink connector (`<sink-connector-library-version>0.0.8</sink-connector-library-version>`).

```bash
cd sink-connector
mvn install -DskipTests=true
```

3. Build the ClickHouse Lightweight connector:
```bash
cd ../sink-connector-lightweight
mvn install -DskipTests=true
```

The JAR file will be created in the `target` directory.
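As a quick smoke test, the resulting JAR can be started against a YAML configuration. This is a sketch: the artifact name and `config.yml` path are assumptions, so adjust them to match your actual build output:
```bash
# Hypothetical invocation: substitute the actual JAR name produced in target/
java -jar target/clickhouse-debezium-embedded-<version>.jar config.yml
```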
42 changes: 42 additions & 0 deletions release-notes/2.1.0.md
@@ -0,0 +1,42 @@
## What's Changed

## Breaking Changes
The configuration `clickhouse.server.database` is now deprecated with the addition of multiple-database support.
By default, the source MySQL/PostgreSQL database name is used as the ClickHouse database name.
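For illustration only (the key name is taken from the deprecation note above; the surrounding layout is an assumed minimal YAML fragment, not an excerpt from a shipped config):
```yaml
# Before 2.1.0: every replicated table landed in one fixed ClickHouse database
clickhouse.server.database: "analytics"

# From 2.1.0: omit the setting; a source database named `orders` in MySQL/PostgreSQL
# is replicated into a ClickHouse database that is also named `orders`.
```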

## Changes
* Release 2.0.2 by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/510
* Added release notes for 2.0.2 by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/527
* add 1 second delay after query execution by @Selfeer in https://github.com/Altinity/clickhouse-sink-connector/pull/537
* Update README.md by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/539
* Update Monitoring.md by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/541
* Change index_granularity to 8192 instead of 8198. by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/534
* Refactor TestFlows tests related to Lightweight by @Selfeer in https://github.com/Altinity/clickhouse-sink-connector/pull/543
* Update config.yml to include database.server.id by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/544
* Update Troubleshooting.md by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/545
* Update Monitoring.md to include insert duration query and part log query by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/546
* Removed references to deduplication.policy in kafka configuration by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/547
* Use sequence number + timestamp in non-gtid mode for version column. by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/550
* Added logic to support multiple databases by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/535
* 523 handle scenario when records could be inserted with the same timestampnon gtid mode by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/552
* [528] Added logic to create view for replica_source_info table by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/549
* Replaced slf4j calls with log4j2 api calls by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/553
* Can't load table from Postgres to Clickhouse containing nullable numeric column by @ZlobnyiSerg in https://github.com/Altinity/clickhouse-sink-connector/pull/529
* Kafka fixes for multiple database. by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/555
* Added integration test to perform updates on PK to verify incrementin… by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/554
* Enable postgres tests by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/556
* Grafana - Fix prometheus targets by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/558
* Fixed logic of creating sequence number based on debezium timestamp, … by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/557
* Removed excessive logging statements by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/565
* Changed CREATE VIEW to CREATE VIEW IF NOT EXISTS by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/567
* Fix alter drop column by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/560
* Force RMT to old version for Integration tests by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/571
* Changed from ts_ms to debezium ts_ms for adding sequence numbers by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/569
* Fixed alter table change column not null DDL query by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/573
* Changes to make sure the threads are exited when the CLI stop command… by @subkanthi in https://github.com/Altinity/clickhouse-sink-connector/pull/525
* remove old broken tests by @Selfeer in https://github.com/Altinity/clickhouse-sink-connector/pull/585

## New Contributors
* @ZlobnyiSerg made their first contribution in https://github.com/Altinity/clickhouse-sink-connector/pull/529

**Full Changelog**: https://github.com/Altinity/clickhouse-sink-connector/compare/2.0.2...2.1.0
3 changes: 2 additions & 1 deletion sink-connector-lightweight/dependency-reduced-pom.xml
@@ -72,13 +72,15 @@
</includes>
<properties>
<property>
<surefire.test.runOrder>filesystem</surefire.test.runOrder>
<name>listener</name>
<value>com.altinity.clickhouse.debezium.embedded.FailFastListener</value>
</property>
</properties>
<useUnlimitedThreads>true</useUnlimitedThreads>
<perCoreThreadCount>true</perCoreThreadCount>
<useSystemClassLoader>true</useSystemClassLoader>
<runOrder>${surefire.test.runOrder}</runOrder>
</configuration>
</plugin>
<plugin>
@@ -297,7 +299,6 @@
<quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id>
<version.testcontainers>1.19.1</version.testcontainers>
<surefire-plugin.version>3.0.0-M7</surefire-plugin.version>
<apache.httpclient.version>5.2.1</apache.httpclient.version>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<sink-connector-library-version>0.0.8</sink-connector-library-version>
<version.junit>5.9.1</version.junit>
2 changes: 1 addition & 1 deletion sink-connector-lightweight/pom.xml
@@ -13,7 +13,7 @@
<maven.compiler.target>17</maven.compiler.target>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<version.debezium>2.7.0.Alpha1</version.debezium>
<version.debezium>2.5.0.Beta1</version.debezium>
<version.junit>5.9.1</version.junit>
<version.testcontainers>1.19.1</version.testcontainers>
<version.checkstyle.plugin>3.1.1</version.checkstyle.plugin>
Original file line number Diff line number Diff line change
@@ -1288,36 +1288,6 @@ public void exitPartitionFunctionList(MySqlParser.PartitionFunctionListContext p

}

@Override
public void enterPartitionSystemVersion(MySqlParser.PartitionSystemVersionContext partitionSystemVersionContext) {

}

@Override
public void exitPartitionSystemVersion(MySqlParser.PartitionSystemVersionContext partitionSystemVersionContext) {

}

@Override
public void enterPartitionSystemVersionDefinitions(MySqlParser.PartitionSystemVersionDefinitionsContext partitionSystemVersionDefinitionsContext) {

}

@Override
public void exitPartitionSystemVersionDefinitions(MySqlParser.PartitionSystemVersionDefinitionsContext partitionSystemVersionDefinitionsContext) {

}

@Override
public void enterPartitionSystemVersionDefinition(MySqlParser.PartitionSystemVersionDefinitionContext partitionSystemVersionDefinitionContext) {

}

@Override
public void exitPartitionSystemVersionDefinition(MySqlParser.PartitionSystemVersionDefinitionContext partitionSystemVersionDefinitionContext) {

}

@Override
public void enterSubPartitionFunctionHash(MySqlParser.SubPartitionFunctionHashContext subPartitionFunctionHashContext) {

Original file line number Diff line number Diff line change
@@ -4,7 +4,7 @@
import com.clickhouse.data.ClickHouseDataType;
import io.debezium.antlr.DataTypeResolver;
import io.debezium.config.CommonConnectorConfig;
import io.debezium.connector.mysql.jdbc.MySqlValueConverters;
import io.debezium.connector.mysql.MySqlValueConverters;
import io.debezium.ddl.parser.mysql.generated.MySqlParser;
import io.debezium.jdbc.JdbcValueConverters;
import io.debezium.jdbc.TemporalPrecisionMode;
@@ -34,8 +34,7 @@ public static ClickHouseDataType convert(String columnName, MySqlParser.DataType
JdbcValueConverters.DecimalMode.PRECISE,
TemporalPrecisionMode.ADAPTIVE,
JdbcValueConverters.BigIntUnsignedMode.LONG,
CommonConnectorConfig.BinaryHandlingMode.BYTES,
x ->x, CommonConnectorConfig.EventConvertingFailureHandlingMode.WARN);
CommonConnectorConfig.BinaryHandlingMode.BYTES);


DataType dataType = initializeDataTypeResolver().resolveDataType(columnDefChild);
@@ -51,8 +50,7 @@ public static String convertToString(String columnName, int scale, int precision
JdbcValueConverters.DecimalMode.PRECISE,
TemporalPrecisionMode.ADAPTIVE,
JdbcValueConverters.BigIntUnsignedMode.LONG,
CommonConnectorConfig.BinaryHandlingMode.BYTES,
x ->x, CommonConnectorConfig.EventConvertingFailureHandlingMode.WARN
CommonConnectorConfig.BinaryHandlingMode.BYTES
);


Original file line number Diff line number Diff line change
@@ -0,0 +1,6 @@
<clickhouse>
<timezone>Europe/Moscow</timezone>
<listen_host replace="replace">::</listen_host>
<path>/var/lib/clickhouse/</path>
<tmp_path>/var/lib/clickhouse/tmp/</tmp_path>
</clickhouse>
Original file line number Diff line number Diff line change
@@ -0,0 +1,7 @@
<?xml version="1.0"?>
<clickhouse>
<https_port>8443</https_port>
<tcp_port_secure>9440</tcp_port_secure>
<postgresql_port>9005</postgresql_port>
<mysql_port>9004</mysql_port>
</clickhouse>
Original file line number Diff line number Diff line change
@@ -12,6 +12,10 @@
<replicated_cluster>
<shard>
<internal_replication>false</internal_replication>
<replica>
<host>clickhouse</host>
<port>9000</port>
</replica>
<replica>
<host>clickhouse1</host>
<port>9000</port>
@@ -27,6 +31,12 @@
</shard>
</replicated_cluster>
<sharded_cluster>
<shard>
<replica>
<host>clickhouse</host>
<port>9000</port>
</replica>
</shard>
<shard>
<replica>
<host>clickhouse1</host>
Original file line number Diff line number Diff line change
@@ -0,0 +1,18 @@
<clickhouse>
<openSSL>
<server>
<certificateFile>/etc/clickhouse-server/ssl/server.crt</certificateFile>
<privateKeyFile>/etc/clickhouse-server/ssl/server.key</privateKeyFile>
<dhParamsFile>/etc/clickhouse-server/ssl/dhparam.pem</dhParamsFile>
<verificationMode>none</verificationMode>
<cacheSessions>true</cacheSessions>
</server>
<client>
<cacheSessions>true</cacheSessions>
<verificationMode>none</verificationMode>
<invalidCertificateHandler>
<name>AcceptCertificateHandler</name>
</invalidCertificateHandler>
</client>
</openSSL>
</clickhouse>
