
Commit 0e7c110

feat: rdkafka 2.3, upgrade deps (#2)
* feat: rdkafka 2.3, upgrade deps
* chore: add node 21
* chore: use custom pnpm store location
* chore: examples to build arm based kafka
1 parent bfa4cb2 commit 0e7c110

18 files changed (+2343 / −1724 lines)

.husky/commit-msg

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
 #!/bin/sh
 . "$(dirname "$0")/_/husky.sh"
 
-"`$(npm bin)/mdep bin commitlint`" --edit $1
+"`npm x -- mdep bin commitlint`" --edit $1

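The hook body is the only change here: npm 9 removed the `npm bin` command, so the commitlint wrapper is now resolved through npm's exec machinery instead of the old binary-directory lookup. A minimal sketch of the equivalence (not part of the commit; it assumes the `mdep` dev-dependency binary is installed):

```bash
# `npm x` is shorthand for `npm exec`; both run a locally installed package binary.
# The hook previously relied on `$(npm bin)`, which newer npm versions no longer provide.
npm x -- mdep bin commitlint      # what the updated hook runs
npm exec -- mdep bin commitlint   # equivalent long form
```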
.semaphore/semaphore.yml

Lines changed: 5 additions & 11 deletions
@@ -8,16 +8,16 @@ agent:
 global_job_config:
   prologue:
     commands:
-      - sem-version node 18
-      - curl -fsSL https://get.pnpm.io/install.sh | env PNPM_VERSION=7.25.1 sh -
+      - sem-version node 20
+      - curl -fsSL https://get.pnpm.io/install.sh | env PNPM_VERSION=8.10.2 sh -
       - source /home/semaphore/.bashrc
       - pnpm config set store-dir=~/.pnpm-store
       - checkout
       - git submodule init
       - git submodule update
       - cache restore node-$(checksum pnpm-lock.yaml)
       - pnpm i --frozen-lockfile --prefer-offline --ignore-scripts
-      - cache store node-$(checksum pnpm-lock.yaml) ~/.pnpm-store
+      - cache store node-$(checksum pnpm-lock.yaml) $(pnpm config get store-dir)
   env_vars:
     - name: BUILD_LIBRDKAFKA
       value: '0'
@@ -45,14 +45,8 @@ blocks:
   - name: pre-build & publish binaries
     matrix:
       - env_var: NODE_VER
-        values: ["16", "18", "19"]
+        values: ["18", "20", "21"]
       - env_var: platform
         values: ["-rdkafka", "-debian-rdkafka"]
     commands:
-      - cp ~/.env.aws-s3-credentials .env
-      - env IMAGE_TAG=${NODE_VER}${platform} UID=${UID} PNPM_STORE=$(pnpm config get store-dir) docker-compose up -d
-      - docker-compose exec tester pnpm i --frozen-lockfile --prefer-offline --ignore-scripts
-      - docker-compose exec tester pnpm binary:build
-      - docker-compose exec tester pnpm binary:package
-      - docker-compose exec tester pnpm binary:test
-      - docker-compose exec tester pnpm binary:publish
+      - ./ci/build-and-publish.sh

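The prologue now sets the pnpm store location once and the cache step reads it back, so `cache restore` and `cache store` always point at the directory pnpm is actually using. A rough sketch of that flow, using the same Semaphore cache CLI commands as the prologue (the resolved path is only an example):

```bash
# Configure the store once, then resolve it instead of hard-coding ~/.pnpm-store.
pnpm config set store-dir=~/.pnpm-store
STORE_DIR="$(pnpm config get store-dir)"            # e.g. /home/semaphore/.pnpm-store
cache restore "node-$(checksum pnpm-lock.yaml)"     # warm the store from the CI cache
pnpm i --frozen-lockfile --prefer-offline --ignore-scripts
cache store "node-$(checksum pnpm-lock.yaml)" "$STORE_DIR"
```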
CONTRIBUTING.md

Lines changed: 3 additions & 3 deletions
@@ -207,18 +207,18 @@ Steps to update:
    git checkout 063a9ae7a65cebdf1cc128da9815c05f91a2a996 # for version 1.8.2
    ```
 
+   If you get an error during that checkout command, double check that the submodule was initialized / cloned! You may need to run `git submodule update --init --recursive`
+
 1. Update [`config.d.ts`](https://github.com/Blizzard/node-rdkafka/blob/master/config.d.ts) and [`errors.d.ts`](https://github.com/Blizzard/node-rdkafka/blob/master/errors.d.ts) TypeScript definitions by running:
    ```bash
    node ci/librdkafka-defs-generator.js
    ```
    Note: This is run automatically during CI flows but it's good to run it during the version upgrade pull request.
 
-1. Run `npm install` to build with the new version and fix any build errors that occur.
+1. Run `npm install --lockfile-version 2` to build with the new version and fix any build errors that occur.
 
 1. Run unit tests: `npm run test`
 
-1. Run end to end tests: `npm run test:e2e`. This requires running kafka & zookeeper locally.
-
 1. Update the version numbers referenced in the [`README.md`](https://github.com/Blizzard/node-rdkafka/blob/master/README.md) file to the new version.
 
 ## Publishing new npm version

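Taken together, the steps touched by this hunk amount to roughly the following sequence. This is only an illustrative sketch: the SHA is the 1.8.2 example quoted in the surrounding text, and the `deps/librdkafka` path is assumed from the upstream node-rdkafka layout.

```bash
# Sketch of the librdkafka upgrade flow described above (illustrative only).
git submodule update --init --recursive                  # make sure the submodule is cloned
cd deps/librdkafka                                       # assumed submodule location
git checkout 063a9ae7a65cebdf1cc128da9815c05f91a2a996    # for version 1.8.2 (example SHA)
cd ../..
node ci/librdkafka-defs-generator.js                     # regenerate config.d.ts / errors.d.ts
npm install --lockfile-version 2                         # rebuild against the new librdkafka
npm run test                                             # unit tests
```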
README.md

Lines changed: 40 additions & 44 deletions
@@ -17,7 +17,7 @@ I am looking for *your* help to make this project even better! If you're interes
 
 The `node-rdkafka` library is a high-performance NodeJS client for [Apache Kafka](http://kafka.apache.org/) that wraps the native [librdkafka](https://github.com/edenhill/librdkafka) library. All the complexity of balancing writes across partitions and managing (possibly ever-changing) brokers should be encapsulated in the library.
 
-__This library currently uses `librdkafka` version `1.9.2`.__
+__This library currently uses `librdkafka` version `2.3.0`.__
 
 ## Reference Docs
 
@@ -60,11 +60,7 @@ Using Alpine Linux? Check out the [docs](https://github.com/Blizzard/node-rdkafk
 
 ### Windows
 
-<<<<<<< HEAD
-Windows build **is not** compiled from `librdkafka` source but it is rather linked against the appropriate version of [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary that gets downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.1.8.2.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL` that defaults to `https://globalcdn.nuget.org/packages/` when it's not set.
-=======
-Windows build **is not** compiled from `librdkafka` source but it is rather linked against the appropriate version of [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary that gets downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.1.9.2.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL` that defaults to `https://globalcdn.nuget.org/packages/` when it's not set.
->>>>>>> 52b40e99abc811b2c4be1d3e62dd021e4bb1f6d4
+Windows build **is not** compiled from `librdkafka` source but it is rather linked against the appropriate version of [NuGet librdkafka.redist](https://www.nuget.org/packages/librdkafka.redist/) static binary that gets downloaded from `https://globalcdn.nuget.org/packages/librdkafka.redist.2.3.0.nupkg` during installation. This download link can be changed using the environment variable `NODE_RDKAFKA_NUGET_BASE_URL` that defaults to `https://globalcdn.nuget.org/packages/` when it's not set.
 
 Requirements:
 * [node-gyp for Windows](https://github.com/nodejs/node-gyp#on-windows)
@@ -96,12 +92,12 @@ npm install node-rdkafka
 To use the module, you must `require` it.
 
 ```js
-var Kafka = require('node-rdkafka');
+const Kafka = require('node-rdkafka');
 ```
 
 ## Configuration
 
-You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v1.9.2/CONFIGURATION.md)
+You can pass many configuration options to `librdkafka`. A full list can be found in `librdkafka`'s [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.3.0/CONFIGURATION.md)
 
 Configuration keys that have the suffix `_cb` are designated as callbacks. Some
 of these keys are informational and you can choose to opt-in (for example, `dr_cb`). Others are callbacks designed to
@@ -136,25 +132,25 @@ You can also get the version of `librdkafka`
 const Kafka = require('node-rdkafka');
 console.log(Kafka.librdkafkaVersion);
 
-// #=> 1.9.2
+// #=> 2.3.0
 ```
 
 ## Sending Messages
 
 A `Producer` sends messages to Kafka. The `Producer` constructor takes a configuration object, as shown in the following example:
 
 ```js
-var producer = new Kafka.Producer({
+const producer = new Kafka.Producer({
   'metadata.broker.list': 'kafka-host1:9092,kafka-host2:9092'
 });
 ```
 
-A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v1.9.2/CONFIGURATION.md) file described previously.
+A `Producer` requires only `metadata.broker.list` (the Kafka brokers) to be created. The values in this list are separated by commas. For other configuration options, see the [Configuration.md](https://github.com/edenhill/librdkafka/blob/v2.3.0/CONFIGURATION.md) file described previously.
 
 The following example illustrates a list with several `librdkafka` options set.
 
 ```js
-var producer = new Kafka.Producer({
+const producer = new Kafka.Producer({
   'client.id': 'kafka',
   'metadata.broker.list': 'localhost:9092',
   'compression.codec': 'gzip',
@@ -175,14 +171,14 @@ You can easily use the `Producer` as a writable stream immediately after creatio
 ```js
 // Our producer with its Kafka brokers
 // This call returns a new writable stream to our topic 'topic-name'
-var stream = Kafka.Producer.createWriteStream({
+const stream = Kafka.Producer.createWriteStream({
   'metadata.broker.list': 'kafka-host1:9092,kafka-host2:9092'
 }, {}, {
   topic: 'topic-name'
 });
 
 // Writes a message to the stream
-var queuedSuccess = stream.write(Buffer.from('Awesome message'));
+const queuedSuccess = stream.write(Buffer.from('Awesome message'));
 
 if (queuedSuccess) {
   console.log('We queued our message!');
@@ -194,7 +190,7 @@ if (queuedSuccess) {
 
 // NOTE: MAKE SURE TO LISTEN TO THIS IF YOU WANT THE STREAM TO BE DURABLE
 // Otherwise, any error will bubble up as an uncaught exception.
-stream.on('error', function (err) {
+stream.on('error', (err) => {
   // Here's where we'll know if something went wrong sending to Kafka
   console.error('Error in our kafka stream');
   console.error(err);
@@ -209,7 +205,7 @@ The Standard API is more performant, particularly when handling high volumes of
 However, it requires more manual setup to use. The following example illustrates its use:
 
 ```js
-var producer = new Kafka.Producer({
+const producer = new Kafka.Producer({
   'metadata.broker.list': 'localhost:9092',
   'dr_cb': true
 });
@@ -218,7 +214,7 @@ var producer = new Kafka.Producer({
 producer.connect();
 
 // Wait for the ready event before proceeding
-producer.on('ready', function() {
+producer.on('ready', () => {
   try {
     producer.produce(
       // Topic to send the message to
@@ -243,7 +239,7 @@ producer.on('ready', function() {
 });
 
 // Any errors we encounter, including connection errors
-producer.on('event.error', function(err) {
+producer.on('event.error', (err) => {
   console.error('Error from producer');
   console.error(err);
 })
@@ -283,7 +279,7 @@ Some configuration properties that end in `_cb` indicate that an event should be
 The following example illustrates an event:
 
 ```js
-var producer = new Kafka.Producer({
+const producer = new Kafka.Producer({
   'client.id': 'my-client', // Specifies an identifier to use to help trace activity in Kafka
   'metadata.broker.list': 'localhost:9092', // Connect to a Kafka instance on localhost
   'dr_cb': true // Specifies that we want a delivery-report event to be generated
@@ -292,7 +288,7 @@ var producer = new Kafka.Producer({
 // Poll for events every 100 ms
 producer.setPollInterval(100);
 
-producer.on('delivery-report', function(err, report) {
+producer.on('delivery-report', (err, report) => {
   // Report of delivery statistics here:
   //
   console.log(report);
@@ -317,7 +313,7 @@ The following table describes types of events.
 The higher level producer is a variant of the producer which can propagate callbacks to you upon message delivery.
 
 ```js
-var producer = new Kafka.HighLevelProducer({
+const producer = new Kafka.HighLevelProducer({
   'metadata.broker.list': 'localhost:9092',
 });
 ```
@@ -334,7 +330,7 @@ producer.produce(topicName, null, Buffer.from('alliance4ever'), null, Date.now()
 Additionally you can add serializers to modify the value of a produce for a key or value before it is sent over to Kafka.
 
 ```js
-producer.setValueSerializer(function(value) {
+producer.setValueSerializer((value) => {
   return Buffer.from(JSON.stringify(value));
 });
 ```
@@ -346,7 +342,7 @@ Otherwise the behavior of the class should be exactly the same.
 To read messages from Kafka, you use a `KafkaConsumer`. You instantiate a `KafkaConsumer` object as follows:
 
 ```js
-var consumer = new Kafka.KafkaConsumer({
+const consumer = new Kafka.KafkaConsumer({
   'group.id': 'kafka',
   'metadata.broker.list': 'localhost:9092',
 }, {});
@@ -361,10 +357,10 @@ The `group.id` and `metadata.broker.list` properties are required for a consumer
 Rebalancing is managed internally by `librdkafka` by default. If you would like to override this functionality, you may provide your own logic as a rebalance callback.
 
 ```js
-var consumer = new Kafka.KafkaConsumer({
+const consumer = new Kafka.KafkaConsumer({
   'group.id': 'kafka',
   'metadata.broker.list': 'localhost:9092',
-  'rebalance_cb': function(err, assignment) {
+  'rebalance_cb': (err, assignment) => {
 
     if (err.code === Kafka.CODES.ERRORS.ERR__ASSIGN_PARTITIONS) {
       // Note: this can throw when you are disconnected. Take care and wrap it in
@@ -389,10 +385,10 @@ var consumer = new Kafka.KafkaConsumer({
 When you commit in `node-rdkafka`, the standard way is to queue the commit request up with the next `librdkafka` request to the broker. When doing this, there isn't a way to know the result of the commit. Luckily there is another callback you can listen to to get this information
 
 ```js
-var consumer = new Kafka.KafkaConsumer({
+const consumer = new Kafka.KafkaConsumer({
   'group.id': 'kafka',
   'metadata.broker.list': 'localhost:9092',
-  'offset_commit_cb': function(err, topicPartitions) {
+  'offset_commit_cb': (err, topicPartitions) => {
 
     if (err) {
       // There was an error committing
@@ -430,11 +426,11 @@ The stream API is the easiest way to consume messages. The following example ill
 
 ```js
 // Read from the librdtesting-01 topic... note that this creates a new stream on each call!
-var stream = KafkaConsumer.createReadStream(globalConfig, topicConfig, {
+const stream = KafkaConsumer.createReadStream(globalConfig, topicConfig, {
   topics: ['librdtesting-01']
 });
 
-stream.on('data', function(message) {
+stream.on('data', (message) => {
   console.log('Got message');
   console.log(message.value.toString());
 });
@@ -459,15 +455,15 @@ The following example illustrates flowing mode:
 consumer.connect();
 
 consumer
-  .on('ready', function() {
+  .on('ready', () => {
     consumer.subscribe(['librdtesting-01']);
 
     // Consume from the librdtesting-01 topic. This is what determines
     // the mode we are running in. By not specifying a callback (or specifying
     // only a callback) we get messages as soon as they are available.
     consumer.consume();
   })
-  .on('data', function(data) {
+  .on('data', (data) => {
     // Output the actual message contents
     console.log(data.value.toString());
   });
@@ -478,17 +474,17 @@ The following example illustrates non-flowing mode:
 consumer.connect();
 
 consumer
-  .on('ready', function() {
+  .on('ready', () => {
     // Subscribe to the librdtesting-01 topic
     // This makes subsequent consumes read from that topic.
     consumer.subscribe(['librdtesting-01']);
 
     // Read one message every 1000 milliseconds
-    setInterval(function() {
+    setInterval(() => {
       consumer.consume(1);
     }, 1000);
   })
-  .on('data', function(data) {
+  .on('data', (data) => {
     console.log('Message found! Contents below.');
     console.log(data.value.toString());
   });
@@ -528,15 +524,15 @@ The following table lists events for this API.
 Sometimes you find yourself in the situation where you need to know the latest (and earliest) offset for one of your topics. Connected producers and consumers both allow you to query for these through `queryWaterMarkOffsets` as follows:
 
 ```js
-var timeout = 5000, partition = 0;
-consumer.queryWatermarkOffsets('my-topic', partition, timeout, function(err, offsets) {
-  var high = offsets.highOffset;
-  var low = offsets.lowOffset;
+const timeout = 5000, partition = 0;
+consumer.queryWatermarkOffsets('my-topic', partition, timeout, (err, offsets) => {
+  const high = offsets.highOffset;
+  const low = offsets.lowOffset;
 });
 
-producer.queryWatermarkOffsets('my-topic', partition, timeout, function(err, offsets) {
-  var high = offsets.highOffset;
-  var low = offsets.lowOffset;
+producer.queryWatermarkOffsets('my-topic', partition, timeout, (err, offsets) => {
+  const high = offsets.highOffset;
+  const low = offsets.lowOffset;
 });
 
 An error will be returned if the client was not connected or the request timed out within the specified interval.
@@ -582,12 +578,12 @@ When fetching metadata for a specific topic, if a topic reference does not exist
 Please see the documentation on `Client.getMetadata` if you want to set configuration parameters, e.g. `acks`, on a topic to produce messages to.
 
 ```js
-var opts = {
+const opts = {
   topic: 'librdtesting-01',
   timeout: 10000
 };
 
-producer.getMetadata(opts, function(err, metadata) {
+producer.getMetadata(opts, (err, metadata) => {
   if (err) {
     console.error('Error getting metadata');
     console.error(err);
@@ -620,7 +616,7 @@ client.createTopic({
   topic: topicName,
   num_partitions: 1,
   replication_factor: 1
-}, function(err) {
+}, (err) => {
   // Done!
 });
 ```

ci/build-and-publish.sh

Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
+#!/usr/bin/env bash
+
+set -ex
+
+# To prebuild images for arm platform run these on your local machine
+# this would cover everything we need (muslc / libc + 3 node versions)
+# Example: NODE_VER=18 platform=-rdkafka ./ci/build-and-publish.sh
+# Example: NODE_VER=20 platform=-rdkafka ./ci/build-and-publish.sh
+# Example: NODE_VER=21 platform=-rdkafka ./ci/build-and-publish.sh
+# Example: NODE_VER=18 platform=-debian-rdkafka ./ci/build-and-publish.sh
+# Example: NODE_VER=20 platform=-debian-rdkafka ./ci/build-and-publish.sh
+# Example: NODE_VER=21 platform=-debian-rdkafka ./ci/build-and-publish.sh
+
+cp ~/.env.aws-s3-credentials .env
+env IMAGE_TAG=${NODE_VER}${platform} UID=${UID} PNPM_STORE=$(pnpm config get store-dir) docker-compose up -d
+docker-compose exec tester pnpm i --frozen-lockfile --prefer-offline --ignore-scripts
+docker-compose exec tester pnpm binary:build
+docker-compose exec tester pnpm binary:package
+docker-compose exec tester pnpm binary:test
+docker-compose exec tester pnpm binary:publish
