
feature/bull_mq_integration #2735

Open
wants to merge 18 commits into base: dev

Changes from 3 commits
1,388 changes: 1,037 additions & 351 deletions pnpm-lock.yaml

Large diffs are not rendered by default.

3 changes: 3 additions & 0 deletions services/workflows-service/.env.example
@@ -35,3 +35,6 @@ WEB_UI_SDK_URL=http://localhost:5202
#HASHING_KEY_SECRET="$2b$10$FovZTB91/QQ4Yu28nvL8e."
HASHING_KEY_SECRET_BASE64=JDJiJDEwJDNFeWtwWEs4QkdiczlRaWFwLkM4Vk8=
NOTION_API_KEY=secret
REDIS_HOST=localhost
Collaborator:
I think we would want something like USE_REDIS, to switch between the Redis and in-memory implementations.
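A minimal sketch of what that toggle could look like, assuming a hypothetical USE_REDIS flag and an in-memory fallback (the flag and InMemoryQueue are illustrative, not part of this PR):

import { Queue } from 'bullmq';

interface JobQueue {
  add(name: string, data: Record<string, unknown>): Promise<unknown>;
}

// Naive in-memory stand-in for local development; jobs are lost on restart.
class InMemoryQueue implements JobQueue {
  private readonly jobs: Array<{ name: string; data: Record<string, unknown> }> = [];

  async add(name: string, data: Record<string, unknown>) {
    this.jobs.push({ name, data });
    return { id: this.jobs.length };
  }
}

// Pick the backend once at startup based on the proposed USE_REDIS flag.
export const createWebhookQueue = (): JobQueue =>
  process.env.USE_REDIS === 'true'
    ? new Queue('webhook-queue', {
        connection: {
          host: process.env.REDIS_HOST,
          port: Number(process.env.REDIS_PORT),
          password: process.env.REDIS_PASSWORD,
        },
      })
    : new InMemoryQueue();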

REDIS_PORT=7381
REDIS_PASSWORD=password
Contributor:

⚠️ Potential issue

Security concern: Weak Redis password.

The REDIS_PASSWORD=password is a significant security risk. Even though this is an example file, it's crucial to:

  1. Use a strong, unique password for each environment.
  2. Never commit actual passwords to version control.
  3. Consider using a secret management system for production environments.

Replace the current line with:

-REDIS_PASSWORD=password
+REDIS_PASSWORD=<strong-unique-password>

Also, add a comment above this line:

# IMPORTANT: Replace with a strong, unique password. Never commit the actual password.
📝 Committable suggestion


Suggested change
REDIS_PASSWORD=password
# IMPORTANT: Replace with a strong, unique password. Never commit the actual password.
REDIS_PASSWORD=<strong-unique-password>

26 changes: 26 additions & 0 deletions services/workflows-service/docker-compose.redis.yml
Collaborator:

Why did you create a new docker-compose file instead of using the existing one?

@@ -0,0 +1,26 @@
version: '3.8'
services:
Collaborator:

Why is it located in a new Docker image? Our service has to use Redis in order to run...
You can add it as a dependency and start the workflow service once Redis is ready.

Collaborator:

Why can't we use the same docker-compose-db.yml?

We can use it like this:

version: '3'
services:
  db:
    image: sibedge/postgres-plv8:15.3-3.1.7
    ports:
      - ${DB_PORT}:5432
    environment:
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres15:/var/lib/postgresql/data
  redis:
    image: redis:alpine
    ports:
      - '${REDIS_PORT}:6379'
    volumes:
      - redis-data:/data
    command: >
      --requirepass ${REDIS_PASSWORD}
      --appendonly yes
    environment:
      - REDIS_PASSWORD=${REDIS_PASSWORD}
      - REDIS_PORT=${REDIS_PORT}
volumes:
  postgres15: ~
  redis-data:
    driver: local

To bring up only Redis, this line can be replaced with the following:

docker:redis: docker compose -f docker-compose.db.yml up -d redis --wait
docker:redis:down: docker compose -f docker-compose.db.yml down -v redis

  redis:
    image: redis:alpine
Collaborator @MatanYadaev (Sep 29, 2024):

Is it for local only? Because the Alpine image is less performant than the Debian image

Contributor Author:

Yes, it's local only.

    ports:
      - '${REDIS_PORT}:6379'
    volumes:
      - redis-data:/data
    command: >
      --requirepass ${REDIS_PASSWORD}
      --appendonly yes
    environment:
      - REDIS_PASSWORD=${REDIS_PASSWORD}
      - REDIS_PORT=${REDIS_PORT}
    networks:
      - app-network
volumes:
  redis-data:
    driver: local
    driver_opts:
      type: tmpfs
      device: tmpfs
      o: size=100m
Contributor @coderabbitai bot (Sep 29, 2024):

⚠️ Potential issue

Reconsider volume configuration for data persistence.

The current volume configuration uses a tmpfs (in-memory filesystem) with a 100MB limit. While this provides fast I/O, it has some potential drawbacks:

  1. Data volatility: All data will be lost when the container stops.
  2. Size limitation: 100MB might be restrictive depending on your data volume.

Consider the following alternatives:

  1. For true persistence, use a named volume without tmpfs:

    volumes:
      redis-data:
        driver: local
  2. If tmpfs is required for performance, consider increasing the size limit and be aware of the data loss risk:

    volumes:
      redis-data:
        driver: local
        driver_opts:
          type: tmpfs
          device: tmpfs
          o: size=1g  # Adjust size as needed
  3. For a balance of persistence and performance, consider using the local driver with a bind mount to an SSD on the host:

    volumes:
      redis-data:
        driver: local
        driver_opts:
          type: none
          o: bind
          device: /path/on/host/redis-data

Choose the option that best fits your persistence and performance requirements.

Collaborator:

Good comment. I think "local" is the choice here for local development.

Contributor:

@alonp99, glad you agree. I've noted that using driver: local is the choice for local development.

(_/)
(•ᴥ•)
/ 🍃


✏️ Learnings added
Learnt from: alonp99
PR: ballerine-io/ballerine#2735
File: services/workflows-service/docker-compose.redis.yml:17-23
Timestamp: 2024-09-30T07:58:41.803Z
Learning: For Redis volumes in local development, prefer using `driver: local` in Docker Compose configurations.


Collaborator:

Can you explain this configuration?

Contributor Author @Blokh (Sep 29, 2024):

It's the maximum size of the Redis cache that can be used on your local machine, backed by a tmpfs file system (it will reset after a restart).
This is not really an issue, as it is a queue system...
In any case, we don't want to keep a lot of information in our open-source system at our customers'.

networks:
app-network:
driver: bridge
9 changes: 8 additions & 1 deletion services/workflows-service/package.json
@@ -5,7 +5,7 @@
"description": "workflow-service",
"scripts": {
"spellcheck": "cspell \"*\"",
"setup": "npm run docker:db:down && npm run docker:db && wait-on tcp:5432 && npm run db:reset && npm run seed",
"setup": "npm run docker:db:down && npm run docker:db && wait-on tcp:5432 && npm run docker:redis:down && npm run docker:redis && npm run db:reset && npm run seed",
Collaborator @pratapalakshmi (Oct 15, 2024):

I think this is better

"setup": "cp .env.example .env && npm run docker:db:down && npm run docker:db && wait-on tcp:5432 && npm run docker:redis:down && npm run docker:redis && npm run db:reset && npm run seed",

"format": "prettier --write . '!**/*.{md,hbs}'",
"format:check": "prettier --check . '!**/*.{md,hbs}'",
"lint": "eslint . --fix",
@@ -30,6 +30,8 @@
"db:reset:dev:with-data": "npm run db:reset:dev && npm run db:data-migration:migrate && npm run db:data-sync",
"db:init": "npm run db:migrate-dev -- --name 'initial version' && npm run db:migrate-up seed",
"prisma:generate": "prisma generate",
"docker:redis": "docker compose -f docker-compose.redis.yml up -d --wait",
Collaborator @MatanYadaev (Sep 29, 2024):

If you run this command this way, the REDIS_PORT variable won't be loaded from the dotenv. Is it part of the requirements?

Contributor Author:

From what I checked, it worked.

"docker:redis:down": "docker compose -f docker-compose.redis.yml down --volumes",
"docker:db": "docker compose -f docker-compose.db.yml up -d --wait",
"docker:db:down": "docker compose -f docker-compose.db.yml down --volumes",
"docker:build": "docker build .",
@@ -50,8 +52,12 @@
"@ballerine/common": "0.9.33",
"@ballerine/workflow-core": "0.6.45",
"@ballerine/workflow-node-sdk": "0.6.45",
"@bull-board/api": "^6.0.0",
"@bull-board/express": "^6.0.0",
"@bull-board/nestjs": "^6.0.0",
"@faker-js/faker": "^7.6.0",
"@nestjs/axios": "^2.0.0",
"@nestjs/bullmq": "^10.2.1",
"@nestjs/common": "^9.3.12",
"@nestjs/config": "2.3.1",
"@nestjs/core": "^9.3.12",
@@ -78,6 +84,7 @@
"ballerine-nestjs-typebox": "3.0.2-next.11",
"base64-stream": "^1.0.0",
"bcrypt": "5.1.0",
"bullmq": "^5.13.2",
Collaborator:

You should also include a Redis client.
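For what it's worth, bullmq already pulls in ioredis as a dependency; if an explicit client is added (e.g. for a readiness probe), a minimal sketch might look like this (the isRedisReady helper is illustrative, not part of the PR):

import Redis from 'ioredis';

// Hypothetical standalone client, e.g. for a health/readiness check outside BullMQ.
const redis = new Redis({
  host: process.env.REDIS_HOST,
  port: Number(process.env.REDIS_PORT),
  password: process.env.REDIS_PASSWORD,
});

export const isRedisReady = async (): Promise<boolean> => {
  try {
    // PING returns 'PONG' when the server is reachable and authenticated.
    return (await redis.ping()) === 'PONG';
  } catch {
    return false;
  }
};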

Collaborator:

Add bull-arena, which is the UI component for Bull.

"class-transformer": "0.5.1",
"class-validator": "0.14.0",
"concat-stream": "^2.0.0",
5 changes: 4 additions & 1 deletion services/workflows-service/src/app.module.ts
@@ -11,7 +11,7 @@ import { ServeStaticOptionsService } from './serve-static-options.service';
import { EndUserModule } from './end-user/end-user.module';
import { BusinessModule } from './business/business.module';
import { StorageModule } from './storage/storage.module';
import { MulterModule } from '@nestjs/platform-express';
import { ExpressAdapter, MulterModule } from '@nestjs/platform-express';
import { EventEmitterModule } from '@nestjs/event-emitter';
import { FilterModule } from '@/filter/filter.module';
import { configs, env, serverEnvSchema } from '@/env';
@@ -48,6 +48,8 @@ import { hashKey } from './customer/api-key/utils';
import { RuleEngineModule } from './rule-engine/rule-engine.module';
import { NotionModule } from '@/notion/notion.module';
import { SecretsManagerModule } from '@/secrets-manager/secrets-manager.module';
import { BullModule } from '@nestjs/bullmq';
import { BullMqModule } from "@/bull-mq/bull-mq.module";

export const validate = async (config: Record<string, unknown>) => {
const zodEnvSchema = z
@@ -126,6 +128,7 @@ export const validate = async (config: Record<string, unknown>) => {
    RuleEngineModule,
    NotionModule,
    SecretsManagerModule,
    BullMqModule
  ],
  providers: [
    {
48 changes: 48 additions & 0 deletions services/workflows-service/src/bull-mq/bull-mq.module.ts
@@ -0,0 +1,48 @@
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { BullModule } from '@nestjs/bullmq';
import { BullBoardModule } from '@bull-board/nestjs';
import { ExpressAdapter } from '@bull-board/express';
import { BullAdapter } from '@bull-board/api/bullAdapter';
import { env } from '@/env';
import { WebhookService } from '@/bull-mq/webhook/webhook.service';
import { AppLoggerService } from '@/common/app-logger/app-logger.service';
import { WebhookProcessor } from '@/bull-mq/webhook/webhook.processor';
import { AppLoggerModule } from '@/common/app-logger/app-logger.module';

const QUEUE_NAMES = [
  {
    name: 'webhook-queue',
    config: {},
  },
];

@Module({
  imports: [
    AppLoggerModule,
    ConfigModule.forRoot({ isGlobal: true }),
    BullModule.forRootAsync({
      useFactory: () => ({
        connection: {
          password: env.REDIS_PASSWORD,
          host: env.REDIS_HOST,
          port: env.REDIS_PORT,
        },
      }),
    }),
    BullModule.registerQueue(...QUEUE_NAMES.map(({ name }) => ({ name }))),
    BullBoardModule.forRoot({
      route: '/queues',
      adapter: ExpressAdapter,
    }),
    ...QUEUE_NAMES.map(({ name }) =>
      BullBoardModule.forFeature({
        name,
        adapter: BullAdapter,
      }),
    ),
  ],
  providers: [WebhookProcessor, WebhookService, AppLoggerService],
  exports: [BullModule, WebhookService],
})
export class BullMqModule {}
67 changes: 67 additions & 0 deletions services/workflows-service/src/bull-mq/webhook/webhook.processor.ts
@@ -0,0 +1,67 @@
import { InjectQueue, Processor, OnQueueEvent } from '@nestjs/bullmq';
import { Job, Queue } from 'bullmq';
import axios, { AxiosRequestConfig, Method } from 'axios';
import { AppLoggerService } from '@/common/app-logger/app-logger.service';

export interface WebhookJobData {
  url: string;
  method: Method;
  headers?: Record<string, string>;
  body?: Record<string, unknown>;
  timeout?: number;
}

@Processor('webhook-queue')
export class WebhookProcessor {
  constructor(
    protected readonly logger: AppLoggerService,
    @InjectQueue('webhook-queue') private webhookQueue: Queue,
  ) {}

  async process(job: Job<WebhookJobData>) {
    this.logger.log(`Processing webhook job ${job.id}`);

    const { url, method, headers, body, timeout } = job.data;

    const config: AxiosRequestConfig = {
      url,
      method,
      headers,
      data: body,
      timeout: timeout || 5000, // Default timeout of 5 seconds
    };

    try {
      const response = await axios(config);

      return {
        status: response.status,
        statusText: response.statusText,
        headers: response.headers,
        data: response.data,
      };
    } catch (error) {
      if (axios.isAxiosError(error)) {
        throw new Error(`Webhook request failed: ${error.message}`);
      }

      throw error;
    }
  }

  @OnQueueEvent('completed')
  onCompleted(job: Job) {
    this.logger.log(`Webhook job ${job.id} completed successfully`);
  }

  @OnQueueEvent('failed')
  onFailed(job: Job, error: Error) {
    this.logger.error(`Webhook job ${job.id} failed: ${error.message}`);
  }

  async onModuleInit() {
    this.webhookQueue.on('cleaned', (jobs, type) => {
      this.logger.log(`${jobs.length} ${type} jobs have been cleaned from the webhook queue`);
    });
  }
}
13 changes: 13 additions & 0 deletions services/workflows-service/src/bull-mq/webhook/webhook.service.ts
@@ -0,0 +1,13 @@
import { Injectable } from '@nestjs/common';
import { InjectQueue } from '@nestjs/bullmq';
import { Queue } from 'bullmq';
import { WebhookJobData } from './webhook.processor'; // You might want to move this interface to a separate file

@Injectable()
export class WebhookService {
  constructor(@InjectQueue('webhook-queue') private webhookQueue: Queue) {}

  async addWebhookJob(data: WebhookJobData) {
    await this.webhookQueue.add('webhook', data);
  }
}
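For context, a minimal usage sketch from a consuming module; the AlertService and AlertModule below are illustrative only and not part of this PR:

import { Injectable, Module } from '@nestjs/common';
import { BullMqModule } from '@/bull-mq/bull-mq.module';
import { WebhookService } from '@/bull-mq/webhook/webhook.service';

// Hypothetical consumer: enqueues a webhook delivery instead of calling axios inline.
@Injectable()
export class AlertService {
  constructor(private readonly webhookService: WebhookService) {}

  async notify(url: string) {
    // The job is picked up asynchronously by WebhookProcessor on 'webhook-queue'.
    await this.webhookService.addWebhookJob({
      url,
      method: 'POST',
      body: { event: 'workflow.updated' },
      timeout: 10_000,
    });
  }
}

@Module({
  imports: [BullMqModule], // BullMqModule exports WebhookService
  providers: [AlertService],
})
export class AlertModule {}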
3 changes: 3 additions & 0 deletions services/workflows-service/src/env.ts
@@ -90,6 +90,9 @@ export const serverEnvSchema = {
IN_MEMORIES_SECRET_ACQUIRER_ID: z.string().optional(),
IN_MEMORIES_SECRET_PRIVATE_KEY: z.string().optional(),
IN_MEMORIES_SECRET_CONSUMER_KEY: z.string().optional(),
REDIS_HOST: z.string(),
Contributor:

🛠️ Refactor suggestion

Enhance the REDIS_HOST configuration

While adding REDIS_HOST is a good start, consider improving it based on previous suggestions:

  1. Add a description for better maintainability.
  2. Consider adding validation to ensure it's a valid hostname or IP address.

Here's a suggested implementation:

REDIS_HOST: z.string().describe('Redis server hostname or IP address'),

If you want to add stricter validation, you could use a regex pattern:

REDIS_HOST: z
  .string()
  .regex(/^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]*[A-Za-z0-9])$/)
  .describe('Redis server hostname or IP address'),

This regex pattern allows for valid hostnames and IP addresses.

REDIS_PASSWORD: z.string(),
Contributor:

🛠️ Refactor suggestion

Make REDIS_PASSWORD optional and add a description

The current implementation makes REDIS_PASSWORD mandatory, which might not be suitable for all Redis configurations. Consider the following improvements:

  1. Make the password optional.
  2. Add a description for better maintainability.

Here's a suggested implementation:

REDIS_PASSWORD: z.string().optional().describe('Redis server password (if required)'),

This change allows for Redis configurations without authentication while still supporting those that require a password.

REDIS_PORT: z.string().transform(value => Number(value)),
Contributor:

🛠️ Refactor suggestion

Enhance REDIS_PORT configuration with validation

The current implementation transforms the port to a number but lacks validation. Consider adding the following improvements:

  1. Add a description for better maintainability.
  2. Validate that the port is within a valid range (e.g., 1024-65535).
  3. Provide a default value if needed.

Here's a suggested implementation:

REDIS_PORT: z
  .string()
  .default('6379')
  .transform(value => {
    const port = Number(value);
    if (isNaN(port) || port < 1024 || port > 65535) {
      throw new Error('REDIS_PORT must be a number between 1024 and 65535');
    }
    return port;
  })
  .describe('Redis server port'),

This implementation adds a description, includes port number validation, and provides a default value of 6379 (the standard Redis port).

};

if (!process.env['ENVIRONMENT_NAME'] || process.env['ENVIRONMENT_NAME'] === 'local') {