feat: Transaction Processing System Implementation with CQRS and EDA - Kenny Luque #481

Open · wants to merge 8 commits into `main`
84 changes: 84 additions & 0 deletions README.challenge.md
@@ -0,0 +1,84 @@
# Yape Code Challenge :rocket:

Our code challenge will let you amaze us with your Jedi coding skills :smile:.

Don't forget that the proper way to submit your work is to fork the repo and create a PR :wink: ... have fun !!

- [Problem](#problem)
- [Tech Stack](#tech-stack)
- [Send us your challenge](#send-us-your-challenge)

# Problem

Every time a financial transaction is created, it must be validated by our anti-fraud microservice, and then the same service sends a message back to update the transaction status.
For now, we have only three transaction statuses:

<ol>
<li>pending</li>
<li>approved</li>
<li>rejected</li>
</ol>

Every transaction with a value greater than 1000 should be rejected.

```mermaid
flowchart LR
Transaction -- Save Transaction with pending Status --> transactionDatabase[(Database)]
Transaction --Send transaction Created event--> Anti-Fraud
Anti-Fraud -- Send transaction Status Approved event--> Transaction
Anti-Fraud -- Send transaction Status Rejected event--> Transaction
Transaction -- Update transaction Status event--> transactionDatabase[(Database)]
```
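
For illustration only, here is a minimal TypeScript sketch of the rejection rule described above; the function and type names are hypothetical and not part of the challenge:

```typescript
// Status vocabulary taken from the list above.
type TransactionStatus = 'pending' | 'approved' | 'rejected';

const REJECTION_THRESHOLD = 1000;

// Any transaction whose value exceeds 1000 must be rejected; otherwise it can be approved.
function evaluateTransaction(value: number): TransactionStatus {
  return value > REJECTION_THRESHOLD ? 'rejected' : 'approved';
}
```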

# Tech Stack

<ol>
<li>Node. You can use any framework you want (e.g., NestJS with an ORM like TypeORM or Prisma)</li>
<li>Any database</li>
<li>Kafka</li>
</ol>

We do provide a `Dockerfile` to help you get started with a dev environment.

You must have two resources (a typed sketch of both payloads follows these examples):

1. Resource to create a transaction that must contain:

```json
{
  "accountExternalIdDebit": "Guid",
  "accountExternalIdCredit": "Guid",
  "tranferTypeId": 1,
  "value": 120
}
```

2. Resource to retrieve a transaction

```json
{
  "transactionExternalId": "Guid",
  "transactionType": {
    "name": ""
  },
  "transactionStatus": {
    "name": ""
  },
  "value": 120,
  "createdAt": "Date"
}
```
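
As a reference, the two payloads above can be described with TypeScript interfaces such as the following sketch (field names mirror the examples exactly, including `tranferTypeId` as written; the concrete types are assumptions):

```typescript
// Request body for creating a transaction (shape taken from the example above).
interface CreateTransactionRequest {
  accountExternalIdDebit: string;  // GUID
  accountExternalIdCredit: string; // GUID
  tranferTypeId: number;
  value: number;
}

// Response body when retrieving a transaction (shape taken from the example above).
interface TransactionResponse {
  transactionExternalId: string; // GUID
  transactionType: { name: string };
  transactionStatus: { name: string };
  value: number;
  createdAt: string; // Date, e.g. an ISO-8601 string
}
```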

## Optional

You can use any approach to store transaction data, but you should consider that we may deal with high-volume scenarios with a huge number of writes and reads on the same data at the same time. How would you tackle this requirement?

You can use GraphQL.

# Send us your challenge

When you finish your challenge, after forking the repository, you **must** open a pull request to our repository. There are no limitations on the implementation; you can follow the programming paradigm, modularization, and style that you feel are most appropriate.

If you have any questions, please let us know.


123 changes: 62 additions & 61 deletions README.md
@@ -1,82 +1,83 @@
# Transaction Microservices Project (CQRS/EDA)

This repository contains a sample microservices-based system for managing financial transactions, implementing CQRS and EDA patterns using NestJS, Kafka, Debezium, PostgreSQL, Redis, and GraphQL.

## Overview

The system is composed of:

* **Transaction Service:** Handles the creation and querying of transactions through a GraphQL API and an HTTP REST API. Uses PostgreSQL as the write database and Redis as the read cache.
* **AntiFraud Service:** Validates transactions based on simple rules, event-driven via Kafka (a minimal validation sketch follows this list).
* **Infrastructure:** PostgreSQL, Redis, Kafka, Zookeeper, Kafka Connect with Debezium (managed via Docker Compose).
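
As a rough illustration of the AntiFraud Service's event-driven flow, below is a hedged sketch using `kafkajs`. The topic names, message shape, and fallback values are assumptions based on this README and the service's `.env.example`, not the actual service code:

```typescript
import { Kafka } from 'kafkajs';

// Assumed topic names; the real services may use different ones.
const CREATED_TOPIC = 'transaction.created';
const STATUS_TOPIC = 'transaction.status.updated';

const kafka = new Kafka({
  clientId: process.env.KAFKA_CLIENT_ID ?? 'antifraud-service',
  brokers: [process.env.KAFKA_BROKER ?? 'kafka:29092'],
});

const consumer = kafka.consumer({
  groupId: process.env.KAFKA_CONSUMER_GROUP_ID ?? 'antifraud-consumer-group',
});
const producer = kafka.producer();

async function run(): Promise<void> {
  await Promise.all([consumer.connect(), producer.connect()]);
  await consumer.subscribe({ topic: CREATED_TOPIC });

  await consumer.run({
    eachMessage: async ({ message }) => {
      if (!message.value) return;
      // Assumed payload shape: { transactionExternalId: string, value: number, ... }
      const tx = JSON.parse(message.value.toString());
      const threshold = Number(process.env.ANTIFRAUD_THRESHOLD ?? 1000);
      const status = tx.value > threshold ? 'rejected' : 'approved';

      // Publish the validation result for the Transaction Service to consume.
      await producer.send({
        topic: STATUS_TOPIC,
        messages: [{ key: tx.transactionExternalId, value: JSON.stringify({ ...tx, status }) }],
      });
    },
  });
}

run().catch(console.error);
```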

### Handling High Volume and Concurrency

To address high-volume scenarios with concurrent reads and writes, this architecture implements the following key strategies:

* **CQRS (Command Query Responsibility Segregation) Pattern:**
  * **Writes (Commands):** Target a PostgreSQL database, optimized for transactional integrity (ACID).
  * **Reads (Queries):** Served exclusively from an in-memory Redis cache, optimized for very high speed and low latency.
  * This separation allows read and write workloads to be optimized and scaled independently, reducing contention.

* **Asynchronous and Event-Driven Processing:**
  * The Redis cache (read model) is updated asynchronously via events consumed from Kafka, originating from Debezium and the AntiFraud service (a minimal consumer sketch follows this list).
  * This decouples write/update operations from reads, improving responsiveness and resilience.
  * The trade-off is **eventual consistency** between the write and read models.

* **Independent Scalability:**
  * **Reads (Redis):** Can be scaled horizontally using Redis Cluster.
  * **Writes (PostgreSQL):** Can be scaled vertically or horizontally using techniques like partitioning or sharding.
  * **Event Processing (Kafka/Consumers):** Can be scaled by increasing Kafka topic partitions and the number of consumer microservice instances (Transaction Service, AntiFraud Service).
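
To make the read path concrete, here is a hedged sketch of a consumer that refreshes the Redis read model from Kafka events (using `kafkajs` and `ioredis`); the topic name, key layout, and payload shape are assumptions rather than the actual Transaction Service code:

```typescript
import { Kafka } from 'kafkajs';
import Redis from 'ioredis';

// Assumed topic carrying transaction updates (from Debezium and/or the AntiFraud service).
const STATUS_TOPIC = 'transaction.status.updated';

const kafka = new Kafka({ clientId: 'transaction-read-model', brokers: ['kafka:29092'] });
const consumer = kafka.consumer({ groupId: 'transaction-read-model-group' });
const redis = new Redis({ host: process.env.REDIS_HOST ?? 'redis', port: 6379 });

async function run(): Promise<void> {
  await consumer.connect();
  await consumer.subscribe({ topic: STATUS_TOPIC });

  await consumer.run({
    eachMessage: async ({ message }) => {
      if (!message.value) return;
      // Assumed payload: the full read view of the transaction.
      const tx = JSON.parse(message.value.toString());
      // Queries are then served from this key instead of hitting PostgreSQL.
      await redis.set(`transaction:${tx.transactionExternalId}`, JSON.stringify(tx));
    },
  });
}

run().catch(console.error);
```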

## Prerequisites

Make sure you have the following installed on your local machine:

* Git
* Docker ([https://docs.docker.com/get-docker/](https://docs.docker.com/get-docker/))
* Docker Compose ([https://docs.docker.com/compose/install/](https://docs.docker.com/compose/install/))
* Node.js (v20+ or the version specified in `.nvmrc` / `package.json`)
* pnpm (`npm install -g pnpm` or your preferred method)

## Architecture Diagram

Below is the architecture diagram for the system:

![Architecture Diagram](./assets/architecture-diagram.png)


## 🚀 Quick Start (Local Development)

Follow these steps to spin up the entire environment:

## 1. Clone the repository (replace the placeholders with your actual repository URL)

```bash
git clone <REPOSITORY_URL>
cd <REPOSITORY_NAME>
```

## 2. Configure Environment Variables

Manually copy or create the required `.env` files from the corresponding `.env.example` files:

- `transaction_service/.env`
- `antifraud_service/.env`
- `docker/.env`

Adjust settings inside these files if needed (ports, credentials, Kafka topics, etc.).

## 3. Navigate to the Docker directory

```bash
cd docker
```

## 4. Build images and start all services
```bash
docker compose build --no-cache && docker compose up
```

## 5. Wait for Kafka Connect to be ready (this can take from 30 seconds to a few minutes)

## 6. Register the Debezium connector

```bash
./scripts/init-debezium.sh
```

## 7. Verify Connectors

```bash
curl http://localhost:8083/connector-plugins | jq
```

## 8. Check status

```bash
docker ps
```
40 changes: 40 additions & 0 deletions antifraud_service/.dockerignore
@@ -0,0 +1,40 @@
# Dependencies
node_modules
npm-debug.log
yarn-debug.log
yarn-error.log

# Build output
build

# Environment files
.env
.env.*
!.env.example

# IDE files
.idea
.vscode
*.swp
*.swo

# OS files
.DS_Store
Thumbs.db

# Test coverage
coverage

# Logs
logs
*.log

# Temporary files
tmp
temp

# Docker specific
Dockerfile*
docker-compose*
.docker
.dockerignore
22 changes: 22 additions & 0 deletions antifraud_service/.env.example
@@ -0,0 +1,22 @@
# database
POSTGRES_HOST=postgres_db
POSTGRES_PORT=5432
POSTGRES_USER=user
POSTGRES_PASSWORD=password
POSTGRES_DB=transactions
DATABASE_URL=postgresql://user:password@postgres_db:5432/transactions

# Redis
REDIS_HOST=redis
REDIS_PORT=6379

# Node environment
NODE_ENV=development

ANTIFRAUD_SERVICE_PORT=3001
ANTIFRAUD_THRESHOLD=1000

# Kafka configuration
KAFKA_BROKER=kafka:29092
KAFKA_CLIENT_ID=antifraud-service-client-id
KAFKA_CONSUMER_GROUP_ID=antifraud-debezium-consumer-group
File renamed without changes.
1 change: 1 addition & 0 deletions antifraud_service/.nvmrc
Original file line number Diff line number Diff line change
@@ -0,0 +1 @@
20.18.1
4 changes: 4 additions & 0 deletions antifraud_service/.prettierrc
@@ -0,0 +1,4 @@
{
"singleQuote": true,
"trailingComma": "all"
}
25 changes: 25 additions & 0 deletions antifraud_service/Dockerfile
@@ -0,0 +1,25 @@
FROM node:20-alpine3.20 AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
RUN npm install -g pnpm
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile

FROM node:20-alpine3.20 AS builder
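# Build stage: copies the installed dependencies and compiles the app with `pnpm build`.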
RUN npm install -g pnpm
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN pnpm build

FROM node:20-alpine3.20 AS runner
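# Runtime stage: installs only production dependencies and runs the compiled output.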
RUN npm install -g pnpm
WORKDIR /usr/src/app

COPY package.json pnpm-lock.yaml ./
RUN pnpm install --prod

COPY --from=builder /app/dist ./dist

CMD [ "node","dist/main" ]