I’ve created a user profile management service that exposes multiple endpoints, as requested. This project illustrates how microservices can be designed to handle a specific domain (in this case, user profiles) while maintaining flexibility, scalability, and separation of concerns. It follows key principles of microservice architecture:
- Independence: The service is independent, meaning it can be easily replaced or scaled horizontally by running multiple instances without affecting the overall system.
- Scalability: The system can be scaled in or out based on demand, allowing flexibility in handling high traffic.
- Stateless: The service maintains stateless interactions. Each request is independent and doesn't rely on a server maintaining previous request information.
- Service Communication: Services are designed to communicate asynchronously via RabbitMQ, showcasing how microservices can integrate for event-driven architectures.
Below are the procedures to run the application and see these concepts in action.
The service exposes several REST API endpoints that meet the specified requirements:
- Get Specific User by ID:
  - Endpoint: `GET localhost:8080/v1/api/users/{id}`
  - Fetches a specific user by their unique ID, including all associated information.
- Search Users by Date Range:
  - Endpoint: `GET localhost:8080/v1/api/users/search/by-date?startDate=2025-02-01&endDate=2025-02-28`
  - Returns a list of users created within the specified date range.
- Search Users by Profession:
  - Endpoint: `GET localhost:8080/v1/api/users/search/by-profession?profession=doctor`
  - Returns a list of users with a specific profession.
- Custom Endpoints for CRUD Operations:
  - `POST localhost:8080/v1/api/users` – Create a new user.
  - `PUT localhost:8080/v1/api/users` – Update an existing user.
  - `DELETE localhost:8080/v1/api/users/{id}` – Delete an existing user.
The API is exposed at `localhost:8080/swagger-ui/index.html`. Additionally, I’ve provided a postman_collection.json to easily test and interact with all the endpoints.
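The search endpoints above essentially filter the user collection by a predicate. As a conceptual, framework-free sketch of that filtering logic (the `User` record and sample data here are illustrative, not the project's actual model):

```java
import java.time.LocalDate;
import java.util.List;

public class UserSearchSketch {
    // Illustrative model; the real service's User entity may differ.
    record User(long id, String name, String profession, LocalDate createdAt) {}

    // Inclusive range, mirroring the startDate/endDate query parameters.
    static List<User> byDateRange(List<User> users, LocalDate start, LocalDate end) {
        return users.stream()
                .filter(u -> !u.createdAt().isBefore(start) && !u.createdAt().isAfter(end))
                .toList();
    }

    static List<User> byProfession(List<User> users, String profession) {
        return users.stream()
                .filter(u -> u.profession().equalsIgnoreCase(profession))
                .toList();
    }

    public static void main(String[] args) {
        List<User> users = List.of(
                new User(1, "Ana", "doctor", LocalDate.of(2025, 2, 10)),
                new User(2, "Ben", "engineer", LocalDate.of(2025, 1, 5)));
        // Only Ana was created in February 2025.
        System.out.println(byDateRange(users, LocalDate.of(2025, 2, 1), LocalDate.of(2025, 2, 28)).size());
        System.out.println(byProfession(users, "doctor").get(0).name());
    }
}
```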
- Asynchronous Event Handling:
  - When a new user is created, a message is sent to a RabbitMQ topic. Other services in the system can subscribe to this message for real-time processing, showcasing inter-service communication in an event-driven architecture.
  - If you are running the application with the container approach, you can visit `localhost:15672` (user: guest, pass: guest) and check the queue messages right after a new user is created.
  - Technologies used: Spring Boot, RabbitMQ, Spring Cloud Stream, and AOP.
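The publish/subscribe flow described above can be illustrated with a tiny in-memory queue. This is only a conceptual stand-in for the RabbitMQ topic and Spring Cloud Stream bindings, not the project's actual wiring:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class UserCreatedEventSketch {
    // Stand-in for a RabbitMQ topic; the real service publishes via Spring Cloud Stream.
    static final BlockingQueue<String> topic = new LinkedBlockingQueue<>();

    // Producer side: fired after a successful POST /v1/api/users.
    static void publishUserCreated(String userId) {
        topic.offer("user.created:" + userId);
    }

    public static void main(String[] args) throws InterruptedException {
        // Consumer side: another service subscribed to the topic processes the event.
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("received " + topic.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        publishUserCreated("42"); // hypothetical user id
        consumer.join();
    }
}
```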
- Data Validation:
  - The objects received by the endpoints, mainly those for creation and update, use Spring Boot validation annotations to ensure correct data formats.
- Data Layer Separation:
  - The code is organized into different layers: Controller, Service, and DAO. This improves maintainability by separating the concerns.
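The Controller → Service → DAO split can be sketched with thin interfaces. The names here are illustrative, not the project's actual classes:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

public class LayeringSketch {
    // DAO layer: data access only (the project backs this with a CSV file).
    interface UserDao { Optional<String> findById(long id); }

    // Service layer: business rules, delegating persistence to the DAO.
    static class UserService {
        private final UserDao dao;
        UserService(UserDao dao) { this.dao = dao; }
        String getUser(long id) {
            return dao.findById(id)
                    .orElseThrow(() -> new IllegalArgumentException("user not found"));
        }
    }

    // The Controller layer would translate HTTP requests into service calls;
    // here a plain main method plays that role.
    public static void main(String[] args) {
        Map<Long, String> store = new ConcurrentHashMap<>(Map.of(1L, "Ana"));
        UserService service = new UserService(id -> Optional.ofNullable(store.get(id)));
        System.out.println(service.getUser(1L));
    }
}
```

Because each layer depends only on an interface, the DAO can later be swapped from CSV to a database without touching the service or controller.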
- Unit Testing:
  - Unit tests have been implemented to ensure the business logic works correctly and that the service performs as expected.
- Environment Configuration:
  - The application currently runs with the basic default and dev profiles, but it would be beneficial to have separate profiles for testing and production to ensure proper configuration management across different environments.
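Such profiles could be added with per-environment property files following Spring Boot's naming convention. A hypothetical sketch (the property names are standard Spring Boot ones; the values are placeholders, not the project's real configuration):

```properties
# application-prod.properties — hypothetical production overrides
spring.rabbitmq.host=${RABBITMQ_HOST}
logging.level.root=WARN
```

The profile would then be activated at startup with `--spring.profiles.active=prod`.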
- CSV as a Data Source:
  - Using a CSV file as a data source is a temporary solution for this project (mainly because of the challenge's time constraints) and is not ideal for a production system. The lack of transaction support (ACID properties) in CSV files could lead to data consistency issues. A relational database (e.g., PostgreSQL) or a NoSQL database (e.g., MongoDB) would be more appropriate for persistent data.
  - I used OpenCSV to integrate with the CSV file, and while it handles concurrency, it is not ideal for high-volume or real-time data handling.
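To illustrate the row-to-object mapping the CSV layer performs, here is a framework-free sketch. The project itself uses OpenCSV, which additionally handles quoting and escaping that this naive split-based version does not; the column layout shown is an assumption:

```java
import java.util.List;

public class CsvParseSketch {
    // Illustrative record; the real CSV schema may differ.
    record User(long id, String name, String profession) {}

    // Naive comma split — unlike OpenCSV, this breaks on quoted fields
    // containing commas, which is one reason to use a real CSV library.
    static User parseLine(String line) {
        String[] cols = line.split(",");
        return new User(Long.parseLong(cols[0].trim()), cols[1].trim(), cols[2].trim());
    }

    public static void main(String[] args) {
        List<String> rows = List.of("1,Ana,doctor", "2,Ben,engineer");
        rows.stream().map(CsvParseSketch::parseLine)
                .forEach(u -> System.out.println(u.name()));
    }
}
```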
- Security:
  - The current system doesn't include any security mechanisms such as user authentication or authorization. This could be handled by Spring Security or delegated to another service in the system. Implementing security would be a very important next step to protect access to sensitive information.
- Monitoring and Observability:
  - For a production-ready service, it's crucial to integrate monitoring and observability tools such as Prometheus and Grafana. This would help monitor the service's health, collect metrics, and provide insight into the performance of the system.
- Fault Tolerance:
  - While the current setup is straightforward, implementing mechanisms such as retries, circuit breakers, and failover strategies would make the service more resilient in case of failures.
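As a minimal sketch of the retry idea (libraries such as Resilience4j layer circuit breakers and metrics on top of this; nothing here is in the project yet):

```java
import java.util.concurrent.Callable;

public class RetrySketch {
    // Retry a task up to `attempts` times with a linearly growing backoff.
    static <T> T retry(Callable<T> task, int attempts, long backoffMillis) throws Exception {
        for (int i = 1; ; i++) {
            try {
                return task.call();
            } catch (Exception e) {
                if (i >= attempts) throw e; // out of attempts: propagate the failure
                Thread.sleep(backoffMillis * i);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulated flaky dependency that succeeds on the third call.
        String result = retry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient failure");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```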
To run the service using Docker Compose, follow these steps:
- Install Docker Desktop: https://docs.docker.com/desktop/
- Ensure Java JDK 17 or higher is installed.
- Clone the project and navigate to the project folder.
- Build the Docker image by running `./build-docker-image.sh`.
- Go to the `docker-compose` folder and run `docker compose up -d` to start:
  - Three instances of the microservice.
  - One instance of NGINX to simulate an API Gateway.
  - One instance of RabbitMQ.
- You can scale the service up or down with Docker Compose using `docker compose up --scale magusers=5 -d` or `docker compose up --scale magusers=1 -d`.
- After scaling services up or down, you must reload NGINX or wait for the health checker to identify that servers have come online or gone offline. NGINX doesn't have service discovery or advanced features to recognize changes quickly.
- After a restart, you can check the URL `localhost:8080/v1/api/users/info`. This is a custom endpoint that shows which server instance you are connected to.
- The health check URL is `localhost:8080/actuator/health`.
- To stop the services, run `docker compose down`.
If you prefer to run the service without Docker, you can run it in standalone mode using your favorite IDE. In this case, you don't need RabbitMQ or the load balancer, but the same endpoints and Postman collection file will still work. You can also use the command `mvn spring-boot:run` to start the service locally.
This project provides a great opportunity to showcase the fundamental concepts of microservices, such as scalability, statelessness, communication between services, and independent deployment. It’s a simple yet effective demonstration of how to build and manage microservices.
The use of a CSV file as a data source was the most limiting aspect. While it was fine for the scope of this project, it’s not a scalable solution for production. A proper database solution would be necessary for durability, consistency, and performance.
The overall approach I followed seems solid for the task at hand, but I would have preferred a more detailed specification, especially for business logic. In the future, I would include a relational database or a more robust data storage solution. Additionally, I would integrate monitoring, observability, and security features. Using Spring Data JPA for database integration and adding versioning for APIs would also be improvements.
This project is designed to run on-premises, but in a cloud environment (e.g., AWS, GCP), many of the required components such as API Gateway, load balancing, and messaging services are readily available. If I had the opportunity, I would migrate the service to the cloud and replace certain components with native cloud services for better scalability, reliability, and cost-efficiency.