A RESTful API for processing healthcare claims, built with Node.js, Express.js, and PostgreSQL. This API features comprehensive monitoring, logging, and alerting mechanisms.
## Features

- Claim Processing: Submit and retrieve healthcare claims
- JWT Authentication: Secure API endpoints
- Robust Logging: Structured logs for debugging and auditing
- Metrics Monitoring: Prometheus integration for real-time system metrics
- Visualization: Grafana dashboards for monitoring system health
- Docker Support: Containerized application and database
- Rate Limiting: Protection against API abuse
- Security Headers: Enhanced API security
## Tech Stack

- Backend: Node.js, Express.js
- Database: PostgreSQL
- Authentication: JWT
- Logging: Winston
- Monitoring: Prometheus, Grafana
- Containerization: Docker, Docker Compose
## Prerequisites

- Node.js (v14 or higher)
- Docker and Docker Compose (for containerized setup)
- PostgreSQL (if running locally)
## Environment Variables

Create a `.env` file in the root directory with the following variables:

```
DB_NAME=claim_db
DB_USER=postgres
DB_PASSWORD=postgres
DB_HOST=localhost
JWT_SECRET=your_secret_key
PORT=5000
NODE_ENV=development
```
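For reference, these variables can be centralized in a single config module. The sketch below is illustrative (the file name `config.js` and the fallback defaults are assumptions, not taken from this repository); it uses only `process.env`:

```javascript
// config.js -- illustrative sketch that centralizes the .env variables
// listed above. The defaults mirror the sample values and are meant
// for development only; production should rely on real env values.
const config = {
  db: {
    name: process.env.DB_NAME || 'claim_db',
    user: process.env.DB_USER || 'postgres',
    password: process.env.DB_PASSWORD || 'postgres',
    host: process.env.DB_HOST || 'localhost',
  },
  jwtSecret: process.env.JWT_SECRET || 'your_secret_key',
  port: parseInt(process.env.PORT, 10) || 5000,
  nodeEnv: process.env.NODE_ENV || 'development',
};

module.exports = config;
```

Other modules can then `require('./config')` instead of reading `process.env` directly.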
## Running Locally

1. Install dependencies:

   ```bash
   npm install
   ```

2. Start the PostgreSQL database (if not using Docker).

3. Run the application:

   ```bash
   npm run dev
   ```
## Running with Docker

Build and start the containers:

```bash
docker-compose up -d
```

- The API will be available at http://localhost:5000
- Prometheus will be available at http://localhost:9090
- Grafana will be available at http://localhost:3000
## Authentication

All endpoints require a valid JWT token in the `Authorization` header:

```
Authorization: Bearer <your-token>
```
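On the server side, the token has to be pulled out of that header before it can be verified. The sketch below is stdlib-only and illustrative; the actual middleware presumably verifies the token against `JWT_SECRET` with a JWT library such as `jsonwebtoken`, which is not shown here:

```javascript
// Illustrative sketch: extract the bearer token from an Express-style
// request object. Signature and expiry verification would be done by
// a JWT library afterwards and is intentionally omitted.
function extractBearerToken(req) {
  const header = (req.headers && req.headers.authorization) || '';
  const [scheme, token] = header.split(' ');
  return scheme === 'Bearer' && token ? token : null;
}

module.exports = { extractBearerToken };
```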
## API Endpoints

### POST /claims

Request body:

```json
{
  "payer": "Insurance Company",
  "amount": 500.00,
  "procedure_codes": ["P1", "P2"]
}
```

### GET /claims/:id

Returns the claim details for the specified ID.

### GET /claims/status/:id

Returns the current status of the claim.
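A client call to `POST /claims` can be sketched as follows. This is illustrative only: `buildClaimRequest` is a hypothetical helper, and the base URL matches the local setup described above:

```javascript
const BASE_URL = 'http://localhost:5000'; // local API address from this README

// Build the request for POST /claims. Kept separate from the actual
// fetch call so the request shape is easy to inspect and test.
function buildClaimRequest(token, claim) {
  return {
    url: `${BASE_URL}/claims`,
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${token}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(claim),
    },
  };
}

// Usage with Node 18+ global fetch:
// const { url, options } = buildClaimRequest(token, {
//   payer: 'Insurance Company',
//   amount: 500.0,
//   procedure_codes: ['P1', 'P2'],
// });
// const res = await fetch(url, options);
```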
## Logging

Logs are stored in the `logs` directory with the following files:

- `combined.log`: All logs
- `error.log`: Error logs only
- `access.log`: HTTP request logs
- `exceptions.log`: Uncaught exceptions
- `rejections.log`: Unhandled promise rejections
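The entries in these files are structured (JSON lines). A stdlib-only sketch of the shape follows; the project's actual formatter is Winston's JSON format, so exact field names may differ:

```javascript
// Illustrative sketch of a structured JSON-lines log entry like those
// written to combined.log. Winston's json() format produces a similar
// object per line.
function formatLogEntry(level, message, meta = {}) {
  return JSON.stringify({
    timestamp: new Date().toISOString(),
    level,
    message,
    ...meta,
  });
}
```

One entry per line keeps the files greppable and easy to ship to a log aggregator later.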
## Metrics

Metrics are exposed at the `/metrics` endpoint in Prometheus format.
Key metrics include:
- HTTP request duration
- Claim processing duration
- Database query performance
- System metrics (CPU, memory)
- Request counts and error rates
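To make the exposition format concrete, here is a stdlib-only sketch of how a request counter would appear at `/metrics`. The project most likely uses a client library such as `prom-client` rather than hand-rolled code, and the metric name `http_requests_total` is an assumption:

```javascript
// Illustrative sketch of the Prometheus text exposition format for a
// labeled request counter; a real service would use a client library.
const counters = new Map();

function incRequest(method, route, status) {
  const key = `method="${method}",route="${route}",status="${status}"`;
  counters.set(key, (counters.get(key) || 0) + 1);
}

function renderMetrics() {
  const lines = [
    '# HELP http_requests_total Total HTTP requests',
    '# TYPE http_requests_total counter',
  ];
  for (const [labels, value] of counters) {
    lines.push(`http_requests_total{${labels}} ${value}`);
  }
  return lines.join('\n');
}
```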
## Alerting

Prometheus can be configured with alerting rules to notify about system issues:
- High error rates
- Slow response times
- System resource constraints
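As an illustration, a high-error-rate rule might look like the following. This is a hedged sketch: the metric name, labels, and thresholds are assumptions, not taken from this repository's configuration:

```yaml
groups:
  - name: claims-api-alerts
    rules:
      - alert: HighErrorRate
        # Assumes a counter named http_requests_total with a status label.
        expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.05
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "5xx error rate above 5% for 5 minutes"
```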
## Log Storage Strategies

Several strategies can be used for log storage:

- Local File System: Simple but not scalable
  - Pros: Easy to set up, good for development
  - Cons: Limited storage, not suitable for distributed systems
- Centralized Logging Service (ELK Stack or similar)
  - Pros: Searchable, supports high volumes, visualization
  - Cons: Requires additional infrastructure, more complex
- Cloud-based Logging (AWS CloudWatch, GCP Logging)
  - Pros: Managed service, scalable, integrated with cloud services
  - Cons: Vendor lock-in, potential costs
- Log Aggregation Tools (Fluentd, Logstash)
  - Pros: Flexible, supports multiple destinations
  - Cons: Requires configuration and maintenance
The recommended approach is to use Elastic Stack (Elasticsearch, Logstash, Kibana) for production environments, which offers powerful search capabilities and visualization tools for log analysis.
## CI/CD

The repository includes a GitHub Actions workflow for CI/CD:
- Runs linting and tests
- Builds Docker image
- Publishes image to container registry
- Deploys to target environment
Pipeline stages:
- Build: Compile code and create artifacts
- Test: Run unit and integration tests
- Scan: Security and vulnerability scanning
- Package: Build container images
- Deploy: Push to target environment
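A minimal workflow covering the first of these stages might look like the following. This is a sketch, not the repository's actual workflow file; the job name, Node version, and script names are assumptions:

```yaml
name: ci
on: [push]

jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm run lint
      - run: npm test
      # Package stage: build the container image tagged with the commit SHA.
      - run: docker build -t claims-api:${{ github.sha }} .
```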