# Beer Catalog DevOps Assignment

## Project Overview

This project demonstrates a complete DevOps workflow for a Python web application, containerized with Docker, deployed on AWS using Terraform, and automated with GitHub Actions. The solution follows 12-factor app and SOLID principles, with a focus on security, automation, and documentation.


## Architecture

- Flask App (Dockerized, venv for local dev)
- PostgreSQL (AWS RDS)
- ECS Fargate (app hosting)
- ECR (Docker image registry)
- VPC, subnets, security groups (Terraform-managed)
- CloudWatch (logging & alarms)
- GitHub Actions (CI/CD)

## Architecture Diagram


## Local Development

### 1. Clone the repository

```sh
git clone git@github.com:alex-504/devops-test-master.git
cd devops-test-master/app/beer_catalog
```

### 2. Set up a virtual environment and install Poetry

```sh
python3 -m venv venv
source venv/bin/activate
pip install poetry
```

### 3. Install dependencies with Poetry

```sh
poetry install
```

### 4. Run the app with SQLite (quick start, no setup needed)

```sh
export DATABASE_URL="sqlite:///beers.db"
poetry run python -m flask --app beer_catalog/app run --debug
```
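Under SQLite, `sqlite:///beers.db` is just a local file. Here is a minimal sketch of the table the app presumably manages; the actual schema is defined by the app's models, and the three fields are assumed from the curl examples in this README:

```python
import sqlite3

# In-memory DB for illustration; the real app would use beers.db on disk.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS beers ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, "
    "name TEXT NOT NULL, style TEXT, abv REAL)"
)
conn.execute(
    "INSERT INTO beers (name, style, abv) VALUES (?, ?, ?)",
    ("Heineken", "Lager", 5.0),
)
rows = conn.execute("SELECT name, style, abv FROM beers").fetchall()
print(rows)  # [('Heineken', 'Lager', 5.0)]
```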

### 5. Run the app with PostgreSQL

Prerequisites:

- PostgreSQL installed (e.g., `brew install postgresql@14` on macOS)
- Start PostgreSQL (macOS/Homebrew):

  ```sh
  brew services start postgresql@14
  # or, for other versions:
  # brew services start postgresql
  ```

- Check the service status:

  ```sh
  brew services list
  ```

- Create the database (if not already created):

  ```sh
  createdb beer_catalog
  ```

  If you see an error like `database "beer_catalog" already exists`, you can skip this step.

Set the `DATABASE_URL` environment variable:

```sh
export DATABASE_URL="postgresql://<user>@localhost:5432/beer_catalog"
```

- The default user is usually your macOS username (e.g., `alexandrevieira`).
- If you use a password, add it after the username: `postgresql://<user>:<password>@localhost:5432/beer_catalog`
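Inside the app, this follows the 12-factor config pattern: read `DATABASE_URL` from the environment, with a fallback for local quick starts. A minimal sketch (the app's actual fallback and parsing may differ):

```python
import os
from urllib.parse import urlsplit

# 12-factor config: the connection string comes from the environment;
# the SQLite fallback here is an assumption for local development.
db_url = os.environ.get("DATABASE_URL", "sqlite:///beers.db")

# A PostgreSQL URL decomposes into the parts the driver needs:
parts = urlsplit("postgresql://alex@localhost:5432/beer_catalog")  # example value
print(parts.scheme, parts.hostname, parts.port, parts.path)
# postgresql localhost 5432 /beer_catalog
```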

Run the app:

```sh
poetry run python -m flask --app beer_catalog/app run --debug
```

## Testing Endpoints

- Health check:

  ```sh
  curl http://127.0.0.1:5000/health
  ```

- Get all beers:

  ```sh
  curl http://127.0.0.1:5000/beers
  ```

- Add a beer:

  ```sh
  curl -X POST http://127.0.0.1:5000/beers \
    -H "Content-Type: application/json" \
    -d '{"name": "Heineken", "style": "Lager", "abv": 5.0}'
  ```
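The same "add a beer" call can be expressed with Python's standard library, e.g. from a test script. A sketch assuming the app is running locally on port 5000:

```python
import json
import urllib.request

# Build the same POST request as the curl example above.
payload = {"name": "Heineken", "style": "Lager", "abv": 5.0}
req = urllib.request.Request(
    "http://127.0.0.1:5000/beers",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it once the app is up.
```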

---

## Running the App with Docker

You can run the app locally using Docker, just like in production (ECS). This ensures consistency and lets you test the container before deploying.

### 1. Build the Docker image
```sh
docker build -t beer-catalog-app .
```

### 2. Run the app with SQLite (no extra setup needed)

```sh
docker run -p 5000:5000 beer-catalog-app
```

### 3. Run the app with PostgreSQL (macOS/Windows)

If you want to use your local PostgreSQL database from inside Docker:

```sh
docker run -p 5000:5000 \
  -e DATABASE_URL="postgresql://<user>@host.docker.internal:5432/beer_catalog" \
  beer-catalog-app
```

- Replace `<user>` with your Postgres username.
- If you use a password, add it after the username.
- The app will be available at http://localhost:5000.
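If you script this, the localhost-to-`host.docker.internal` rewrite can be captured in a small helper (hypothetical; `host.docker.internal` is provided by Docker Desktop on macOS/Windows, while Linux needs `--add-host` or host networking):

```python
from urllib.parse import urlsplit, urlunsplit

def docker_db_url(url: str) -> str:
    """Rewrite a localhost DATABASE_URL so a container can reach the host DB."""
    parts = urlsplit(url)
    netloc = parts.netloc.replace("localhost", "host.docker.internal")
    return urlunsplit(parts._replace(netloc=netloc))

print(docker_db_url("postgresql://alex@localhost:5432/beer_catalog"))
# postgresql://alex@host.docker.internal:5432/beer_catalog
```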

Note: This is the same Docker image that is pushed to ECR and used by ECS in production, ensuring consistency between local and cloud environments.


## ☁️ Deploying to AWS

Note: An example `terraform.tfvars.example` file is provided in the `terraform/` directory. Copy it to a new file called `terraform.tfvars` and update the values with your own secrets and settings before running `terraform apply`.

### 1. Build & Push Docker Image

- Automated via GitHub Actions on pushes to `main`, `master`, and `feature/*` branches.

### 2. Provision Infrastructure

```sh
cd terraform
terraform init
terraform plan
terraform apply
```

- All AWS resources (ECR, ECS, RDS, VPC, etc.) are created from scratch (no prebuilt modules).

### 3. App Access

- The app will be available at the ECS task's public IP (see the task details in the AWS Console).

## CI/CD Pipeline (`.github/workflows/docker-ecr.yml`)

- Pull request checks: linting and (optional) tests on every PR.
- Docker build & push: on merges to `main`, `master`, or `feature/*` branches, the image is built and pushed to ECR.
- ECS deployment: the ECS service is updated with the new image.

The workflows are defined in the `.github/workflows/` folder.


## Terraform Infrastructure

- ECR repository for Docker images
- ECS cluster & service for app hosting
- RDS PostgreSQL instance
- VPC, subnets, security groups (built from scratch)
- CloudWatch log group & alarms
- No prebuilt modules used for ECS or networking

## API Endpoints

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET    | /health  | Health check |
| GET    | /beers   | List all beers |
| POST   | /beers   | Add a new beer |
| POST   | /seed    | Seed the database |

Note: There is no /beers/<id> endpoint, as in the original app.
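The POST /beers payload shape implied by the curl examples can be checked with a small validator. This is a hypothetical sketch; the app's actual validation logic may differ:

```python
def validate_beer(payload: dict) -> list[str]:
    """Return a list of validation errors for an add-a-beer payload (empty = valid)."""
    errors = []
    if not isinstance(payload.get("name"), str) or not payload["name"].strip():
        errors.append("name must be a non-empty string")
    if not isinstance(payload.get("style"), str):
        errors.append("style must be a string")
    abv = payload.get("abv")
    if isinstance(abv, bool) or not isinstance(abv, (int, float)) or not 0 <= abv <= 100:
        errors.append("abv must be a number between 0 and 100")
    return errors
```

A handler would return HTTP 400 with these messages when the list is non-empty.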


## Best Practices & Gotchas

- Intentional issues: documented and fixed in ISSUES_FOUND.md.
- 12-factor & SOLID: environment variables, logging, error handling, and code structure.
- Security: IAM roles, least privilege, no hardcoded secrets.
- Naming & structure: consistent resource names, clear separation of concerns.

## Monitoring & Logging

- CloudWatch Logs: ECS task logs
- CloudWatch Alarms: RDS high CPU, ECS task failures

## Screenshots of AWS Console

(All resources were created from scratch and provisioned using Terraform.)

AWS Console:

App endpoints tested via `curl`:

- Health check: `curl http://3.27.247.99:5000/health` -> screenshot
- Get all beers: `curl http://3.27.247.99:5000/beers` -> screenshot
- Add a beer: `curl -X POST http://3.27.247.99:5000/beers -H "Content-Type: application/json" -d '{"name": "Heineken", "style": "Lager", "abv": 5.0}'` -> screenshot
- ❌ Seed the database: `curl -X POST http://3.27.247.99:5000/seed` -> screenshot

## CI/CD Pipeline Runs


## Troubleshooting & Common Issues

- Region mismatch: ensure the AWS CLI and Console are set to `ap-southeast-2`.
- Resource already exists: delete or import orphaned resources.
- App not responding: check ECS task status, security groups, and logs.
- Terraform state issues: use `terraform import` or clean up resources as needed.

## Evaluation Points

- Infrastructure design: built for scalability and team collaboration.
- Naming & documentation: clear, standardized, and recruiter-friendly.
- CI/CD pipeline: automated, reliable, and secure.
- Resilience & security: follows AWS and DevOps best practices.
- Troubleshooting: all intentional issues found and documented.

## Bonus Features (status)

- [x] RDS user/permission automation: see the RDS-related resources in the Terraform config (e.g., `aws_db_instance`, `aws_db_parameter_group`).
- [x] Secret management: see `terraform.tfvars`.
- [ ] ECS Service Auto Scaling: not implemented for lack of time, since it required additional AWS configuration and a load balancer.

## Next Steps / Improvements

- Add a /beers/<id> endpoint
- Add authentication/authorization
- Use Terraform modules for larger projects
- Add automated integration tests
- Add ECS Service Auto Scaling
- Add an ECR lifecycle policy to limit the number of images retained in the repository

## Minor Personal Notes

- Time investment: ~2h to complete the full AWS deployment.
- I would have liked to test the CI/CD pipeline more thoroughly.
- Cost optimization: I created a budget in the AWS Console to avoid unexpected costs.
- I would have liked to add deletion protection to the RDS instance (`deletion_protection = var.environment == "prod" ? true : false`); maybe next time.

## Contact

Alexandre Vieira
https://www.linkedin.com/in/alexandre-dev/
