CICD-TEMPLATE

This repo provides an initial template that teams can fork and start working on right away. It is a plug-and-run solution: an API fronted by an API Gateway, with services developed in dotnet core 3.1, NodeJs, and Python. It is useful for teams that want a functional solution with minimal configuration.

  • Lambda (as a WebApi) developed in C# and AspNetCore 3.1.
  • Lambda developed in C# and dotnet core 3.1.
  • Lambda developed in NodeJs.
  • Lambda developed in Python.
  • ECS service running on Fargate, developed in NodeJs.
  • API with 5 resources (each one pointing to a different service, Lambda or Fargate).
  • Infrastructure as Code using Terraform 1.x.
  • End to end tests developed in NodeJs with Jest.
  • Performance tests developed with k6.
  • Docker images that let you run the web APIs (dotnet and NodeJs) locally without installing anything.

Features this repo provides

Out of the box, the following features are provided:

  • development

    Several (dummy) applications are included in the src folder: a functional serverless API supported by 4 Lambdas, plus one containerized application. The Lambdas are developed in dotnet core 3.1, NodeJs, and Python; the containerized application in NodeJs. The same folder also includes some unit tests. In the root folder, you'll find 2 Dockerfiles that create artifacts and run tests locally for the dotnet web API and NodeJs applications. Both Dockerfiles are meant to help you develop your API locally.
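
    For instance, a minimal local loop with the dotnet Dockerfile might look like this (the image tag and host port are arbitrary choices, not values fixed by the repo; the container is assumed to listen on port 80, as in the build example further down):

    ```bash
    # Build the dotnet web API image from the repo root ...
    docker image build . --file Dockerfile.create-dotnet-webapi-image --tag local-dotnet-webapi
    # ... and run it, mapping host port 5000 to the container's port 80.
    docker run -p 5000:80 local-dotnet-webapi
    ```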

  • local-load-testing

    Included in the root folder there is a folder called local-testing. Inside, you'll find a ready-to-run script, local-testing/scripts/run-load-tests.sh, that runs a docker-compose file. If you check the script, you'll see that it:

    • Creates 2 containers from the applications in src/dotnet/WebApi and src/nodejs/server.
    • Runs the docker-compose.yml file, which contains:
      • Both the dotnet and nodejs applications, exposed on specific ports.
      • A time-series database called influxdb, which stores and serves the test metrics.
      • A Grafana dashboard that pulls data from influxdb.
      • A k6 service that load-tests the APIs your applications expose and stores the stats in influxdb.
    • Together, these produce a real-time dashboard at http://localhost:3000/d/k6/k6-load-testing-results (don't use the url before running the script ... :P) where you can watch how your APIs respond.
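
    A typical session, using the script path given above:

    ```bash
    # Build the app images and bring up the apps, influxdb, Grafana, and k6:
    ./local-testing/scripts/run-load-tests.sh
    # While k6 runs, watch the live results at:
    #   http://localhost:3000/d/k6/k6-load-testing-results
    ```
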
  • continuous-integration

    The repo provides scripts and .github/workflows folders out of the box with all the necessary scripts and workflows. On every pull request, the CI workflow:

    • ensures that all commits in the pull request follow conventional-commits (see the example below).
    • runs the unit tests.
    • creates and uploads branch artifacts to an s3 bucket.
    • deploys the infrastructure into an aws account.
    • runs the performance tests.
    • runs the e2e tests.
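
    For reference, the conventional-commits check expects messages shaped like these (the types come from the spec; the scopes and descriptions are made up):

    ```bash
    git commit -m "feat(nodejs): add a health-check endpoint"
    git commit -m "fix(python-lambda): handle empty request bodies"
    git commit -m "chore: bump terraform provider versions"
    ```
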
  • continuous-delivery/deployment

    Once a pull request is merged into the main branch, the after-merge-workflow is launched.

  • Tests

    • Unit testing

      Inside the src/dotnet folder there is a Tests folder with some unit tests developed using xUnit and Moq.

    • End to end testing

      The tests/e2e folder holds some e2e tests that call the API the way a client would once it is deployed. They let you run e2e tests against an API already deployed in an aws account.

    • Performance testing

      The tests/performance folder holds some performance tests as a .js script that "attacks" your deployed API and computes several stats about its availability and response times. Example commands for running all three suites follow this list.
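
    A sketch of how each suite might be invoked locally; the entry-point script names and the environment variable carrying the deployed API's URL are assumptions, so check the folders for the real ones:

    ```bash
    # Unit tests (requires the dotnet core 3.1 SDK):
    dotnet test src/dotnet/Tests

    # e2e tests against an already deployed API (variable name is hypothetical):
    cd tests/e2e
    API_BASE_URL="https://<api-id>.execute-api.<region>.amazonaws.com" npx jest

    # Performance tests (requires k6; the script name is hypothetical):
    k6 run tests/performance/load-test.js
    ```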

  • clean-up infra after pull request

    A workflow called clean-up is executed after a pull request is closed (merged or declined). This ensures no residual infrastructure is left in your AWS account. A good way to avoid extra charges from Amazon :) ...

  • manual-infrastructure-destruction

    There is a workflow called destroy that erases your infrastructure in the provided aws account. Use it with caution ;) ...
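
    Since it is a manual workflow, you could also start it from the GitHub CLI (the workflow file name is an assumption):

    ```bash
    gh workflow run destroy.yml
    ```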

  • manual-deployment/configuration

    Check out the configuration branch and the README file you'll find there.

  • manual-deployment/configurations

    There is also a workflow called deploy-workflow that lets you manually deploy a specific version to an aws account by providing the required inputs (see the example invocation after this list). Those inputs are:

    • service-version: version of the service to deploy.
    • environment: where to deploy it.
    • service-group: group to be deployed or updated.
    • s3-bucket-name: bucket where the configuration is stored.
    • s3-bucket-key: key of the tfvars file that holds the configuration.
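
    As an illustration, triggering it from the GitHub CLI could look like this; only the input names come from the list above, while the workflow file name and all values are examples:

    ```bash
    gh workflow run deploy-workflow.yml \
      -f service-version=1.4.0 \
      -f environment=staging \
      -f service-group=lambdas \
      -f s3-bucket-name=my-team-config-bucket \
      -f s3-bucket-key=staging/terraform.tfvars
    ```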

Up-coming features


How to set it up

This repository is (meant to be) provided as a self-contained, plug-and-run solution. Nothing external is required to make it work. To start working on your own solution, follow these steps:

  • Fork (or clone) the repo to create your own.

  • You can build and run both Dockerfile.* files in the root directory. Something like:

    ```bash
    docker image build . --file Dockerfile.create-nodejs-server-image --tag name-of-your-choice
    docker run -p 80:80 name-of-your-choice
    ```

    You can do the same for the Dockerfile.create-dotnet-webapi-image file.

  • To start creating pull requests and see some action:

    • set the required secrets on your repository:
      • AWS_ACCESS_KEY: access key id of the AWS user the workflows will use.
      • AWS_SECRET_KEY: secret key of that same AWS user.
      • AWS_ACCOUNT_ID: aws account id.
      • AWS_REGION: aws region name.
      • BUCKET_NAME: name of the s3 bucket where the artifacts, configurations, and Lambda state files will be stored.
    • Create an ECR repository in your aws account. Once you have the name, store it in an environment variable (see the commands below).
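
    A sketch of that setup with the GitHub and AWS CLIs (the secret names come from the list above; the values and the repository name are placeholders):

    ```bash
    # Store the repository secrets:
    gh secret set AWS_ACCESS_KEY --body "<access-key-id>"
    gh secret set AWS_SECRET_KEY --body "<secret-access-key>"
    gh secret set AWS_ACCOUNT_ID --body "<aws-account-id>"
    gh secret set AWS_REGION --body "eu-west-1"
    gh secret set BUCKET_NAME --body "<artifact-bucket-name>"

    # Create the ECR repository and keep its name at hand:
    aws ecr create-repository --repository-name cicd-template
    ```
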
  • Create a pull request from a new branch to the main branch.

    • This will trigger the continuous-integration workflow (.github/workflows/ci.yml).
    • Take into account that all steps in that workflow are conditionally activated based on the folders modified during the pull request.
  • Once the pull request is merged into the main branch, the continuous-deployment workflow (.github/workflows/cd.yml) will be activated.

Who/How to contact

How to contribute?