
Objection Knex Node Express App with Docker Deployment to AWS AMI Linux instance practice


Objection ORM using Knex on MySQL

Practice setup for Objection + Knex with Docker

TODO

  • Remove Babel and use the .mjs file extension instead of .js
  • Add classes support (@babel/plugin-transform-classes)
  • Add versioning support for APIs
  • Add Port 80 (HTTP) and Port 443 (HTTPS) support using nginx
  • REVIEW: do we need wait-for.sh in production?

Source

Pre-Requisite

  • Make the wait-for.sh script executable

    chmod +x wait-for.sh
  • Modify Docker Compose command in the node-app service

    # ./wait-for.sh <wait-for-service-name>:<port-of-the-service> -- <commands-to-execute-after>
    
    command: ./wait-for.sh mysql:3306 -- npm run dev
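For context, here is a minimal sketch of what the relevant part of docker-compose.yml might look like; the service names and dev command are assumptions based on the commands above, not necessarily the repo's actual file:

```yaml
services:
  node-app:
    build: .
    depends_on:
      - mysql
    # block until mysql accepts TCP connections on 3306, then start the dev server
    command: ./wait-for.sh mysql:3306 -- npm run dev
  mysql:
    image: mysql:8
```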

Knex Setup

  • Init

    knex init --cwd ./src/db
  • Migrations

    knex --esm migrate:make --cwd ./src/db <migrations_name>
  • Seeds

    knex --esm seed:make --cwd ./src/db <seed_name>
  • IMPORTANT: Log in to the running container and run the migrations and seeds
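A sketch of what that looks like; the container name and npm script names are assumptions (check docker ps and package.json for the actual ones):

```shell
# open a shell in the running node-app container (Alpine images use ash/sh)
docker exec -it <node_app_container_name> ash

# inside the container: apply migrations, then seed the database
npm run migrate
npm run seed
```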

DOCKER COMMANDS

  • Image

    • List images

      docker image ls
    • Remove one or more images

      docker image rm <image_name>
  • Container

    • List Running

      docker ps
    • List All

      docker ps -a
    • Remove one or more containers

      docker rm <container_name>
      
      # force
      docker rm <container_name> -f
      
      # volumes
      docker rm <container_name> -v
      
      # force and volume
      docker rm <container_name> -fv

      NOTE:

      1. -f or --force: Force the removal of a running container (uses SIGKILL)
      2. -v or --volumes: Remove anonymous volumes associated with the container
  • Volumes

    • List volumes

      docker volume ls
    • Remove all unused local volumes

      docker volume prune
  • Access File System

    • Use sh or ash, since bash is unavailable in Alpine images

      docker exec -it <container_name> ash
      
      # as root user
      docker exec -it --user root <container_name> ash

      NOTE:

      1. Run a command in a running container
      2. -i or --interactive: Keep STDIN open even if not attached
      3. -t or --tty: Allocate a pseudo-TTY
    • Check the environment variables set inside the Docker container

      printenv
  • Compose

    • DEVELOPMENT

      • up

        docker-compose up -d
        
        # use this if there are any changes in the Dockerfile; builds images before starting containers
        docker-compose up -d --build
        
        # rebuild images and recreate anonymous volumes (-V, --renew-anon-volumes) without taking the stack down
        docker-compose up -d --build -V
        
        # scale the number of instances
        docker-compose up -d --scale node-app=2

        NOTE: -d or --detach: Detached mode: Run containers in the background

      • down

        docker-compose down
        
        # Remove containers and their volumes (don't use this if you want the db to persist)
        docker-compose down -v
        
        # Remove all images used by any service
        docker-compose down --rmi all
        
        # Remove only images that don't have a custom tag set by the `image` field
        docker-compose down --rmi local

        NOTE: -v or --volumes: Remove named volumes declared in the volumes section of the Compose file and anonymous volumes attached to containers

    • PRODUCTION

      • up

        docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
        
        # rebuild images
        docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build
      • down

        docker-compose -f docker-compose.yml -f docker-compose.prod.yml down -v
        
        # don't remove volumes
        docker-compose -f docker-compose.yml -f docker-compose.prod.yml down

    NOTE: you can also use docker compose (the Compose V2 plugin) instead of docker-compose

  • Database

    • MySQL

      • Open MySQL (recommended)

        docker exec -it <db_container_name> bash
        
        mysql -u <user_name> -p
        # enter your password
        
        use <db_name>
      • Directly log in to mysql

        # open mysql
        docker exec -it <db_container_name> mysql -u <user_name> --password=<password>
        
        # directly open the database
        docker exec -it <db_container_name> mysql -u <user_name> --password=<password> <db_name>
    • redis

      • open redis

        docker exec -it <redis_container_name> redis-cli
      • View Session keys inside redis-cli

        KEYS *
      • Get session details using the session id obtained from KEYS *

        GET <session_key>
  • Cleaning

    If you want a fresh start for everything, run docker system prune -a and docker volume prune. The first command removes all stopped containers, unused networks, and unused images; the second removes any unused volumes. I recommend doing this fairly often, since Docker likes to stash everything away, causing the gigabytes to add up.
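The cleanup commands from the paragraph above, as a block (note that the -a flag also removes all unused images, which will then need to be re-pulled or rebuilt):

```shell
# remove stopped containers, unused networks, dangling build cache,
# and (because of -a) all images not used by a running container
docker system prune -a

# remove all unused local volumes
docker volume prune
```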

Production

  1. Launch a server in the cloud (DigitalOcean, AWS, etc.). I am using AWS.

    • Choose Ubuntu for the AWS EC2 instance (I chose t2.small).

    • Select Free Tier

    • Add security group rules for HTTP (80), HTTPS (443), and SSH (22)

    • Click Review and Launch

    • Add tags if you want (e.g. Key=Name, Value=App)

    • Create a key pair and store the key file in a secure location for SSH access

    • Launch Instance

    • Wait for instance status to be running and copy the Public IP address.

    • Go to the location of the downloaded key file and open the terminal.

    • Type the command to get access to the cloud instance of the Ubuntu server (the ubuntu/ec2-user user is created by default)

      ssh -i <key-file-name>.<extension> ubuntu@<public_ip>
      
      # if using AMI instance
      ssh -i <key-file-name>.<extension> ec2-user@<public_ip>

      NOTE: based on the key file extension (.pem or .cer) we may need to give it stricter permissions using chmod 600 <key-file-name>.<extension>. Run the ssh command again to get access to the Ubuntu instance
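The permission fix can be demonstrated with a dummy file (the real file is the key you downloaded; demo-key.pem here is just a stand-in):

```shell
# stand-in for the downloaded key file
touch demo-key.pem

# restrict it to owner read/write; ssh refuses identity files readable by others
chmod 600 demo-key.pem

# verify the permission bits
stat -c '%a' demo-key.pem   # → 600
```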

    • Update Ubuntu (Optional)

      # check updates available
      sudo apt list --upgradable
      
      # Update the repository index and install the updates for Kernel and installed applications
      sudo apt update && sudo apt upgrade -y
      
      # run this once the update is finished
      sudo reboot

      NOTE: After rebooting, wait for some time and then connect to the Ubuntu instance again using ssh

  2. Add Deploy Keys to get repository access inside the server (works even for private repositories)

    • Generate SSH key inside server

      cd .ssh/
      
      ssh-keygen -t ed25519 -C "[email protected]"
    • Copy the public key from id_*.pub and paste it into the Deploy keys section of the GitHub repo.
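After adding the key on GitHub, access can be verified from the server (the key file name is whatever ssh-keygen produced):

```shell
# print the public key so it can be copied into the repo's Deploy keys page
cat ~/.ssh/id_ed25519.pub

# test the connection; GitHub replies with a greeting if authentication works
ssh -T git@github.com
```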

  3. Install Docker in the Ubuntu Instance

    • Get Docker Engine (Community) using the convenience script

      curl -fsSL https://get.docker.com -o get-docker.sh
      
      sh get-docker.sh
    • Install docker and git (when using an Amazon Linux AMI instance)

      sudo yum install -y docker git
      
      sudo service docker start
      sudo usermod -a -G docker ec2-user
      
      # Make docker auto-start
      sudo chkconfig docker on
      
      # Reboot to verify it all loads fine on its own.
      sudo reboot
    • Get docker-compose for Linux as described in the official documentation

      # check the docs for version before using this command
      sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
      
      # get latest version
      sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
      
      sudo chmod +x /usr/local/bin/docker-compose
    • Manage Docker as a non-root user, or run the Docker daemon as a non-root user (Rootless mode)
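The usual post-install steps for managing Docker as a non-root user on Ubuntu look like this (taken from Docker's standard post-install procedure):

```shell
# create the docker group (it may already exist)
sudo groupadd docker

# add the current user to the docker group
sudo usermod -aG docker $USER

# start a shell with the new group membership without logging out
newgrp docker

# verify docker now works without sudo
docker ps
```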

  4. Create .env file inside server

    • Open a .env file using vim

      vim .env
    • Add environment variables

      NODE_ENV=production
      MYSQL_ROOT_PASSWORD=
      MYSQL_DATABASE=
      MYSQL_USER=
      MYSQL_PASSWORD=
      SESSION_SECRET=

      NOTE: NODE_ENV=production is not strictly needed since it is already set in the Dockerfile, but it is added here anyway

    • Modify .profile to load .env

      vim .profile
      # Add this at the bottom
      
      set -o allexport; source $HOME/.env; set +o allexport 

      NOTE: use $HOME, $(pwd), $PWD, or an absolute path
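The allexport trick can be sketched in isolation with a throwaway env file (the file and variable names here are examples, not the app's real ones):

```shell
# create a sample env file
printf 'DEMO_DB_USER=app\nDEMO_DB_PASS=secret\n' > demo.env

# while allexport is on, every variable assigned (including via sourcing) is exported
set -o allexport
. ./demo.env
set +o allexport

# child processes now see the variables
sh -c 'echo "$DEMO_DB_USER"'   # → app
```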

    • Check the existing environment variables

      printenv
    • Exit and log in again for the changes to take effect

  5. Create a folder for the code and clone it (ssh)

    mkdir app
    
    cd app
    
    git clone [email protected]:Mugilan-Codes/objection-knex-demo.git .
  6. Run docker production command

    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
    • Run migrations inside the node-app container

      docker exec -it app_node-app_1 ash
      
      npm run migrate:prod
    • Check in the mysql container whether the migrations were successful

      docker exec -it app_mysql_1 mysql -u <MYSQL_USER> --password=<MYSQL_PASSWORD>
      select database();
      
      show databases;
      
      use <MYSQL_DATABASE>;
      
      select database();
      
      show tables;
      
      desc <table_name>;
  7. Make calls to the API from anywhere in the world

    http://<PUBLIC_IPV4_ADDRESS/PUBLIC_IPV4_DNS>/api/v1
  8. Workflow

    • Make changes to src and push it to github

    • cd app on the production server and git pull the new changes

    • Build the new image in production server

      docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build
      
      # since only the node app changes, we can rebuild just that service
      docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build node-app
      
      # same as above, but without recreating the dependencies (depends_on)
      docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build --no-deps node-app
      
      # force-recreate the container even when there is no change, without dependencies
      docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --force-recreate --no-deps node-app
    • Use a cloud registry to store the built images (Docker Hub, Amazon ECR, or something else). Create a repository there.

      • Tag the image with respect to the name on the remote image repo that was created. (<username>/<repo_name>)

        docker image tag <local_image_name>:<version> <username>/<repo_name>
        
        docker image tag objection-knex_node-app mugilancodes/objection-knex-node-app

        NOTE: if version is not provided it defaults to latest

      • Push the tagged image to remote repo

        docker push <username>/<repo_name>
        
        docker push mugilancodes/objection-knex-node-app
      • Update the docker-compose.yml file to use this image and push the change to GitHub

      NOTE: Do these in the local development machine

    • Pull in the changes using git pull and run the containers again on the production server to use the tagged images

      docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
    • How to make changes reflect on the production server?

      1. In Development Machine

        • Build the custom images in local development machine

          docker-compose -f docker-compose.yml -f docker-compose.prod.yml build
          
          # only specific service
          docker-compose -f docker-compose.yml -f docker-compose.prod.yml build node-app
        • Push the built images to cloud image repo

          docker-compose -f docker-compose.yml -f docker-compose.prod.yml push
          
          # only specific service
          docker-compose -f docker-compose.yml -f docker-compose.prod.yml push node-app
      2. In Production Server

        • Pull the changes from cloud repo into the production server

          docker-compose -f docker-compose.yml -f docker-compose.prod.yml pull
          
          # only specific image
          docker-compose -f docker-compose.yml -f docker-compose.prod.yml pull node-app
        • Update the changes

          docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
          
          # specific rebuild
          docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --no-deps node-app

        NOTE: use Watchtower to automate these steps on the production server

  9. Orchestrator (kubernetes or docker swarm)

    • Check if Docker Swarm is active on the production server (look for Swarm: active in the output)

      docker info
    • Activate Swarm

      • Get public ip (eth0 --> inet)

        ip addr
      • Initialize swarm using the public ip

        docker swarm init --advertise-addr <public_ip>
    • Add Nodes to Swarm

      • Manager

        docker swarm join-token manager
      • Worker

        docker swarm join --token <token_provided> <ip>:<port>
        
        # retrieve the join command for the worker
        docker swarm join-token worker
    • Update the compose file for swarm deployment and push it to GitHub

    • Pull the changes made to the production docker compose file into the production server. Tear down the running containers to prepare for docker stack deploy
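That pull-and-teardown step might look like the following, using the compose file names from the earlier sections (the app directory comes from step 5):

```shell
cd ~/app
git pull

# stop and remove the compose-managed containers before deploying the stack
docker-compose -f docker-compose.yml -f docker-compose.prod.yml down
```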

    • Deploy (you can choose any name for the Stack instead of myapp)

      docker stack deploy -c docker-compose.yml -c docker-compose.prod.yml myapp
      • check how many nodes are running

        docker node ls
      • check how many stacks are there

        docker stack ls
      • list the services in the stack

        docker stack services myapp
      • list all the services across all stacks

        docker service ls
      • list the tasks in the stack

        docker stack ps myapp