A GraphQL-based API that can create, read, update, and delete locations and events. It can query all the locations and events belonging to an organisation, as well as the reverse: query a location or event and find the organisation it belongs to.
The application uses Apollo Server on Fastify as its GraphQL and web server, with MongoDB to store data and a Redis cache to speed up responses.
Application container orchestration is provided by Kubernetes, using Traefik as the HTTP load balancer. cert-manager CRDs are used to provision HTTPS certificates with Let's Encrypt. For local instances of Redis and MongoDB, a development Docker Compose config is also provided.
Infrastructure as Code (IaC) is written using Terraform to provision an environment on AWS, using Amazon EKS to host the Kubernetes cluster.
To run locally:

- Install node modules: `npm i`
- Set up a `.env` file - you can just copy the `.env.example` file for now
- Run `docker compose up` to start the MongoDB and Redis containers
- Run `npm run db:seed` to seed the database with documents
- Start the application with `npm run dev` and visit http://localhost:3000/graphql to query the application
Alternatively, `npm run build` builds a production distribution.
This project includes a Kubernetes deployment in the `k8s/` directory.
The manifests require cert-manager and Traefik to be installed (with Helm, if you like), along with secrets for Docker registry authentication and application environment variables.
A helper script, `./k8s/deploy.sh [-f .env -r username:password]`, is supplied for deploying the cluster locally. It:

- Installs resource dependencies via Helm (Traefik and cert-manager)
- Ensures secrets are defined (for container registry authentication and `.env` definitions):
  - The `.env` secret can be created/updated by passing `-f .env`
  - The Docker registry secret can be created/updated by passing `-r username:password`
- Applies the Kubernetes resources to your cluster
After deployment, your resources can be accessed locally at https://localhost.
Note: If you are deploying this cluster locally and using the Redis and MongoDB development containers provided in the Docker Compose file, swap the hostnames in your `.env` file from `localhost` to `host.docker.internal`!
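For example, a local-deployment `.env` might point its connection hostnames at the Docker host (the variable names below are hypothetical - check `.env.example` for the actual keys):

```
# Hypothetical keys - see .env.example for the real variable names
MONGO_URI=mongodb://host.docker.internal:27017
REDIS_HOST=host.docker.internal
```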
To fetch the container image from the repository, Kubernetes must be authenticated with the GitHub Container Registry using your username and a GitHub token. Run the following command with your credentials:
```shell
kubectl create secret docker-registry ghcr-drinkataco \
  --docker-server=ghcr.io \
  --docker-username="$GITHUB_USERNAME" \
  --docker-password="$GHCR_TOKEN"
```
The container requires several environment variables to be set (as described in the example file).
This file is sourced from a secret in the kubernetes deployment.
```shell
kubectl create secret generic app-env \
  --from-env-file=.env
```
These Kubernetes resources use Kustomize for declarative management of resources. Different patches are supplied for local, dev, and prod environments - local offers a self-signed certificate, while dev and prod provision theirs with Let's Encrypt.
Terraform IaC is located in the `./tf` directory. To deploy you must:

- Have an AWS profile set up in your local session
- Initialise default variables (take a peek at `./tf/variables.tf` to see what you can set) in a file `./terraform.tfvars.json`. A selection of values:
  - `k8s_docker_registry` - must be set to authorise with GitHub to pull the Docker container
  - `k8s_secret_env_file` - location of the `.env` file for Kubernetes (defaults to `../.env`)
  - `aws_region` - region of resources (defaults to `eu-west-1`)
  - `env_name` - resource prefix (defaults to `eloapi`)
  - `elasticache_enable` - provisioning of an ElastiCache Redis cluster can be turned off by setting this to `false`
- Run `terraform init` and `terraform apply`
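A minimal `./terraform.tfvars.json` sketch using the variables above (the values, and the exact shape of `k8s_docker_registry`, are assumptions - check `./tf/variables.tf` for the real types and defaults):

```json
{
  "k8s_docker_registry": "my-github-username:ghp_exampletoken",
  "k8s_secret_env_file": "../.env",
  "aws_region": "eu-west-1",
  "env_name": "eloapi",
  "elasticache_enable": false
}
```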
Once deployed, you can use `aws eks update-kubeconfig --name <env_name>` to update your `~/.kube/config` file with your cluster config. Then, using `kubectl config set-context <cluster_arn>`, you can change your context. By running `kubectl get svc -A`, you can also see the DNS name of your EKS cluster, under EXTERNAL-IP on the Traefik service.
TODO: Add your AWS user to the generated role (ARN returned in the `eks_console_access_role` output) to view cluster information directly in the console.

- Fetch the load balancer EXTERNAL-IP for access using `kubectl get svc -A` - you may want to put this behind Route53 too!
- Add your AWS user to the Kubernetes aws-auth config (https://veducate.co.uk/aws-console-permission-eks-cluster/) to view K8s resources in EKS
On push and tag, TypeScript linting (using ESLint) and testing (using Jest) are performed before building. The application is deployed by tagging based on semantic versioning. This tag triggers a release workflow, and the latest package is released to the repository.
The API provides similar methods of access for each collection and each document. A collection can be queried with a find query (such as `findEvents`), which accepts parameters for pagination and ordering. The response includes the `results` and `meta` (which includes the total document count, for example).
```graphql
query FindEvents {
  findEvents(limit: 10, order: { by: time, dir: desc }) {
    meta {
      total
      limit
      offset
    }
    results {
      _id
      name
      time {
        start
        end
      }
    }
  }
}
```
Similar methods for querying organisations (`findOrganisations`) and locations (`findLocations`) also exist.
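As a sketch, a `findLocations` query should accept the same pagination parameters (the result fields below are assumptions, borrowed from the location mutation later in this document):

```graphql
query FindLocations {
  findLocations(limit: 5, offset: 0) {
    meta {
      total
    }
    results {
      _id
      latitude
      longitude
    }
  }
}
```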
Documents are queried simply by stating the document type as a field, such as `event` or `organisation`. They take an `id` argument.
```graphql
query Event {
  event(id: "6343ea4a241c18e3ec46a39d") {
    _id
    name
    description
    time {
      start
      end
    }
  }
}
```
Furthermore, on all collection and document queries we can subquery relations. For example, on an event we could query its location (a one-to-one relationship) by adding the location field. If the relationship is one-to-many, such as an organisation to its events, we can use the findEvents query with the same parameters listed above (such as pagination and ordering).
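A sketch of both kinds of relation subquery (the exact subfield names on `location` are assumptions based on the schema described in this document):

```graphql
# One-to-one: subquery an event's location directly
query EventWithLocation {
  event(id: "6343ea4a241c18e3ec46a39d") {
    name
    location {
      _id
      latitude
      longitude
    }
  }
}

# One-to-many: use the find query, with the usual pagination/ordering
query OrganisationEvents {
  organisation(id: "6343ea4a241c18e3ec46a39d") {
    name
    findEvents(limit: 5, order: { by: time, dir: desc }) {
      results {
        _id
        name
      }
    }
  }
}
```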
Mutations are formatted similarly to the collection queries, but are prefixed with create, update, and delete. A create mutation takes parameters corresponding to the required fields of that object. A location, however, only requires an address or a latlng; the missing information will be gathered and filled in from the Google Maps API.
```graphql
mutation newLocation {
  createLocation(location: {
    address: {
      line1: "Alligator Lounge",
      line2: "600 Metropolitan Ave",
      city: "Brooklyn",
      region: "New York",
      postCode: "NY 112211",
      country: "USA"
    }
  }) {
    success
    result {
      _id
      latitude
      longitude
    }
  }
}
```
The response above will return the correct latitude and longitude of the supplied address.
An update is similar, but must include an `id` parameter. A delete only requires an `id`.
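As a hedged sketch, the update and delete mutations might look like this (the exact argument shapes are assumptions, mirroring the create mutation above):

```graphql
# Update: requires an id, plus the fields to change
mutation renameLocation {
  updateLocation(id: "6343ea4a241c18e3ec46a39d", location: {
    address: { line1: "Alligator Lounge NYC" }
  }) {
    success
  }
}

# Delete: only requires an id
mutation removeLocation {
  deleteLocation(id: "6343ea4a241c18e3ec46a39d") {
    success
  }
}
```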