Hippo 🦛

Hippo Screenshot

Hippo is a fairly basic implementation of a Kanban project board. It supports multiple projects, each containing multiple lanes, and each lane holding multiple cards. Cards can be dragged and dropped to rearrange them within a lane or to move them across lanes. Lanes, in turn, can also be dragged to be rearranged.

Stack

The front-end is built with JavaScript (Node/npm) and Apollo GraphQL.

The back-end is built with Elixir (Phoenix), using Ecto and PostgreSQL.

Why Hippo?

Hippo is built as an exercise project for me to get familiar with the tech stack described above.

Running Hippo

The easiest way to take Hippo for a spin is by using the Docker image as described below. However, should you want to tinker with its insides, I'd recommend running the parts individually on your local machine.

Running with Docker

By far the easiest way to toy with this little project is to run it with Docker, either by building from scratch (including its Postgres dependency) or by running the pre-baked Docker image from Docker Hub.

Build and run from source using Docker Compose

  git clone [email protected]:Tmw/Hippo.git
  cd hippo
  docker-compose up

From Docker Hub

docker run \
  -e DB_HOST=localhost \
  -e DB_USER=hippo \
  -e DB_PASS=hippo \
  -e DB_NAME=hippo \
  hippo_hippo-app

Note that this approach expects you to set up and link a PostgreSQL database yourself, matching the configuration in the environment variables. The Docker container will run its migrations against the provided database on startup.
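Before launching the standalone image, it can be useful to fail fast when one of the expected connection variables is missing. A minimal sketch (the `check_db_env` helper is illustrative, not part of the image):

```shell
# Sketch: return non-zero when any required connection variable is unset.
check_db_env() {
  for var in DB_HOST DB_USER DB_PASS DB_NAME; do
    # indirect expansion via eval keeps this POSIX-sh compatible
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      echo "error: $var is not set" >&2
      return 1
    fi
  done
}

# Example: with all four variables exported, the check passes.
DB_HOST=localhost DB_USER=hippo DB_PASS=hippo DB_NAME=hippo
export DB_HOST DB_USER DB_PASS DB_NAME
check_db_env && echo "database configuration looks complete"
```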

Running on your machine

Requirements

  • Elixir v1.9.1
  • PostgreSQL
  • Node v10.x
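A quick way to check an installed tool against the versions above is to extract its major version from the tool's version string. A sketch, using a sample string in place of the real `node --version` output:

```shell
# Sketch: compare a tool's major version against the v10.x requirement.
# The sample string stands in for the output of `node --version`.
version="v10.16.3"     # in practice: version="$(node --version)"
major="${version#v}"   # drop the leading "v"
major="${major%%.*}"   # keep only the part before the first dot
if [ "$major" -eq 10 ]; then
  echo "Node major version matches the v10.x requirement"
fi
```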

Getting started

To run the Hippo front-end on your local machine:

  git clone [email protected]:Tmw/Hippo.git
  cd hippo
  cd hippo-frontend && npm install && npm start

Run the back-end

  cd hippo-backend
  mix deps.get
  mix ecto.create
  mix ecto.migrate
  iex -S mix phx.server
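Once `mix phx.server` is up, a quick smoke test can confirm the back-end is reachable. Phoenix listens on port 4000 by default; adjust the URL if your config differs (this snippet is a sketch, not part of the project):

```shell
# Sketch: smoke-test the running back-end on the default Phoenix port.
url="http://localhost:4000/"
if curl -fsS "$url" > /dev/null 2>&1; then
  echo "back-end reachable at $url"
else
  echo "back-end not reachable at $url (is mix phx.server running?)"
fi
```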

Extracting updated Fragment / Union types

In order for Apollo to resolve fragments and unions into their concrete types, it needs a little guidance from our back-end schema. An updated copy of this schema metadata can be written to disk by running:

npm run get_fragment_types

The extracted fragment types will be written to src/graphql/fragmentTypes.json.
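The generated file follows Apollo's introspection-result shape: a list of the schema's union and interface types together with their possible concrete types. A trimmed, illustrative example (the type names below are made up, not taken from Hippo's actual schema):

```json
{
  "__schema": {
    "types": [
      {
        "kind": "UNION",
        "name": "SearchResult",
        "possibleTypes": [
          { "name": "Card" },
          { "name": "Lane" }
        ]
      }
    ]
  }
}
```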

License

MIT