
ServiceQ

ServiceQ is a TCP layer for parallel HTTP service deployments. It distributes load across multiple endpoints and buffers requests on errors (downtime, service unavailability, connection loss, etc.). The buffered requests are forwarded in FIFO order once a service endpoint becomes available again.

Notable features -

  • HTTP Load Balancing
  • Request retries with configurable interval
  • Failed request buffering and deferred forwarding
  • Heuristic Error Feedback + Round Robin selection
  • Concurrent connections limit
  • Customizable balancer properties
  • Error-based response

Until I make serviceq available as a package download, here are the steps to run the setup -

Warm-up

Clone the project into any directory in your workspace (say 'serviceq/src')

$ git clone https://github.com/gptankit/serviceq/

Make sure GOPATH is pointing to the serviceq directory
Change into the serviceq/src directory
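
For example, assuming the repository contents were cloned under $HOME/workspace/serviceq/src (an illustrative location) -

$ export GOPATH=$HOME/workspace/serviceq
$ cd $HOME/workspace/serviceq/src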

How to Build

$ make ('make build' will also work)

Optional: build with debug symbols removed (~25% size reduction)

$ make build-nodbg

This will create a Go binary named serviceq in the current directory.
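
To quickly confirm the binary was produced (optional check) -

$ ls -lh serviceq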

How to Install

Make sure the current user has root privileges, then -

$ make install

This will create a serviceq folder in the /opt directory, copy the generated serviceq binary to /opt/serviceq, and copy the sq.properties file (load balancer configuration) to /opt/serviceq/config.
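
After a successful install, the layout under /opt should look roughly like this -

/opt/serviceq/serviceq
/opt/serviceq/config/sq.properties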

How to Run

Before running, make sure the mandatory configurations in sq.properties are set (LISTENER_PORT, PROTO, ENDPOINTS, CONCURRENCY_PEAK) -

#sq.properties

#Port on which serviceq listens
LISTENER_PORT=5252

#Protocol the endpoints listen on -- 'http' for both http/https
PROTO=http

#Endpoints separated by comma (,) -- no spaces allowed, can be a combination of http/https
ENDPOINTS=https://api.server0.com:8000,https://api.server1.com:8001,https://api.server2.com:8002

#Concurrency peak defines the maximum number of concurrent connections allowed to the cluster
CONCURRENCY_PEAK=2048

By default, the deferred queue is enabled with all methods and routes allowed. These options can be controlled as follows -

#Enable deferred queue for requests on final failures (cluster down)
ENABLE_DEFERRED_Q=true

#Request formats restrict which methods/routes are allowed on the deferred queue -- picked up if ENABLE_DEFERRED_Q is true
#DEFERRED_Q_REQUEST_FORMATS=POST /orders,PUT,PATCH,DELETE
#DEFERRED_Q_REQUEST_FORMATS=ALL
DEFERRED_Q_REQUEST_FORMATS=POST,PUT,PATCH,DELETE
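
Putting the mandatory and optional settings together, a complete sq.properties might look like this (endpoint values are illustrative) -

#sq.properties
LISTENER_PORT=5252
PROTO=http
ENDPOINTS=https://api.server0.com:8000,https://api.server1.com:8001,https://api.server2.com:8002
CONCURRENCY_PEAK=2048
ENABLE_DEFERRED_Q=true
DEFERRED_Q_REQUEST_FORMATS=POST,PUT,PATCH,DELETE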

After all is set -

$ sudo /opt/serviceq/serviceq
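
With the configuration above, serviceq accepts traffic on the listener port and forwards it to one of the configured endpoints. A quick smoke test (curl invocation is illustrative) -

$ curl -i -X POST http://localhost:5252/orders -H 'Content-Type: application/json' -d '{"item":"book"}'

If all endpoints are down and ENABLE_DEFERRED_Q is true, a request matching DEFERRED_Q_REQUEST_FORMATS is buffered and forwarded in FIFO order once an endpoint becomes reachable again.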

Feel free to play around and post feedback
