Merge pull request #1 from spencerhank/feature/ecs-demo

Feature/ecs demo

spencerhank authored Sep 13, 2024
2 parents 2d097ae + cddc6d3 commit bd400ed
Showing 14 changed files with 565 additions and 0 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -15,3 +15,4 @@ _site
.idea
log/
keda-demo/crd
*.tfvars
23 changes: 23 additions & 0 deletions DockerfileECSDemo
@@ -0,0 +1,23 @@
## DockerfileECSDemo for solace-pqdemo-subscriber
## Image to configure and exec solace-pqdemo-subscriber against Solace PubSub+ broker to showcase
## KEDA managed scalability with Solace partitioned queues

FROM openjdk:11.0.16-jdk

RUN mkdir -p /opt/partitioned-queue-demo
WORKDIR /opt/partitioned-queue-demo


# after doing ./gradlew assemble, copy all the build/staged into the Docker image
COPY build/staged ./


CMD ./bin/PQSubscriber $HOST $VPN_NAME $USERNAME $PASSWORD $QUEUE_NAME $SUB_ACK_WINDOW_SIZE


#### BUILD:
####
#### docker build -t solace-pqdemo-subscriber:latest --file DockerfileECSDemo .
####
#### export CR_PAT=$(cat ~/.ghp_pat)
#### echo $CR_PAT | docker login ghcr.io -u USERNAME --password-stdin
124 changes: 124 additions & 0 deletions ecs-demo/README.md
@@ -0,0 +1,124 @@
# KEDA Demo Setup

![scaler gfx](https://github.com/SolaceLabs/pq-demo/blob/main/readme/scaler.png)

KEDA (Kubernetes Event-Driven Autoscaler) is a Kubernetes Resource that allows you to scale pods/containers/workloads based on some metric(s). One of the scalers included is for Solace PubSub+, specifically for use with Guaranteed consumer applications. This scaler monitors queue metrics (rates and depths), and scales consumers based on some configured thresholds.

This mash-up of my Partitioned Queues demo + KEDA borrowed significantly from my colleague Dennis' project here, where he went into great detail to explain the Solace PubSub+ scaler: [https://github.com/dennis-brinley/partitioned-queue-demo](https://github.com/dennis-brinley/partitioned-queue-demo). Please check it out for more detailed information.
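This demo scales ECS tasks with the standalone Solace ECS scaler (Step 5 below), but for comparison, the same consumer running on Kubernetes would be scaled with KEDA's built-in `solace-event-queue` trigger. A minimal `ScaledObject` sketch follows — every resource name, URL, and threshold here is illustrative and not part of this repo:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: pq-subscriber-scaler        # illustrative name
spec:
  scaleTargetRef:
    name: pq-subscriber             # the consumer Deployment to scale (assumed)
  minReplicaCount: 1
  maxReplicaCount: 100              # should match the queue's partition count
  triggers:
    - type: solace-event-queue
      metadata:
        solaceSempBaseURL: http://broker:8080   # SEMP endpoint (placeholder)
        messageVpn: default
        queueName: pq-demo
        messageCountTarget:       "10"
        messageReceiveRateTarget: "10"
      authenticationRef:
        name: solace-trigger-auth   # TriggerAuthentication holding SEMP credentials
```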


## Step 1 - get Terraform
Install Terraform, then initialize the working directory:
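One common way to install Terraform, following HashiCorp's published install instructions (a sketch — pick the variant for your OS):

```shell
# macOS (Homebrew)
brew tap hashicorp/tap
brew install hashicorp/tap/terraform

# Debian/Ubuntu (HashiCorp apt repository)
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform

# verify
terraform version
```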
```
cd terraform
terraform init
```


## Step 2 - Build & Deploy Docker container of PQSubscriber


From the "main" / "root" directory of this project:

```
./gradlew clean assemble
docker build -t solace-pqdemo-subscriber:latest --file DockerfileECSDemo .
```

You should now have a Docker image available to be used by the demo:

```
$ docker images | grep solace-pqdemo-subscriber
REPOSITORY TAG IMAGE ID CREATED SIZE
solace-pqdemo-subscriber latest c37025f6db50 38 minutes ago 665MB
```

Tag the image so we can deploy to a public Docker repo
```
docker image tag solace-pqdemo-subscriber:latest <username>/solace-pqdemo-subscriber:1.0
```

Push the image to docker hub
```
docker image push <username>/solace-pqdemo-subscriber:1.0
```

## Step 3 - Configure subscriber and broker connection info for ECS Deployment and SEMP Credentials for Solace Terraform
Additional instructions:
* If using a cloud event broker, you must submit a request to [email protected] requesting that they update the SEMP CORS rule to allow all origins
* If using a self-deployed software broker, you must allow the PQSubscriber access via security groups
* If using a local software broker, you must expose it so the PQSubscriber running in ECS can reach it (e.g. via a publicly reachable address)
```
cd terraform
cp terraform.tfvars.example terraform.tfvars
```
* Replace the entries with the appropriate values for your deployment
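A filled-in `terraform.tfvars` might look like the sketch below. The variable names come from the inputs `main.tf` passes to its modules; every value is a placeholder for your environment, and the exact types (string vs. number) may differ from what `variables.tf` declares:

```hcl
aws_region                   = "us-east-1"
pq_subscriber_docker_image   = "<username>/solace-pqdemo-subscriber:1.0"
solace_host                  = "tcps://<broker-host>:55443"
solace_vpn                   = "default"
solace_username              = "default"
solace_password              = "<client_password>"
solace_queue                 = "pq-demo"
sub_ack_window               = 64
solace_semp_host             = "https://<broker-host>:943"
solace_semp_username         = "<semp_admin_username>"
solace_semp_password         = "<semp_admin_password>"
solace_queue_partition_count = 4
```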


## Step 4 - Deploy Terraform
Prerequisites:
* Connection to AWS with sufficient permissions to create VPC, ECS, and Secrets resources (See main.tf for full list of required resources)
```
terraform apply
```
* Verify successful deployment of resources to AWS
* Verify successful deployment and configuration of queue on the solace broker
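The AWS side can be spot-checked from the CLI, and the queue via the SEMPv2 monitor API. The cluster/service names below match the scaler config used in this demo; the URL and credentials are placeholders:

```shell
# ECS service up and running?
aws ecs describe-services --cluster pq-demo-cluster --services pq-subscriber-service \
  --query 'services[0].{status:status,running:runningCount,desired:desiredCount}'

# Queue created on the broker? (SEMPv2 monitor API)
curl -s -u <semp_admin_username>:<semp_admin_password> \
  "https://<broker-host>:943/SEMP/v2/monitor/msgVpns/<vpn_name>/queues/<queue_name>"
```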

## Step 5 - Get the Solace ECS Scaler
1. Download the Solace ECS Scaler Project from https://github.com/SolaceLabs/solace-ecs-scaler
2. Run the Solace ECS Scaler by following the instructions included in the project
* Sample config - Replace placeholders with actual values. Update `maxReplicaCount` to match the number of partitions on your queue
```
---
brokerConfig:
  activeMsgVpnSempConfig:
    brokerSempUrl: <broker_semp_url>
    username: <semp_admin_username>
    password: <semp_admin_password>
  msgVpnName: <vpn_name>
  pollingInterval: 10
ecsServiceConfig:
  - ecsCluster: pq-demo-cluster
    ecsService: pq-subscriber-service
    queueName: <queue_name>
    scalerBehaviorConfig:
      minReplicaCount: 1
      maxReplicaCount: 100
      messageCountTarget: 10
      messageReceiveRateTarget: 10
      messageSpoolUsageTarget: 100
      scaleOutConfig:
        maxScaleStep: 5
        cooldownPeriod: 60
        stabilizationWindow: 10
      scaleInConfig:
        maxScaleStep: 5
        cooldownPeriod: 60
        stabilizationWindow: 60
```

## Step 6 - Run the Demo

Refer to the README in the parent directory, but essentially:
- start the Stateful Control app so that each new consumer instance added by the Solace ECS Autoscaler will have the same config
- start the Order Checker if you want to verify sequencing per-key
- configure the SLOW subscriber delay to correspond with the `messageReceiveRateTarget` threshold chosen in the Solace scaler (with some wiggle room)
- e.g. set the target rate to 90 msg/s, but adjust the SLOW subscriber delay to 10 ms to allow for approximately 100 msg/s
- start the Publisher, increase the rates, watch the scaler do its thing!

## Help




## Tear Down
Destroy the Terraform-managed AWS resources:
```
cd ecs-demo/terraform
terraform destroy
```



20 changes: 20 additions & 0 deletions ecs-demo/docker/pq-subscriber/compose/compose.yaml
@@ -0,0 +1,20 @@
#https://www.docker.com/blog/docker-compose-from-local-to-am
version: '3.8'

services:
  pq-subscriber:
    image: solace-pqdemo-subscriber:latest
    networks:
      - demo-net
    environment:
      # export SOLACE_HOST=solace:55555 if on same network
      - HOST=${SOLACE_HOST:-localhost:55554}
      - VPN_NAME=${SOLACE_VPN:-default}
      - USERNAME=${SOLACE_USERNAME:-default}
      - PASSWORD=${SOLACE_PASSWORD:-default}
      - QUEUE_NAME=${SOLACE_QUEUE:-pq-demo}
      - SUB_ACK_WINDOW_SIZE=${SUB_ACK_WINDOW_SIZE:-64}

networks:
  demo-net:
    external: true
    name: demo-net
111 changes: 111 additions & 0 deletions ecs-demo/terraform/main.tf
@@ -0,0 +1,111 @@
provider "aws" {
  region = var.aws_region
}

# VPC and network resources
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "pq-demo-vpc"
  }
}

resource "aws_subnet" "subnet_1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "${var.aws_region}a"
  map_public_ip_on_launch = true

  tags = {
    Name = "pq-demo-subnet-1"
  }
}

resource "aws_subnet" "subnet_2" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.2.0/24"
  availability_zone       = "${var.aws_region}b"
  map_public_ip_on_launch = true

  tags = {
    Name = "pq-demo-subnet-2"
  }
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "pq-demo-igw"
  }
}

resource "aws_route_table" "main" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }

  tags = {
    Name = "pq-demo-rt"
  }
}

resource "aws_route_table_association" "subnet_1" {
  subnet_id      = aws_subnet.subnet_1.id
  route_table_id = aws_route_table.main.id
}

resource "aws_route_table_association" "subnet_2" {
  subnet_id      = aws_subnet.subnet_2.id
  route_table_id = aws_route_table.main.id
}

resource "aws_security_group" "pq_subscriber_sg" {
  name        = "pq-subscriber-sg"
  description = "Security group for PQ Subscriber"
  vpc_id      = aws_vpc.main.id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Add ingress rules as needed
}

# Call the ECS module
module "ecs" {
  source = "./modules/ecs"

  aws_region                 = var.aws_region
  vpc_id                     = aws_vpc.main.id
  vpc_cidr_block             = aws_vpc.main.cidr_block
  subnet_ids                 = [aws_subnet.subnet_1.id, aws_subnet.subnet_2.id]
  security_group_ids         = [aws_security_group.pq_subscriber_sg.id]
  pq_subscriber_docker_image = var.pq_subscriber_docker_image
  solace_vpn                 = var.solace_vpn
  solace_username            = var.solace_username
  solace_queue               = var.solace_queue
  sub_ack_window             = var.sub_ack_window
  solace_host                = var.solace_host
  solace_password            = var.solace_password
}

module "solace" {
  source               = "./modules/solace"
  solace_semp_url      = var.solace_semp_host
  solace_vpn_name      = var.solace_vpn
  solace_semp_username = var.solace_semp_username
  solace_semp_password = var.solace_semp_password
  partition_count      = var.solace_queue_partition_count
  queue_name           = var.solace_queue
}