
# container

## Running the latest container image from the GitHub registry

A new container image is automatically built when a new package version is released. The image is available at ghcr.io/eduardocerqueira/s3-pull-processor:latest.

```shell
# pull the latest container image
docker pull ghcr.io/eduardocerqueira/s3-pull-processor:latest

# run the container with no arguments
docker run --name poc-s3 ghcr.io/eduardocerqueira/s3-pull-processor s3-pull-processor
```
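Re-running the command above will fail if a container named `poc-s3` already exists, because `docker run --name` requires a free name. A minimal wrapper could handle that; this is a hedged sketch (the `run_latest` function name is illustrative, not a project script):

```shell
#!/bin/sh
# Hedged helper sketch: pull the latest image and (re)create the poc-s3 container.
IMAGE="ghcr.io/eduardocerqueira/s3-pull-processor:latest"

run_latest() {
  docker pull "$IMAGE"
  # remove any previous container with the same name so --name does not clash
  docker rm -f poc-s3 2>/dev/null || true
  docker run --name poc-s3 "$IMAGE" s3-pull-processor
}
```

Call `run_latest` to pull and start a fresh container in one step.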

See the example below for running the e2e scenario.

## Building your own image

```shell
# build the container image
sh ops/scripts/container_image_build.sh

# run the container with no arguments
docker run --name poc-s3 s3-pull-processor
```
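A locally built image only exists on your machine. To share it, you can tag and push it to your own registry; a hedged sketch, where `<your-namespace>` is a placeholder you replace with your own ghcr.io account:

```shell
#!/bin/sh
# Hedged sketch: publish the locally built image to your own registry.
# REGISTRY is an assumption; replace <your-namespace> with your ghcr.io account.
REGISTRY="ghcr.io/<your-namespace>"

publish_image() {
  # give the local image a registry-qualified tag, then push it
  docker tag s3-pull-processor "$REGISTRY/s3-pull-processor:latest"
  docker push "$REGISTRY/s3-pull-processor:latest"
}
```

Pushing to ghcr.io also requires an authenticated `docker login ghcr.io` beforehand.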

See the example below for running the e2e scenario.

## OCP/K8s

See s3-pull-processor running in K8s/OpenShift.

## Example running from a container

This simulates HOST-A uploading an artifact to S3, and HOST-B pulling the data from S3 and SQS and processing it.

Notice the `-f` argument passed to the command run on HOST-A; it creates a fake artifact file for testing.

```shell
# HOST-A
docker run -e "AWS_ACCESS_KEY_ID=*************" \
  -e "AWS_SECRET_ACCESS_KEY=*************" \
  -e "AWS_DEFAULT_REGION=*************" \
  -e "AWS_S3_SECURE_CONNECTION=*************" \
  --name host-a s3-pull-processor s3-pull-processor upload -p /tmp/fake-artifact-file.tar -f

# HOST-B
docker run -e "AWS_ACCESS_KEY_ID=*************" \
  -e "AWS_SECRET_ACCESS_KEY=*************" \
  -e "AWS_DEFAULT_REGION=*************" \
  -e "AWS_S3_SECURE_CONNECTION=*************" \
  --name host-b s3-pull-processor s3-pull-processor pull
```
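Once both containers have finished, their output can be inspected and the containers removed with standard docker commands. A hedged sketch, assuming the `host-a`/`host-b` names from the example above:

```shell
#!/bin/sh
# Hedged sketch: inspect and clean up the HOST-A/HOST-B demo containers.
inspect_demo() {
  docker logs host-a          # upload side: the fake artifact being pushed to S3
  docker logs host-b          # pull side: the SQS message and the S3 download
  docker rm host-a host-b     # remove the stopped containers
}
```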

*(demo_container: recorded demo)*

## links

- ghcr.io