Camel K: Event Streaming Example

This demo uses several features of Camel K, Kafka, and OpenShift to implement a system that handles different types of hazard and alert information.

Before you begin

Make sure you check out this repository from git and open it with VS Code.

Instructions are based on VSCode Didact, so make sure it's installed from the VSCode extensions marketplace.

From the VSCode UI, right-click on the readme.didact.md file and select "Didact: Start Didact tutorial from File". A new Didact tab will be opened in VS Code.

Make sure you've opened this readme file with Didact before jumping to the next section.

Preparing the cluster

This example can be run on any OpenShift 4.3+ cluster or a local development instance (such as CRC). Ensure that you have a cluster available and login to it using the OpenShift oc command line tool.

You need to create a new project named camel-k-event-streaming for running this example. This can be done directly from the OpenShift web console or by executing the command oc new-project camel-k-event-streaming on a terminal window.

You need to install the Camel K operator in the camel-k-event-streaming project. To do so, go to the OpenShift 4.x web console, login with a cluster admin account and use the OperatorHub menu item on the left to find and install "Red Hat Integration - Camel K". You will be given the option to install it globally on the cluster or on a specific namespace. If using a specific namespace, make sure you select the camel-k-event-streaming project from the dropdown list. This completes the installation of the Camel K operator (it may take a couple of minutes).
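If you prefer to drive this step from the command line, the same namespaced installation can be expressed as OLM resources. The following is a minimal, illustrative sketch only; the package name (red-hat-camel-k), channel, and catalog source are assumptions, so check the entry shown in your OperatorHub before applying it:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: camel-k-operator-group
  namespace: camel-k-event-streaming
spec:
  targetNamespaces:
    - camel-k-event-streaming
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: red-hat-camel-k
  namespace: camel-k-event-streaming
spec:
  channel: stable            # assumption: the channel name varies across catalog versions
  name: red-hat-camel-k      # assumption: the package name in the redhat-operators catalog
  source: redhat-operators
  sourceNamespace: openshift-marketplace

Saving both documents to a file and running oc apply -f on it mirrors the namespaced install path described above.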

When the operator is installed, open the OpenShift Help menu ("?") at the top of the web console and go to the "Command Line Tools" page, where you can download the "kamel" CLI that is required for running this example. The CLI must be installed in your system path.

Refer to the "Red Hat Integration - Camel K" documentation for a more detailed explanation of the installation steps for the operator and the CLI.

Installing the AMQ Streams Operator

This example uses AMQ Streams, Red Hat's data streaming platform based on Apache Kafka. We want to install it on a new project named event-streaming-kafka-cluster.

You need to create the event-streaming-kafka-cluster project from the OpenShift web console or by executing the command oc new-project event-streaming-kafka-cluster on a terminal window.

Now, go to the OpenShift 4.x web console and use the OperatorHub menu item on the left-hand side to find and install "Red Hat Integration - AMQ Streams". The operator installation may take a couple of minutes.

Installing the AMQ Broker Operator

The installation of the AMQ Broker follows the same isolation pattern as AMQ Streams: we deploy it in a separate project and instruct the operator to deploy a broker according to the configuration.

You need to create the event-streaming-messaging-broker project from the OpenShift web console or by executing the command oc new-project event-streaming-messaging-broker on a terminal window.

Now, go to the OpenShift 4.x web console and use the OperatorHub menu item on the left-hand side to find and install "Red Hat Integration - AMQ Broker". The operator installation may take a couple of minutes.

Installing OpenShift Serverless

This demo also needs OpenShift Serverless (Knative) installed and working.

Refer to the OpenShift Serverless Documentation for instructions on how to install it on your cluster.

Requirements

Validate all Requirements at Once!

OpenShift CLI ("oc")

The OpenShift CLI tool ("oc") will be used to interact with the OpenShift cluster.

Check if the OpenShift CLI ("oc") is installed{.didact}

Status: unknown{#oc-requirements-status}

Connection to an OpenShift cluster

In order to execute this demo, you will need an OpenShift cluster with the correct access level (the ability to create projects and install operators), as well as the Apache Camel K CLI installed on your local system.

Check if you're connected to an OpenShift cluster{.didact}

Status: unknown{#cluster-requirements-status}

Apache Camel K CLI ("kamel")

Apart from the support provided by the VS Code extension, you also need the Apache Camel K CLI ("kamel") in order to access all Camel K features.

Check if the Apache Camel K CLI ("kamel") is installed{.didact}

Status: unknown{#kamel-requirements-status}

Knative installed on the OpenShift cluster

The cluster also needs to have Knative installed and working.

Check if Knative is installed{.didact}

Status: unknown{#kservice-project-check}

Optional Requirements

The following requirements are optional. They don't prevent the execution of the demo, but may make it easier to follow.

VS Code Extension Pack for Apache Camel

The VS Code Extension Pack for Apache Camel by Red Hat provides a collection of useful tools for Apache Camel K developers, such as code completion and integrated lifecycle management. It is recommended for the tutorial, but not required.

You can install it from the VS Code Extensions marketplace.

Check if the VS Code Extension Pack for Apache Camel by Red Hat is installed{.didact}

Status: unknown{#extension-requirement-status}

Understanding the Demo Scenario

This demo simulates a global hazard alert system. The simulator consumes data from public APIs available on the internet, as well as user-provided data that simulates different types of hazards (e.g. crime, viruses, natural hazards, and others). The system consumes data from the OpenAQ API to query air pollution information, presents it, and warns the user when certain incidents happen.

Diagram

Understanding the Infrastructure

This demo spans three namespaces in the cluster: one hosting the streaming services, one hosting the broker, and a main namespace for all the system applications.

In the streaming namespace, three Kafka topics receive the events (pollution, crime, and timeline). In the broker namespace, two addresses receive the notification and alarm events.

The main namespace is where all the Camel K microservices and functions run.

Diagram

1. Creating the AMQ Streams Cluster

We switch to the event-streaming-kafka-cluster project to create the Kafka cluster:

oc project event-streaming-kafka-cluster

(^ execute{.didact})

The next step is to use the operator to create an AMQ Streams cluster. This can be done with the command:

oc create -f infra/kafka/clusters/event-streaming-cluster.yaml

(^ execute{.didact})
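The actual cluster definition lives in infra/kafka/clusters/event-streaming-cluster.yaml. As a rough sketch of what such a Kafka custom resource contains (the API version, listener syntax, and sizing depend on your AMQ Streams release, so treat this as illustrative):

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: event-streaming-kafka-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092          # plain listener matching the bootstrap address used later
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}       # reconciles the KafkaTopic resources created below

The metadata.name is what produces the event-streaming-kafka-cluster-kafka-bootstrap service referenced later in application.properties.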

Depending on how large your OpenShift cluster is, this may take a little while to complete. Run the following command to wait until the cluster is up and running:

oc wait kafka/event-streaming-kafka-cluster --for=condition=Ready --timeout=600s

(^ execute{.didact})

You can check the state of the cluster by running the following command:

oc get kafkas -n event-streaming-kafka-cluster event-streaming-kafka-cluster

(^ execute{.didact})

Once the AMQ Streams cluster is created, we can proceed to the creation of the AMQ Streams topics:

oc apply -f infra/kafka/clusters/topics/

(^ execute{.didact})
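Each manifest in that directory declares one topic as a KafkaTopic custom resource, which the topic operator reconciles into the cluster. A hedged sketch of what one of them may look like (the partition and replica counts here are illustrative):

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: pollution
  labels:
    strimzi.io/cluster: event-streaming-kafka-cluster   # binds the topic to the cluster above
spec:
  partitions: 1
  replicas: 1

You can then list the topics that were created: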

oc get kafkatopics

(^ execute{.didact})

At this point, if all goes well, we should see our AMQ Streams cluster up and running with several topics.

2. Creating the AMQ Broker Cluster

To switch to the event-streaming-messaging-broker project, run the following command:

oc project event-streaming-messaging-broker

(^ execute{.didact})

With the operator already installed and running in the project, we can proceed to create the broker instance:

oc create -f infra/messaging/broker/instances/amq-broker-instance.yaml

(^ execute{.didact})
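The file declares the broker as an ActiveMQArtemis custom resource. As an illustrative sketch only (the API version and acceptor fields differ across AMQ Broker operator releases):

apiVersion: broker.amq.io/v2alpha4
kind: ActiveMQArtemis
metadata:
  name: broker               # assumption: this name yields services such as broker-hdls-svc, used later
spec:
  deploymentPlan:
    size: 1
  acceptors:
    - name: all
      port: 61616            # the port referenced by messaging.broker.url
      protocols: all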

We can use the oc get activemqartemises command to check if the AMQ Broker instance was created:

oc get activemqartemises

(^ execute{.didact})

If it was successfully created, then we can create the addresses and queues required for the demo to run:

oc apply -f infra/messaging/broker/instances/addresses

(^ execute{.didact})
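Each manifest in that directory defines one address/queue pair as an ActiveMQArtemisAddress resource. A hedged sketch for the notification address (the alarm one is analogous):

apiVersion: broker.amq.io/v2alpha2
kind: ActiveMQArtemisAddress
metadata:
  name: notification
spec:
  addressName: notification
  queueName: notification
  routingType: anycast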

3. Deploying the Project

Now that the infrastructure is ready, we can go ahead and deploy the demo project. First, let's switch to the main project:

oc project camel-k-event-streaming

(^ execute{.didact})

Next, prepare the project for running integrations, pointing Camel K at the Red Hat Maven repository:

kamel install --skip-operator-setup --skip-cluster-setup --maven-repository https://maven.repository.redhat.com/ga

(^ execute{.didact})

Initial Configuration

Most of the components of the demo use the ./config/application.properties{.didact} file to read the configuration they need to run. This file already comes with sensible defaults, so no action should be needed.

Optional: Configuration Adjustments

Note: you can skip this step if you don't want to adjust the configuration

In case you need to adjust the configuration, the following two commands print information that is required to configure the deployment:

oc get services -n event-streaming-messaging-broker

(^ execute{.didact})

oc get services -n event-streaming-kafka-cluster

(^ execute{.didact})

They provide the addresses of the services running on the cluster, which can be used to fill in the values in the properties file.

We start by opening the file ./config/application.properties{.didact} and editing the parameters. The content needs to be adjusted to point to the correct addresses of the brokers. It should be similar to this:

kafka.bootstrap.address=event-streaming-kafka-cluster-kafka-bootstrap.event-streaming-kafka-cluster:9092
messaging.broker.url=tcp://broker-hdls-svc.event-streaming-messaging-broker:61616

Creating the Secret

One of the components simulates receiving data from users and, in order to do so, authenticates them. Because we normally don't want credentials to be easily accessible, it simulates access control by reading a secret.

We can push the secret to the cluster using the following command:

oc create secret generic example-event-streaming-user-reporting --from-file config/application.properties

(^ execute{.didact})

With this configuration secret created on the cluster, we have completed the initial steps to get the demo running.

Pollution Report

Diagram

Running the OpenAQ Consumer

Now we will deploy the first component of the demo: ./openaq-consumer/OpenAQConsumer.java{.didact}

kamel run openaq-consumer/OpenAQConsumer.java model/pollution/* --name open-aq-consumer

(^ execute{.didact})

Details: this starts an integration that consumes data from the OpenAQ API, splits each record, and sends them to our AMQ Streams instance. The address of the AMQ Streams broker is stored in the example-event-streaming secret, which is injected into the demo code and used to reach the instance.

Running the Pollution Bridge

This service consumes the pollution events and sends them to the timeline topic for consumption, and also sends notifications and alarms to the AMQ broker.

kamel run event-bridge/PollutionBridge.java model/common/Alert.java model/pollution/PollutionData.java --name pollution-bridge

(^ execute{.didact})

Running the Timeline Bridge

This service consumes all the events and makes them available to the web front-end for display:

kamel run event-bridge/TimelineBridge.java

(^ execute{.didact})

Running the Front-end

This web front-end queries the timeline bridge service and displays the events collected so far. We will use the OpenShift build service to build a container with the front-end and run it on the cluster.

The front-end image leverages the official Apache Httpd 2.4 image from Red Hat's container registry.

We can proceed to create the build configuration and start the build within the OpenShift cluster. The following command replaces the URL for the timeline API in the JavaScript code and launches an image build.

URL=$(oc get ksvc timeline-bridge -o 'jsonpath={.status.url}') ; cat ./front-end/Dockerfile | oc new-build --docker-image="registry.access.redhat.com/rhscl/httpd-24-rhel7:latest" --to=front-end --build-arg="URL=$URL" -D -

(^ execute{.didact})

With the build complete, we can go ahead and create a deployment for the front-end:

oc apply -f front-end/front-end.yaml

(^ execute{.didact})
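The manifest in front-end/front-end.yaml carries the actual resources; conceptually it boils down to a Deployment (plus a Service and a Route) running the image built above. The sketch below is an assumption-laden illustration, not the file's literal contents:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-end
spec:
  replicas: 1
  selector:
    matchLabels:
      app: front-end
  template:
    metadata:
      labels:
        app: front-end
    spec:
      containers:
        - name: front-end
          # assumption: the front-end image produced by the build, resolved via the internal registry
          image: image-registry.openshift-image-registry.svc:5000/camel-k-event-streaming/front-end:latest
          ports:
            - containerPort: 8080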

Now, let's get the front-end URL so that we can open it on the browser. To find the public API for the service, we can run the following command:

oc get routes front-end-external -o 'jsonpath={.spec.port.targetPort}://{.spec.host}'

(^ execute{.didact})

Open this URL in the browser to access the front-end; you should see the pollution events showing up. Entries with a PM value above 25 are displayed in red, and the rest in yellow.

Diagram

Crime Report

Diagram

Running the GateKeeper

This service leverages Knative eventing channels to operate. Therefore, we need to create them on the OpenShift cluster. To do so, we can execute the following command:

oc apply -f infra/knative/channels/audit-channel.yaml

(^ execute{.didact})
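That manifest declares the Knative messaging channel the gatekeeper subscribes to. A minimal sketch, assuming the default channel implementation and a channel named audit (the real name is in infra/knative/channels/audit-channel.yaml):

apiVersion: messaging.knative.dev/v1
kind: Channel
metadata:
  name: audit                # assumption: the actual channel name is defined in the manifest
  namespace: camel-k-event-streaming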

The Gatekeeper service{.didact} simulates a service that is used to audit accesses to the system. It leverages the Knative support from Camel K.

kamel run audit-gatekeeper/GateKeeper.java

(^ execute{.didact})

Running the Event Report System

The Event Report System{.didact} simulates a service that is used to receive user-generated reports on the system. It receives events sent by the user and forwards them to the AMQ Streams instance. To run this component, execute the following command:

kamel run user-report-system/EventReportSystem.java model/common/Data.java --name event-report-system

(^ execute{.didact})

Running the Crime Bridge

Similar to the pollution bridge, this service consumes the crime events and publishes them to the timeline topic:

kamel run event-bridge/CrimeBridge.java model/common/* --name crime-bridge

(^ execute{.didact})

Reporting a Crime Incident from a Cloud Event

Now that we have launched all the services, let's report a crime incident by sending a cloud event to the Event Report System. First, retrieve the URL of the service:

URL=$(oc get ksvc event-report-system -o 'jsonpath={.status.url}')

(^ execute{.didact})

curl --location --request POST $URL --header 'ce-specversion: 1.0' --header 'ce-source: police-center' --header 'ce-type: hazard-event' --header 'ce-id: crime-123abc' --header 'Content-Type: application/json' --data-raw '{ "user": { "name": "user1" }, "report": { "type": "crime", "alert": "true", "measurement": "crime", "location": "20 W 34th St, New York" } }'

(^ execute{.didact})

Diagram

4. Uninstall

To clean up everything, execute the following command:

oc delete project camel-k-event-streaming event-streaming-messaging-broker event-streaming-kafka-cluster

(^ execute{.didact})
