DISCLAIMER: This tutorial is currently under development by the Dynatrace Innovation Lab! If you have any questions please get in contact with me (@grabnerandi)!
Goal: Update your current "Deployment Automation" to automatically roll out Dynatrace full-stack monitoring across your infrastructure. Every host, app and service that gets deployed will automatically - without any additional configuration - deliver key metrics for the individual stakeholders: dev, architects, operations and business.
In order to achieve that goal we will learn how to:
- Automate the installation of OneAgent, e.g. via CloudFormation, SaltStack, Terraform, ...
- Pass additional metadata to your hosts and processes via hostautotag.conf, DT_TAGS, DT_CUSTOM_PROP, ...
- Ensure proper PG (process group), PGI (process group instance) and service detection via detection rules
- Push additional deployment & configuration change event details to Dynatrace to enrich the AI's data set
This tutorial comes with an AWS CloudFormation template that creates an EC2 Linux instance and offers a set of options to simulate different "stages" of our "Monitoring as a Service" maturity level:
- Install OneAgent? If YES - will automatically roll out OneAgent on the EC2 machine
- Install Dynatrace CLI? If YES - will download Dynatrace CLI in a dockerized version for local use
- Download this GitHub repo? If YES - will download all content of this GitHub repo (only works if this GitHub repo is publicly accessible)
The GitHub repo itself includes the following additional components
- Node.js-based Web Application called "frontend-app"
- Node.js-based Backend Service called "backend-service"
- NGINX Load Balancer in front of one or more "frontend-app" instances
- JMeter load generation script
To walk through all labs of this tutorial you will need:
- An AWS Account: Don't have one? Register via http://aws.amazon.com
- A Dynatrace SaaS Account: Don't have one? Register via http://bit.ly/dtsaastrial
- Clone or download this GitHub repo to your local machine
- In AWS you need an EC2 KeyPair in your default region
- In Dynatrace you need to create an API Token
- In Dynatrace set up AWS CloudWatch Monitoring
- FORK or IMPORT https://github.com/dynatrace-innovationlab/dynatrace-monitoring-as-service.git into your own GitHub project if you want to walk through Labs 3 and beyond! These require some code modifications that we will commit back to GitHub
Best Practice: For most of the labs we need to provide the following input values:
- Dynatrace SaaS Url
- Dynatrace OneAgent Download URL
- Dynatrace API Token
- Your GitHub Repo link
Make sure you have these values at hand, e.g. copy them into a text file or a small shell snippet like the one sketched below so they are easily accessible.
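For example, a tiny file you can source before each lab could look like this (a sketch only - the variable names are made up for convenience and all values are placeholders you replace with your own):

# lab-env.sh - placeholder values, replace with your own
export DT_TENANT_URL="https://<your-environment-id>.live.dynatrace.com"    # Dynatrace SaaS Url
export DT_ONEAGENT_URL="<your-oneagent-download-url>"                      # OneAgent Download URL
export DT_API_TOKEN="<your-api-token>"                                     # Dynatrace API Token
export GITHUB_REPO="https://github.com/<your-account>/dynatrace-monitoring-as-service.git"

Load it into your shell via: source lab-env.sh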
ATTENTION: You can skip this lab in case you want to go straight to installing OneAgents. You should still read through the lab description as it explains how to build and launch the individual parts of our application.
Step 1: Create Stack
- Login to your AWS Account
- Make sure you have an AWS EC2 KeyPair
- Go to CloudFormation and create a new stack with the name "DTMaaSLab1" by uploading DynatraceMonitoringAsAServiceCFStack.json
- Select "Yes" on "Step 0: Download dynatrace-monitoring-as-service Github" (keep the rest NO)
- If you have created your own GitHub repo then provide the link - otherwise go with the default
- Fill in the 3 Dynatrace values even though they are not yet needed
- Walk through to the end of the wizard - make sure to check "I acknowledge ..." on the last step before creating the stack!
Step 2: Launch app
- SSH into machine
- ./run_frontend2builds_clustered.sh
This will run one instance of the backend-service, 4 instances of the frontend-app (2 running Build #1, 2 running Build #2) as well as the NGINX-based frontend load balancer.
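To double-check that everything came up, you can list the running containers (the launch scripts are Docker-based):

# list all running containers with their image names and port mappings
sudo docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Ports}}"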
Step 3: Browse the app
- Open your browser and navigate to the public IP or DNS of your EC2 machine. You should see the web interface and can interact with its services!
Step 4: Learn how to redeploy app
- ./stop_frontend_clustered.sh will STOP ALL components (load balancer, frontend & backend)
- ./stop_docker.sh in the subdirectories allows for stopping each tier independently.
- ./frontend-app/run_docker.sh allows for launching a specific build with a given instance count and bind port
- ./frontend-loadbalancer/run_docker.sh allows for launching NGINX for a number of available frontend-app instances
Before launching frontend, backend or frontend-loadbalancer, have a look at the run_docker.sh files for usage information!
Summary: What do we have? We have a distributed application that we can access. As for monitoring: the only level of monitoring you have at the moment is through the Dynatrace AWS CloudWatch integration, which gives you basic infrastructure metrics for your EC2 instance!
In this lab we learn how to automate the installation of the Dynatrace OneAgent, how to pass in additional metadata about the host and how OneAgent automatically monitors all our processes, containers, services, applications and end users! We also learn about tagging and how to organize your entities. We highly recommend reading Best practices on organizing monitored entities.
Step 1: Create stack
- Either keep or delete the old stack from Lab 1
- Create a new stack based on the same CF Template and call it "DTMaaSLab2"
- Select YES to install OneAgent and Dynatrace CLI
- Keep the rest the same as before
Step 2: Launch app: Just as in Lab 1
Step 3: Browse the app: Just as in Lab 1
Step 4: Execute some load
We have a JMeter script ready that executes constant load against the app. Here are the steps to kick it off:
- cd jmeter-as-container
- ./build_docker.sh
- ./quicklaunch.sh <public DNS of your EC2 instance>, e.g. ./quicklaunch.sh ec2-11-222-33-44.compute-1.amazonaws.com
This executes the script scripts/SampleNodeJsServiceTest.jmx. It simulates 10 concurrent users and the test will run until you call "./stop.test.sh". Once stopped, you get the JMeter Result Dashboard in the local results.zip!
Step 5: Explore automated monitoring result and how it works!
Here are a couple of things that happened "automagically" due to the auto installation of OneAgent through this part of the CloudFormation Script.
How do we install the OneAgent? How do we pass hostautotag.conf? Let's first look at the script so you can replicate this in your own scripts (CloudFormation, Terraform, SaltStack, ...):
"wget --no-check-certificate -O Dynatrace-OneAgent-Linux.sh \"",
{
"Ref": "DynatraceOneAgentURL"
},
"\"\n",
"echo 'Ready to install & configure tags for OneAgent:'\n",
"mkdir -p /var/lib/dynatrace/oneagent/agent/config\n",
"cd /var/lib/dynatrace/oneagent/agent/config\n",
"echo \"MaaSHost StackName=",
{
"Ref" : "AWS::StackName"
},
" ",
{
"Ref" : "DynatraceCustomHostTags"
},
"\" > hostautotag.conf\n",
"cd /\n",
"sudo /bin/sh Dynatrace-OneAgent-Linux.sh APP_LOG_CONTENT_ACCESS=1\n"
This will result in an automatically monitored host that should look similar to this, including all tags from hostautotag.conf and all AWS tags that came in through the AWS CloudWatch integration:
How were the individual processes detected? How about Process Groups? By default, Dynatrace groups similar processes into a Process Group. In our case we get a Process Group (PG) for each individual Docker image, e.g. frontend-app, backend-app, frontend-loadbalancer, as this is the default behavior!
If we run multiple instances of the same process or container image, Dynatrace will group them all into a single Process Group Instance (PGI). In our case that means that we will see ONE PGI for frontend-app, ONE for backend-app and ONE for frontend-loadbalancer. The fact that we have multiple instances of the same container on the same host doesn't give us individual PGIs. That is the default behavior! We have ways to change that behavior through Process Group Detection rules or by using some of the DT_ environment variables. We will use this later on to get different PGIs for the different simulated builds of our frontend service, e.g. PGI for Build 1, Build 2, ... - for now we go with the default!
Lab Lessons Learned
- Deploying OneAgent will automatically enable FullStack Monitoring
- hostautotag.conf will automatically push custom tags to the host entity
- Process Group (PG) and Process Group Instance (PGI) are automatically detected for each docker image
In this lab we learn which metadata is captured automatically, how to pass custom metadata and how we can use this metadata to influence process group detection and automated tagging!
The OneAgent automatically captures a lot of metadata for each process, which is propagated to the Process Group Instance and the Process Group itself, e.g. Technology, JVM Version, Docker Image, Kubernetes pod names, service version number, ...
Add custom metadata: We can add additional metadata to every process via environment variables such as DT_CUSTOM_PROP, DT_TAGS, ...
Which additional metadata should we pass? It depends on your environment, but here are some ideas: Build Number, Version Number, Team Ownership, Type of Service, ...
Using Metadata (How and Use Cases): We can use custom and existing metadata, e.g. Java properties, environment variables or process properties, to influence Process Group Detection as well as rule-based tagging!
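As an illustration, here is how such metadata could be passed when launching one of our containers by hand (image name, tag values and port are examples only - the actual launch commands live in the run_docker.sh scripts):

# pass custom properties and tags to the process via Dynatrace environment variables
docker run -d \
  -e DT_CUSTOM_PROP="SERVICE_TYPE=FRONTEND BUILD_NUMBER=1" \
  -e DT_TAGS="frontend-team" \
  -p 3001:80 \
  frontend-app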
Step 1: Pass metadata via custom environment variables
- Edit frontend-app/run_docker.sh
- Change the comments to use the launch process labeled Step 1 (make sure the other lines are commented)
- Let's restart our app via ../stop_frontend_clustered.sh and then ../run_frontend2builds_clustered.sh
Looking at our Process Groups now shows us the additional metadata and the automated tags!
Step 2: Influence PGI Detection to detect each Build as a separate PGI
- Edit frontend-app/run_docker.sh
- Change the comments to use the launch process labeled Step 2 (make sure the other lines are commented)
- Let's restart our app via ../stop_frontend_clustered.sh and then ../run_frontend2builds_clustered.sh
The difference with this launch process is that we pass the BUILD_NUMBER as DT_NODE_ID. This changes the default Process Group Instance detection mechanism and every docker instance, even if it comes from the same docker image, will be split into its own PGI. Note: Kubernetes, OpenShift, CloudFoundry, ...: For these platforms the OneAgent automatically detects containers running in different pods, spaces or projects. There should be no need to leverage DT_NODE_ID to separate your container instances.
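Conceptually the change amounts to something like the following (again just a sketch - the real commands and build handling are in frontend-app/run_docker.sh):

# setting DT_NODE_ID per build splits instances of the same image into separate PGIs
docker run -d \
  -e DT_NODE_ID="$BUILD_NUMBER" \
  -e DT_CUSTOM_PROP="SERVICE_TYPE=FRONTEND BUILD_NUMBER=$BUILD_NUMBER" \
  frontend-app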
Lab Lessons Learned
- How to pass additional metadata to a process group
- How to influence process group and process group instance detection
In this lab we learn how to automatically apply tags on the service level. This allows you to query service-level metrics (Response Time, Failure Rate, Throughput, ...) automatically based on metadata that you have passed during a deployment, e.g. Service Type (Frontend, Backend, ...), Deployment Stage (Dev, Test, Staging, Prod, ...)
In order to tag services we leverage Automated Service Tag Rules. In our lab we want Dynatrace to create a new service-level tag with the name "SERVICE_TYPE". It should only apply the tag IF the underlying Process Group has the custom metadata property "SERVICE_TYPE". If that is the case we also want to take that value and apply it as the tag value for "SERVICE_TYPE".
Step 1: Create Service tag rule
- Go to Settings -> Tags -> Automatically applied tags
- Create a new Tag with the name "SERVICE_TYPE"
- Edit that tag and create a new rule:
  - Rule applies to: Services
  - Optional tag value: {ProcessGroup:Environment:SERVICE_TYPE}
  - Condition: "Process group properties -> SERVICE_TYPE" "exists"
- Click on Preview to validate rule works
- Click on Save for the rule and then "Done"
Here is the screenshot that shows that rule definition!
Step 2: Search for Services with Tag
It will take about 30s until the tags are automatically applied to the services. So let's test it:
- Go to Transaction & services
- Click in "Filtered by" edit field
- Select "ServiceType" and select "Frontend"
- You should see your service! Open it up!
Step 3: Create Tagging Rule for Environment
Define a service-level tagging rule for a tag called "Environment". Extract the tag value from the Process Group's custom metadata property "Environment". Make sure to only apply this rule if ProcessGroup:Environment exists!
Lab Lesson Learned
- Create automated tagging rules to apply tags to services
Passing metadata is one way to enrich the metadata in Smartscape and the automated PG, PGI and service detection and tagging. In addition to metadata we can also push deployment and configuration change events to these Dynatrace entities.
The Dynatrace Event API provides a way to either push a Custom Annotation or a Custom Deployment Event to a list of entities or entities that match certain tags. More on the Dynatrace Event API can be found here.
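In its simplest form such a push is a single REST call against the Event API. The sketch below sends a CUSTOM_DEPLOYMENT event to all hosts carrying the MaaSHost tag we wrote via hostautotag.conf (tenant, token and deployment values are placeholders):

# push a custom deployment event to all HOST entities with a given tag
curl -X POST "https://<your-environment-id>.live.dynatrace.com/api/v1/events" \
  -H "Authorization: Api-Token <your-api-token>" \
  -H "Content-Type: application/json" \
  -d '{
        "eventType": "CUSTOM_DEPLOYMENT",
        "deploymentName": "MaaS Tutorial Deployment",
        "deploymentVersion": "1",
        "source": "Deployment Automation",
        "attachRules": {
          "tagRule": [{
            "meTypes": ["HOST"],
            "tags": [{ "context": "CONTEXTLESS", "key": "MaaSHost" }]
          }]
        }
      }'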
The Dynatrace CLI also implements a dtcli evt push option as well as an option that is part of "Monitoring as Code" (Monspec). This is what we are going to use in our lab. We already have a pre-configured monspec.json file available that contains the definition of how our host can be identified in Dynatrace.
Step 1: Push host deployment information
- cat ./monspec/monspec.json
- Explore the entry "MaaSHost". You will see that it contains sub elements that define how to detect that host in Dynatrace in our Lab2
- Execute ./pushhostdeploy.sh
- Open the host details view in Dynatrace and check the events view
What just happened? The Dynatrace CLI was called with the monspec and the pipelineinfo.json as parameters. One additional parameter was MaaSHost/Lab2. This told the CLI to look up this configuration section in monspec.json and then push a custom deployment event to those Dynatrace HOST entities that have that particular tag (Environment=MaaSHost) on them. Such an event in Dynatrace can have an arbitrary list of name/value pair properties. The CLI automatically pushes some of the information from monspec, e.g. Owner, as well as some information from the pipelineinfo.json file to Dynatrace!
Step 2: Push service deployment information
- cat ./monspec/monspec.json
- Explore the entry "FrontendApp". You will see similar data as for our host. But now it's a SERVICE and we use our SERVICE_TYPE tag to identify it
- Execute ./pushservicedeploy.sh
- Open the service details view for your FrontendApp service
Lab Lessons Learned
- The concept of the "Monitoring as Code" monspec.json file
- Push deployment information to entities defined in monspec.json
Management Zones allow us to define who is going to see and who has access to what type of FullStack data. There are many ways to slice your environment - it will depend on your organizational structure & processes. In our tutorial we can assume that we have the following teams:
- a Frontend and a Backend Team (responsible for any Node.js services)
- a Dev Team responsible for the whole Development Environment
- an Architecture Team responsible for Development & Staging
- an Operations Team responsible for all Infrastructure (=all Hosts)
- a Business Team responsible for all applications
Let's create Management Zones that will give each team access to the data they are supposed to see!
Step 1: Create Management Zone for Frontend & Backend
- Go to Settings -> Preferences -> Management Zones
- Create a Zone named "Frontend Services"
- Add a new rule for "Services"
- Define a condition for SERVICE_TYPE=FRONTEND
- Select "Apply to underlying process groups of matching services"
- Add a second rule for SERVICE_TYPE=BACKEND
- Save and test the Management Zone in the Smartscape View
Step 2: Create Management Zone for Dev Team
Create a Zone that shows ALL entities that are tagged with Environment=Development
Step 3: Create Management Zone for Architect Team
Create a Zone that shows ALL entities that are tagged with Environment=Development or Environment=Staging
Step 4: Create Management Zone for Operations
Create a Zone for all HOSTS & Processes.
Step 5: Create Management Zone for Business
Create a Zone that covers all Web Applications
Lab Lessons Learned
- Management Zones allow us to create filters on all FullStack monitoring data
- Management Zones can also be used for access control!
The Dynatrace REST API provides easy access to Smartscape (=Topology Data) as well as Timeseries (=Performance Metrics). Basically everything we see in the Dynatrace UI can be queried and accessed via the API. This allows us to answer questions such as the following (see the example calls after this list):
- What's the response time of my backend-service?
- How many users access our application through the browser?
- On how many PGIs does our frontend-app service run?
- How many service dependencies does our frontend-app have?
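For example, the response-time question maps to a Timeseries API call and the PGI/dependency questions to Smartscape entity queries. Here is a sketch with placeholder tenant, token and entity selection:

# average service response time over the last hour (built-in timeseries)
curl -H "Authorization: Api-Token <your-api-token>" \
  "https://<your-environment-id>.live.dynatrace.com/api/v1/timeseries/com.dynatrace.builtin:service.responsetime?includeData=true&relativeTime=hour&aggregationType=AVG"

# Smartscape: list service entities including their relationships (e.g. which PGIs they run on)
curl -H "Authorization: Api-Token <your-api-token>" \
  "https://<your-environment-id>.live.dynatrace.com/api/v1/entity/services"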
Monitoring as Code
To make querying key metrics of certain entities and services easier, we extend our "Monitoring as Code" file (=monspec.json) with a list of key timeseries and Smartscape metrics that we want to analyze. In monspec.json we can define these metrics in the "perfsignature" (Performance Signature) config element. Our tutorial already has some of these metrics pre-defined for our host and the frontend-app service.
Monspec in Dynatrace CLI
The Dynatrace CLI implements a couple of use cases to pull, compare or even pull & push data for one of our monitored entities in a specific environment. In this tutorial we have a helper shell script called pullmonspec.sh which we can use to pull all "perfsignature" metrics for a specific entity in a certain environment, e.g. MaaSHost/Lab2 would pull all metrics defined for MaaSHost and would do it for the host running in the Lab2 environment!
Step 1: Explore perfsignature list in monspec.json
- cat ./monspec/monspec.json
- Explore the list of metrics and see how to define new ones
Step 2: Pull these metrics from Dynatrace via the Dynatrace-CLI
- ./pullmonspec.sh MaaSHost/Lab2
- ./pullmonspec.sh FrontendApp/Dev
Lab Lessons Learned
- How the "Monitoring as Code" monspec.json file allows us to define key performance & Smartscape metrics (=Performance Signature)
- How to pull the Performance Signature metrics in an automated way
Now that we have everything correctly set up and configured for our first environment, let's do the same thing for a second environment:
Step 1: Create second environment, e.g. Staging
- Create a new stack based on the same CF Template and call it "DTMaaSLab8"
- Select YES to install OneAgent and Dynatrace CLI
- For "DynatraceCustomHostProperties" specify "Environment=Staging"
- Let's create the stack and explore what we see in Dynatrace
Lab Lessons Learned
- With all automation in place Dynatrace correctly tags all entities
- Dashboards and Management Zones work based on infrastructure-as-code metadata
The last lab is about setting up your problem notification integration with your ChatOps or other incident management tools.
In addition to automated Real User Monitoring (RUM), where Dynatrace automatically alerts on problems impacting real user experience, we can also set up specific synthetic tests to validate availability and performance for our key business transactions and service endpoints.
Step 1: Create Synthetic Monitor for Staging
- Go to Synthetic
- Create a new Synthetic Test for our Staging Web Site
- Validate Synthetic Test is producing results
- Shut down the Staging Frontend-Loadbalancer to simulate an Availability Issue
- Create a Notification Integration that sends an alert to Business
- Restart Staging Frontend-Loadbalancer
- Simulate another issue and validate notification works!
Lab Lesson Learned
- Dynatrace can alert Business on Availability or Performance Issues with key business transactions
Dynatrace provides rich dashboarding capabilities. As a last step we want to create default dashboards for development, business and operations.