title: Data Center App Performance Toolkit User Guide For Crowd
platform: platform
product: marketplace
category: devguide
subcategory: build
date: 2023-08-15

Data Center App Performance Toolkit User Guide For Crowd

This document walks you through the process of testing your app on Crowd using the Data Center App Performance Toolkit. These instructions focus on producing the required performance and scale benchmarks for your Data Center app.

In this document, we cover the use of the Data Center App Performance Toolkit on an enterprise-scale environment.

Enterprise-scale environment: a Crowd Data Center environment used to generate Data Center App Performance Toolkit test results for the Marketplace approval process. Preferably, use the Terraform-based AWS deployment described below with the prescribed parameters. These parameters provision larger, more powerful infrastructure for your Crowd Data Center.

  1. Set up an enterprise-scale Crowd Data Center environment on k8s.
  2. App-specific actions development.
  3. Set up an execution environment for the toolkit.
  4. Run the test scenarios from the execution environment against the enterprise-scale Crowd Data Center.

1. Set up an enterprise-scale Crowd Data Center environment on k8s

EC2 CPU Limit

The installation of a 4-node Crowd Data Center requires 16 CPU cores. Make sure that the current EC2 CPU limit is set to a higher number of CPU cores. The AWS Service Quotas service shows the limit for All Standard Spot Instance Requests. The applied quota value is the current CPU limit in the specific region.

The limit can be increased by creating an AWS Support ticket. To request the limit increase, fill in the Amazon EC2 Limit increase request form:

| Parameter | Value |
| --- | --- |
| Limit type | EC2 Instances |
| Severity | Urgent business impacting question |
| Region | US East (Ohio) or the specific region the product is going to be deployed in |
| Primary Instance Type | All Standard (A, C, D, H, I, M, R, T, Z) instances |
| Limit | Instance Limit |
| New limit value | The needed limit of CPU Cores |
| Case description | Give a small description of your case |
Select the Contact Option and click the Submit button.

Set up Crowd Data Center with an enterprise-scale dataset on k8s

The process below describes how to install Crowd DC with an enterprise-scale dataset included. This configuration was created specifically for performance testing during the DC app review process.

  1. Create access keys for an IAM user. {{% warning %}} Do not use root user credentials for cluster creation. Instead, create an admin user. {{% /warning %}}

  2. Navigate to the dc-app-performance-toolkit/app/util/k8s folder.

  3. Set the AWS access keys created in step 1 in the aws_envs file (a sample of the file is sketched after this list):

    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY
  4. Set the required variables in the dcapt.tfvars file (see the sketch after this list):

    • environment_name - any name for your environment, e.g. dcapt-crowd
    • products - crowd
    • crowd_license - one-liner of a valid Crowd license without spaces and new-line symbols
    • region - Do not change the default region (us-east-2). If a specific region is required, contact support.
    • instance_types - ["m5.xlarge"]

    {{% note %}} A new trial license can be generated on my.atlassian.com. Use the BX02-9YO1-IN86-LO5G Server ID for generation. {{% /note %}}

  5. From a local terminal (Git Bash terminal for Windows) start the installation (~40 min):

    docker run --pull=always --env-file aws_envs \
    -v "$PWD/dcapt.tfvars:/data-center-terraform/config.tfvars" \
    -v "$PWD/.terraform:/data-center-terraform/.terraform" \
    -v "$PWD/logs:/data-center-terraform/logs" \
    -it atlassianlabs/terraform ./install.sh -c config.tfvars
  6. Copy the product URL from the console output. The product URL should look like http://a1234-54321.us-east-2.elb.amazonaws.com/crowd.
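
For reference, a minimal sketch of the two files edited in steps 3 and 4 is shown below. The credential and license values are placeholders, and the exact set of variables (and whether products is written as a list) should follow the template files shipped in the dc-app-performance-toolkit repository.

    # aws_envs (sketch) - access keys of the admin IAM user created in step 1
    AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
    AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    # dcapt.tfvars (sketch) - only the variables called out in step 4
    environment_name = "dcapt-crowd"
    products         = ["crowd"]
    crowd_license    = "AAAB...one-line-license-key...xyz"
    region           = "us-east-2"
    instance_types   = ["m5.xlarge"]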


Data dimensions and values for an enterprise-scale dataset are listed and described in the following table.

| Data dimensions | Value for an enterprise-scale dataset |
| --- | --- |
| Users | ~100 000 |
| Groups | ~15 |

{{% note %}} All the datasets use the standard admin/admin credentials. {{% /note %}}

Terminate Crowd Data Center

Follow the Terminate development environment instructions.
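
The linked instructions use the same container image as the installation. A hedged sketch of the terminate command, run from the dc-app-performance-toolkit/app/util/k8s folder and assuming the image ships an uninstall.sh counterpart to install.sh:

    docker run --pull=always --env-file aws_envs \
    -v "$PWD/dcapt.tfvars:/data-center-terraform/config.tfvars" \
    -v "$PWD/.terraform:/data-center-terraform/.terraform" \
    -v "$PWD/logs:/data-center-terraform/logs" \
    -it atlassianlabs/terraform ./uninstall.sh -c config.tfvars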


{{% note %}} You are responsible for the cost of the AWS services running during the reference deployment. For more information, go to aws.amazon.com/pricing. {{% /note %}}

To reduce costs, we recommend keeping your deployment up and running only during the performance runs.


2. App-specific actions development

The Data Center App Performance Toolkit has its own set of default JMeter test actions for Crowd Data Center.

App-specific action: an action (performance test) you have to develop to cover the main use cases of your application. The performance test should focus on the common usage of your application, not on covering all possible functionality of your app. For example, the application setup screen or other one-time use cases are out of scope of performance testing.

JMeter app-specific actions development

  1. Set up the local environment for the toolkit using the README.

  2. Check that the crowd.yml file has the correct settings for application_hostname, application_protocol, application_port, application_postfix, etc.

  3. Navigate to the dc-app-performance-toolkit/app folder and run from the virtualenv (as described in dc-app-performance-toolkit/README.md):

    python util/jmeter/start_jmeter_ui.py --app crowd

  4. Open the Crowd thread group and add a new transaction controller.

  5. Open the newly added transaction controller and add new HTTP requests (based on your app use cases) into it.

  6. Run the toolkit locally from the dc-app-performance-toolkit/app folder with the command:

    bzt crowd.yml

    Make sure that the execution is successful (a quick way to check is sketched after this list).
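
A quick way to confirm that the local run finished successfully is sketched below; it simply prints the summary of the newest results folder and assumes the results layout described in the Run 1 section later in this guide (results/crowd/YY-MM-DD-hh-mm-ss).

    cd dc-app-performance-toolkit/app
    # print the run summary from the most recent results folder
    LATEST_RESULTS="$(ls -td results/crowd/*/ | head -1)"
    cat "${LATEST_RESULTS}results_summary.log"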


3. Setting up an execution environment

To generate performance results suitable for the Marketplace approval process, use a dedicated execution environment. This is a separate AWS EC2 instance to run the toolkit from. Running the toolkit from a dedicated instance rather than from a local machine eliminates network fluctuations and guarantees stable CPU and memory performance.

  1. Go to GitHub and create a fork of dc-app-performance-toolkit.
  2. Clone the fork locally, then edit the crowd.yml configuration file. Set the enterprise-scale Crowd Data Center parameters:

{{% warning %}} For security reasons, do not push the real application_hostname, admin_login, and admin_password values to the fork. Instead, set those values directly in the .yml file on the execution environment instance. {{% /warning %}}

 application_hostname: test_crowd_instance.atlassian.com    # Crowd DC hostname without protocol and port e.g. test-crowd.atlassian.com or localhost
 application_protocol: http      # http or https
 application_port: 80            # 80, 443, 8080, 4990, etc
 secure: True                    # Set False to allow insecure connections, e.g. when using self-signed SSL certificate
 application_postfix: /crowd     # Default postfix value for TerraForm deployment url like `http://a1234-54321.us-east-2.elb.amazonaws.com/crowd`
 admin_login: admin
 admin_password: admin
 application_name: crowd
 application_password: 1111
 load_executor: jmeter            
 concurrency: 1000               # number of concurrent threads to authenticate random users
 test_duration: 45m
  3. Push your changes to the forked repository.

  4. Launch an AWS EC2 instance.

    • OS: select from Quick Start Ubuntu Server 22.04 LTS.
    • Instance type: c5.2xlarge
    • Storage size: 30 GiB
  5. Connect to the instance using SSH or the AWS Systems Manager Session Manager.

    ssh -i path_to_pem_file ubuntu@INSTANCE_PUBLIC_IP
  6. Install Docker. Set up Docker to be managed as a non-root user (a sketch of the commands is shown after this list).

  7. Clone the forked repository.
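
A minimal sketch of step 6 on the Ubuntu instance, assuming the docker.io package from the Ubuntu repositories is acceptable (the official Docker documentation describes alternative installation methods):

    # install Docker and let the default ubuntu user run it without sudo
    sudo apt-get update
    sudo apt-get install -y docker.io
    sudo usermod -aG docker ubuntu
    # log out and back in (or run `newgrp docker`) so the group change takes effect, then verify:
    docker ps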

You'll need to run the toolkit for each test scenario in the next section.


4. Running the test scenarios from the execution environment against the enterprise-scale Crowd Data Center

Using the Data Center App Performance Toolkit for performance and scale testing your Data Center app involves two test scenarios:

  • Performance regression
  • Scalability testing

Each scenario will involve multiple test runs. The following subsections explain both in greater detail.

Scenario 1: Performance regression

This scenario helps to identify basic performance issues without a need to spin up a multi-node Crowd DC. Make sure the app does not have any performance impact when it is not exercised.

Run 1 (~50 min)

To receive performance baseline results without an app installed and without app-specific actions (use the code from the master branch):

  1. Use SSH to connect to the execution environment.

  2. Run the toolkit with Docker from the execution environment instance:

    cd dc-app-performance-toolkit
    docker run --pull=always --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt crowd.yml
  3. View the following main results of the run in the dc-app-performance-toolkit/app/results/crowd/YY-MM-DD-hh-mm-ss folder:

    • results_summary.log: detailed run summary
    • results.csv: aggregated .csv file with all actions and timings
    • bzt.log: logs of the Taurus tool execution
    • jmeter.*: logs of the JMeter tool execution

{{% note %}} Review the results_summary.log file under the artifacts directory. Make sure that the overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above. {{% /note %}}

Run 2

To generate performance results with the app installed (still using the master branch):

  1. Run the toolkit with Docker from the execution environment instance:

    cd dc-app-performance-toolkit
    docker run --pull=always --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt crowd.yml

{{% note %}} Review the results_summary.log file under the artifacts directory. Make sure that the overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above. {{% /note %}}

Generating a performance regression report

To generate a performance regression report:

  1. Use SSH to connect to the execution environment.
  2. Install and activate the virtualenv as described in dc-app-performance-toolkit/README.md.
  3. Allow the current user (for the execution environment, the default user is ubuntu) to access the Docker-generated reports:
    sudo chown -R ubuntu:ubuntu /home/ubuntu/dc-app-performance-toolkit/app/results
  4. Navigate to the dc-app-performance-toolkit/app/reports_generation folder.
  5. Edit the performance_profile.yml file (a sketch of the relevant entries is shown after this list):
    • Under runName: "without app", in the fullPath key, insert the full path to the results directory of Run 1.
    • Under runName: "with app", in the fullPath key, insert the full path to the results directory of Run 2.
  6. Run the following command:
    python csv_chart_generator.py performance_profile.yml
  7. In the dc-app-performance-toolkit/app/results/reports/YY-MM-DD-hh-mm-ss folder, view the .csv file (with consolidated scenario results), the .png chart file, and the performance scenario summary report.
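
For orientation, the two edited entries end up looking roughly like the sketch below. Only the runName and fullPath keys are referenced above; the surrounding structure (for example, a top-level runs list) is an assumption here and should follow the performance_profile.yml template already present in the repository.

    # performance_profile.yml (sketch of the two entries from step 5; the top-level layout may differ)
    runs:
      - runName: "without app"
        fullPath: "/home/ubuntu/dc-app-performance-toolkit/app/results/crowd/YY-MM-DD-hh-mm-ss"  # Run 1 results
      - runName: "with app"
        fullPath: "/home/ubuntu/dc-app-performance-toolkit/app/results/crowd/YY-MM-DD-hh-mm-ss"  # Run 2 results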

Analyzing report

Use the scp command to copy the report artifacts from the execution environment to a local drive:

  1. From the local machine terminal (Git Bash terminal for Windows) run the commands:
    export EXEC_ENV_PUBLIC_IP=execution_environment_ec2_instance_public_ip
    scp -r -i path_to_exec_env_pem ubuntu@$EXEC_ENV_PUBLIC_IP:/home/ubuntu/dc-app-performance-toolkit/app/results/reports ./reports
  2. Once completed, in the ./reports folder you will be able to review the action timings with and without your app to see its impact on the performance of the instance. If you see an impact (>20%) on any action timing, we recommend looking into the app implementation to understand the root cause of this delta (the arithmetic behind the threshold is sketched below).
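
The 20% threshold is a simple relative difference between the per-action timings of Run 1 and Run 2. The hypothetical helper below illustrates the arithmetic; the two timing values are made up and should be taken from the consolidated .csv:

    # example values only: an action taking 250 ms without the app and 310 ms with it
    WITHOUT_APP_MS=250
    WITH_APP_MS=310
    awk -v a="$WITHOUT_APP_MS" -v b="$WITH_APP_MS" \
        'BEGIN { printf "impact: %+.1f%%\n", (b - a) / a * 100 }'
    # impact: +24.0% -> above the 20% threshold, worth investigating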

Scenario 2: Scalability testing

The purpose of scalability testing is to reflect the impact on the customer experience when operating across multiple nodes. For this, you have to run scale testing on your app.

For many apps and extensions to Atlassian products, there should not be a significant performance difference between operating on a single node or across many nodes in a Crowd DC deployment. To demonstrate the performance impact of operating your app at scale, we recommend testing your Crowd DC app in a cluster.

Run 3 (~50 min)

To receive scalability benchmark results for one-node Crowd DC with app-specific actions:

  1. Apply the app-specific code changes to a new branch of the forked repo.

  2. Use SSH to connect to the execution environment.

  3. Pull the forked repo branch with the app-specific actions.

  4. Run the toolkit with Docker from the execution environment instance:

    cd dc-app-performance-toolkit
    docker run --pull=always --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt crowd.yml

{{% note %}} Review the results_summary.log file under the artifacts directory. Make sure that the overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above. {{% /note %}}

Run 4 (~50 min)

{{% note %}} Before scaling your DC, make sure that the AWS vCPU limit is not lower than the needed number. Use the AWS Service Quotas service to see the current limit. The EC2 CPU Limit section has instructions on how to increase the limit if needed. {{% /note %}}

To receive scalability benchmark results for two-node Crowd DC with app-specific actions:

  1. Navigate to the dc-app-performance-toolkit/app/util/k8s folder.

  2. Open the dcapt.tfvars file and set the crowd_replica_count value to 2.

  3. From a local terminal (Git Bash terminal for Windows) start scaling (~20 min):

    docker run --pull=always --env-file aws_envs \
    -v "$PWD/dcapt.tfvars:/data-center-terraform/config.tfvars" \
    -v "$PWD/.terraform:/data-center-terraform/.terraform" \
    -v "$PWD/logs:/data-center-terraform/logs" \
    -it atlassianlabs/terraform ./install.sh -c config.tfvars
  4. Use SSH to connect to the execution environment.

  5. Edit the run parameters for the 2-node run. To do it, leave uncommented only the 2-node scenario parameters in the crowd.yml file.

    # 1 node scenario parameters
    # ramp-up: 20s                    # time to spin all concurrent threads
    # total_actions_per_hour: 180000  # number of total JMeter actions per hour
    
    # 2 nodes scenario parameters
    ramp-up: 10s                    # time to spin all concurrent threads
    total_actions_per_hour: 360000  # number of total JMeter actions per hour
    
    # 4 nodes scenario parameters
    # ramp-up: 5s                     # time to spin all concurrent threads
    # total_actions_per_hour: 720000  # number of total JMeter actions per hour
    
  6. Run the toolkit with Docker from the execution environment instance:

    cd dc-app-performance-toolkit
    docker run --pull=always --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt crowd.yml

{{% note %}} Review the results_summary.log file under the artifacts directory. Make sure that the overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above. {{% /note %}}

Run 5 (~50 min)

{{% note %}} Before scaling your DC, make sure that the AWS vCPU limit is not lower than the needed number. Use the AWS Service Quotas service to see the current limit. The EC2 CPU Limit section has instructions on how to increase the limit if needed. {{% /note %}}

To receive scalability benchmark results for four-node Crowd DC with app-specific actions:

  1. Scale your Crowd Data Center deployment to 4 nodes as described in Run 4.

  2. Edit the run parameters for the 4-node run. To do it, leave uncommented only the 4-node scenario parameters in the crowd.yml file.

    # 1 node scenario parameters
    # ramp-up: 20s                    # time to spin all concurrent threads
    # total_actions_per_hour: 180000  # number of total JMeter actions per hour
    
    # 2 nodes scenario parameters
    # ramp-up: 10s                    # time to spin all concurrent threads
    # total_actions_per_hour: 360000  # number of total JMeter actions per hour
    
    # 4 nodes scenario parameters
    ramp-up: 5s                     # time to spin all concurrent threads
    total_actions_per_hour: 720000  # number of total JMeter actions per hour
    
  3. Run the toolkit with Docker from the execution environment instance:

    cd dc-app-performance-toolkit
    docker run --pull=always --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt crowd.yml

{{% note %}} Review the results_summary.log file under the artifacts directory. Make sure that the overall status is OK before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above. {{% /note %}}

Generating a report for scalability scenario

To generate a scalability report:

  1. Use SSH to connect to the execution environment.
  2. Allow the current user (for the execution environment, the default user is ubuntu) to access the Docker-generated reports:
    sudo chown -R ubuntu:ubuntu /home/ubuntu/dc-app-performance-toolkit/app/results
  3. Navigate to the dc-app-performance-toolkit/app/reports_generation folder.
  4. Edit the scale_profile.yml file:
    • For runName: "Node 1", in the fullPath key, insert the full path to the results directory of Run 3.
    • For runName: "Node 2", in the fullPath key, insert the full path to the results directory of Run 4.
    • For runName: "Node 4", in the fullPath key, insert the full path to the results directory of Run 5.
  5. Run the following command from the activated virtualenv (as described in dc-app-performance-toolkit/README.md):
    python csv_chart_generator.py scale_profile.yml
  6. In the dc-app-performance-toolkit/app/results/reports/YY-MM-DD-hh-mm-ss folder, view the .csv file (with consolidated scenario results), the .png chart file, and the summary report.

Analyzing report

Use the scp command to copy the report artifacts from the execution environment to a local drive:

  1. From the local terminal (Git Bash terminal for Windows) run the commands:
    export EXEC_ENV_PUBLIC_IP=execution_environment_ec2_instance_public_ip
    scp -r -i path_to_exec_env_pem ubuntu@$EXEC_ENV_PUBLIC_IP:/home/ubuntu/dc-app-performance-toolkit/app/results/reports ./reports
  2. Once completed, in the ./reports folder you will be able to review the action timings on Crowd Data Center with different numbers of nodes. If you see a significant variation in any action timings between configurations, we recommend looking into the app implementation to understand the root cause of this delta.

{{% warning %}} After completing all your tests, delete your Crowd Data Center stacks. {{% /warning %}}

Attaching testing results to ECOHELP ticket

{{% warning %}} It is recommended to terminate the enterprise-scale environment after completing all tests. Follow the Terminate development environment instructions. {{% /warning %}}

  1. Make sure you have two report folders: one with the performance profile results and a second with the scale profile results. Each folder should contain profile.csv, profile.png, profile_summary.log, and the profile run result archives. The archives should contain all raw data created during the run: bzt.log, selenium/jmeter/locust logs, .csv and .yml files, etc.
  2. Attach the two report folders to your ECOHELP ticket.

Support

For Terraform deployment related questions, see the Troubleshooting tips page.

If the installation script fails on installing the Helm release or for any other reason, collect the logs, zip them, and share them in the community Slack #data-center-app-performance-toolkit channel. For instructions on how to collect detailed logs, see Collect detailed k8s logs.

In case of the above problem, or for any other technical questions or issues with the DC Apps Performance Toolkit, contact us for support in the community Slack #data-center-app-performance-toolkit channel.