title | platform | product | category | subcategory | date |
---|---|---|---|---|---|
Data Center App Performance Toolkit User Guide For Bamboo | platform | marketplace | devguide | build | 2024-10-22 |
This document walks you through the process of testing your app on Bamboo using the Data Center App Performance Toolkit. These instructions focus on producing the required performance and scale benchmarks for your Data Center app.
In this document, we cover the use of the Data Center App Performance Toolkit on an enterprise-scale environment.
Enterprise-scale environment: a Bamboo Data Center environment used to generate Data Center App Performance Toolkit test results for the Marketplace approval process. Where possible, use the recommended parameters below.
- Set up an enterprise-scale environment Bamboo Data Center on AWS.
- App-specific actions development.
- Setting up load configuration for Enterprise-scale runs.
- Running the test scenarios from execution environment against enterprise-scale Bamboo Data Center.
{{% warning %}} The installation of the DC environment and execution pod requires at least 24 vCPU cores. Newly created AWS accounts often have the vCPU limit set to a low number, like 5 vCPUs per region. Check your account's current vCPU limit for On-Demand Standard instances by visiting the AWS Service Quotas page. The applied quota value is the current CPU limit in the specific region.
Make sure the current region limit is large enough to deploy a new cluster. The limit can be increased by using the Request increase at account-level button: choose a region, set a quota value equal to the number of CPU cores required for the installation, and press the Request button. The recommended limit is 30. {{% /warning %}}
The process below describes how to install Bamboo DC with an enterprise-scale dataset included. This configuration was created specifically for performance testing during the DC app review process.
- Create access keys for AWS CLI: {{% warning %}} Do not use `root` user credentials for cluster creation. Option 1 (simple): create an admin user with `AdministratorAccess` permissions. Option 2 (complex): create granular permission policies with policy1 and policy2. The specific configuration depends on how you manage permissions within AWS. {{% /warning %}}

  Example Option 1 with Admin user:
  - Go to AWS Console -> IAM service -> Users
  - Create new user -> attach policies directly -> `AdministratorAccess`
  - Open the newly created user -> Security credentials tab -> Access keys -> Create access key -> Command Line Interface (CLI) -> Create access key
  - Use `Access key` and `Secret access key` in the `aws_envs` file
  Example Option 2 with granular policies:
  - Go to AWS Console -> IAM service -> Policies
  - Create `policy1` with the JSON content of the policy1 file {{% warning %}} Important: change all occurrences of `123456789012` to your real AWS Account ID. {{% /warning %}}
  - Create `policy2` with the JSON content of the policy2 file {{% warning %}} Important: change all occurrences of `123456789012` to your real AWS Account ID. {{% /warning %}}
  - Go to User -> Create user -> Attach policies directly -> Attach `policy1` and `policy2` -> Click on Create user button
  - Open the newly created user -> Security credentials tab -> Access keys -> Create access key -> Command Line Interface (CLI) -> Create access key
  - Use `Access key` and `Secret access key` in the `aws_envs` file
- Clone the Data Center App Performance Toolkit locally. {{% warning %}} For annual review, always get the latest version of the DCAPT code from the master branch. DCAPT supported versions: three latest minor version releases. {{% /warning %}}
- Navigate to the `dc-app-performance-toolkit/app/util/k8s` folder.
- Set AWS access keys created in step 1 in the `aws_envs` file:
  - `AWS_ACCESS_KEY_ID`
  - `AWS_SECRET_ACCESS_KEY`
  - `AWS_SESSION_TOKEN` (only for temporary credentials)
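For reference, a filled-in `aws_envs` file is a plain KEY=VALUE env file, consumed later by `docker --env-file`. The values below are placeholders, not real credentials:

```
AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# AWS_SESSION_TOKEN is only needed for temporary credentials; otherwise leave it unset
```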
- Set required variables in the `dcapt.tfvars` file:
  - `environment_name` - any name for your environment, e.g. `dcapt-bamboo`
  - `products` - `bamboo`
  - `bamboo_license` - one-liner of a valid Bamboo license without spaces and new-line symbols
  - `region` - do not change the default region (`us-east-2`). If a specific region is required, contact support.

  {{% note %}} A new trial license can be generated on my.atlassian.com. Use the `BX02-9YO1-IN86-LO5G` Server ID for generation. {{% /note %}}
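As an illustration, a minimal set of these values might look like the sketch below. The license string is a placeholder, and the exact variable syntax (for example whether `products` is a list) should be checked against the comments in the shipped `dcapt.tfvars` template:

```
# dcapt.tfvars (fragment) - illustrative values only
environment_name = "dcapt-bamboo"
products         = ["bamboo"]
bamboo_license   = "AAABxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
region           = "us-east-2"
```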
- From a local terminal (Git Bash for Windows users) start the installation:

  ``` bash
  docker run --pull=always --env-file aws_envs \
  -v "/$PWD/dcapt.tfvars:/data-center-terraform/conf.tfvars" \
  -v "/$PWD/dcapt-snapshots.json:/data-center-terraform/dcapt-snapshots.json" \
  -v "/$PWD/logs:/data-center-terraform/logs" \
  -it atlassianlabs/terraform:2.9.3 ./install.sh -c conf.tfvars
  ```
- Copy the product URL from the console output. The product URL should look like `http://a1234-54321.us-east-2.elb.amazonaws.com/bamboo`.
- Wait for all remote agents to be started and connected. It can take up to 10 minutes. Agents can be checked in `Settings` > `Agents`.
{{% note %}}
All the datasets use the standard `admin`/`admin` credentials.
{{% /note %}}
Data dimensions and values for the default enterprise-scale dataset are listed in the following table.
Data dimensions | Value for an enterprise-scale dataset |
---|---|
Users | 2000 |
Projects | 100 |
Plans | 2000 |
Remote agents | 50 |
{{% note %}} You are responsible for the cost of the AWS services running during the reference deployment. For more information, go to aws.amazon.com/pricing. {{% /note %}}
To reduce costs, we recommend keeping your deployment up and running only during performance runs.
Data Center App Performance Toolkit has its own set of default test actions:
- JMeter: for generating load at scale
- Selenium: for measuring UI timings
- Locust: for executing the defined number of parallel Bamboo plans
An app-specific action is an action (performance test) you have to develop to cover the main use cases of your application. The performance test should focus on the common usage of your application, not cover all possible functionality of your app. For example, the application setup screen or other one-time use cases are out of scope of performance testing.
If your app introduces new functionality for Bamboo entities, for example a new task, it is important to extend the base dataset with your app-specific functionality.
- Follow the installation instructions described in the bamboo dataset generator README.md.
- Open `app/util/bamboo/bamboo_dataset_generator/src/main/java/bamboogenerator/Main.java` and set:
  - `BAMBOO_SERVER_URL`: URL of the Bamboo stack
  - `ADMIN_USER_NAME`: username of the admin user (default is `admin`)
- Login as `ADMIN_USER_NAME`, go to Profile > Personal access tokens and create a new token with the same permissions as the admin user.
- Run the following command:

  ``` bash
  export BAMBOO_TOKEN=newly_generated_token  # for MacOS and Linux
  ```
  or
  ``` bash
  set BAMBOO_TOKEN=newly_generated_token  # for Windows
  ```
- Open the `app/util/bamboo/bamboo_dataset_generator/src/main/java/bamboogenerator/service/generator/plan/PlanGenerator.java` file and modify the plan template according to your app, e.g. add a new task.
- Navigate to `app/util/bamboo/bamboo_dataset_generator` and start generation:

  ``` bash
  ./run.sh  # for MacOS and Linux
  ```
  or
  ``` bash
  run  # for Windows
  ```
- Login into the Bamboo UI and make sure that the plan configurations were updated.
- The default duration of the plan is 60 seconds. Measure the plan duration with the new app-specific functionality and modify the `default_dataset_plan_duration` value accordingly in the `bamboo.yml` file. For example, if the plan duration with the app-specific task became 70 seconds, then `default_dataset_plan_duration` should be set to 70 seconds in the `bamboo.yml` file.
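For instance, if the measured duration grows from 60 to 70 seconds, the corresponding `bamboo.yml` change is a single value:

```
# bamboo.yml (fragment) - updated expected plan execution duration
default_dataset_plan_duration: 70  # was 60 before the app-specific task was added
```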
For example, suppose you develop an app that adds some additional UI elements to the plan summary page. In this case, you should develop a Selenium app-specific action:
- Extend the example app-specific action in `dc-app-performance-toolkit/app/extension/bamboo/extension_ui.py`. Code example. The test has to open the plan summary page and measure the time to load this new app-specific element on the page.
If you need to run
app_specific_action
as specific user uncommentapp_specific_user_login
function in code example. Note, that in this casetest_1_selenium_custom_action
should follow just beforetest_2_selenium_z_log_out
action. -
In
dc-app-performance-toolkit/app/selenium_ui/bamboo_ui.py
, review and uncomment the following block of code to make newly created app-specific actions executed:# def test_1_selenium_custom_action(webdriver, datasets, screen_shots): # app_specific_action(webdriver, datasets)
- Run the toolkit with the `bzt bamboo.yml` command to ensure that all Selenium actions including `app_specific_action` are successful.
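Under the hood, each Selenium sub-action is wrapped in a timing decorator so that its load time shows up as a separate line in the results. The sketch below illustrates that idea in plain Python; it is a simplified stand-in, not the toolkit's actual implementation, and `view_plan_summary` is a made-up interaction name:

``` python
import time
from functools import wraps


def print_timing(interaction_name):
    """Simplified stand-in for a per-interaction timing decorator (assumption, not the toolkit's real code)."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            result = func(*args, **kwargs)
            elapsed_ms = (time.monotonic() - start) * 1000
            # each named sub-action gets its own timing line in the results
            print(f"{interaction_name}: {elapsed_ms:.0f} ms")
            return result
        return wrapper
    return decorator


@print_timing("selenium_app_custom_action:view_plan_summary")  # hypothetical interaction name
def view_plan_summary_element():
    # stand-in for a real Selenium interaction (open page, wait for the app element)
    time.sleep(0.01)
    return "ok"


view_plan_summary_element()
```

The real `extension_ui.py` example follows the same shape: one outer app-specific action containing named, individually timed sub-actions.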
- Check that the `bamboo.yml` file has correct settings of `application_hostname`, `application_protocol`, `application_port`, `application_postfix`, etc.
- Set the desired execution percentage for `standalone_extension`. The default value is `0`, which means that the `standalone_extension` action will not be executed. For example, for app-specific action development you could set the percentage of `standalone_extension` to 100 and of all other actions to 0 - this way only `login_and_view_all_builds` and `standalone_extension` actions would be executed.
Navigate to
dc-app-performance-toolkit/app
folder and run from virtualenv(as described indc-app-performance-toolkit/README.md
):python util/jmeter/start_jmeter_ui.py --app bamboo
- Open the `Bamboo` thread group > `actions per login` and navigate to `standalone_extension`.
- Review the existing stubs of `jmeter_app_specific_action`:
  - example GET request
  - example POST request
  - example extraction of variables from the response - `app_id` and `app_token`
  - example assertions of GET and POST requests
- Modify the examples or add new controllers according to your app's main use case.
- Right-click on `View Results Tree` and enable this controller.
- Click the Start button and make sure that `login_and_view_dashboard` and `standalone_extension` are executed.
- Right-click on `View Results Tree` and disable this controller. It is important to disable the `View Results Tree` controller before full-scale results generation.
- Click the Save button.
- To make `standalone_extension` executable during the toolkit run, edit `dc-app-performance-toolkit/app/bamboo.yml` and set the execution percentage of `standalone_extension` according to your use case frequency.
- App-specific tests could be run (if needed) as a specific user. In `standalone_extension` uncomment the `login_as_specific_user` controller. Navigate to the `username:password` config element and update the values of the `app_specific_username` and `app_specific_password` names with your specific user credentials. Also make sure that you located your app-specific tests between the `login_as_specific_user` and `login_as_default_user_if_specific_user_was_loggedin` controllers.
- Run the toolkit to ensure that all JMeter actions including `standalone_extension` are successful.
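The variable-extraction stubs above pull `app_id` and `app_token` out of a response body so later requests can reuse them. In Python terms, the post-processor step amounts to the following (purely an illustration; the response body here is made up, not what your app actually returns):

``` python
import json

# made-up response body; a real app endpoint would return its own JSON
sample_response = '{"app_id": "com.example.myapp", "app_token": "t-123"}'

data = json.loads(sample_response)
app_id = data["app_id"]        # stored for reuse, like a JMeter variable
app_token = data["app_token"]

# assertion step, analogous to a JMeter response assertion
assert app_id and app_token
print(app_id, app_token)  # → com.example.myapp t-123
```

In the JMeter stubs, the same extraction is done with post-processors and the assertions with assertion elements attached to the GET and POST samplers.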
- Extend the example app-specific action in `dc-app-performance-toolkit/app/extension/bamboo/extension_locust.py`, so that the test calls an endpoint with a GET request, parses the response, uses this data to call another endpoint with a POST request, and measures the response time. Code example.
- In `dc-app-performance-toolkit/app/bamboo.yml` uncomment `scenario: locust_app_specific` in the `execution` section to enable Locust app-specific test execution.
- In `dc-app-performance-toolkit/app/bamboo.yml` set `standalone_extension_locust` to `1` - the app-specific action will be executed by every virtual user of the `locust_app_specific` scenario. The default value is `0`, which means that the `standalone_extension_locust` action will not be executed.
- App-specific tests could be run (if needed) as a specific user. Use the `@run_as_specific_user(username='specific_user_username', password='specific_user_password')` decorator for that.
- Run the toolkit with the `bzt bamboo.yml` command to ensure that all Locust actions including `locust_app_specific_action` are successful. Note that `locust_app_specific_action` execution will start some time after the ramp-up period is finished (in 5-6 minutes).
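The GET-parse-POST flow described in the first step can be sketched in plain Python. `FakeClient` below is a stand-in for an HTTP client such as Locust's `self.client`, and both endpoints and the `token` field are hypothetical, not part of the toolkit:

``` python
class FakeClient:
    """Stand-in for an HTTP client (e.g. Locust's self.client); returns canned responses."""

    def get(self, path):
        # hypothetical app endpoint returning a token in its JSON body
        return {"token": "abc123"}

    def post(self, path, payload):
        # hypothetical endpoint echoing back what it received
        return {"status": "ok", "received": payload["token"]}


def app_specific_action(client):
    """GET an endpoint, parse the response, and reuse the parsed data in a POST."""
    data = client.get("/rest/my-app/1.0/config")              # hypothetical endpoint
    token = data["token"]                                     # parse the response
    return client.post("/rest/my-app/1.0/submit", {"token": token})


print(app_specific_action(FakeClient()))  # → {'status': 'ok', 'received': 'abc123'}
```

In the real `extension_locust.py`, the same flow runs against the live Bamboo instance and Locust records the response time of each named request.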
The default Terraform deployment configuration already has a dedicated execution environment pod to run tests from. For more details, see the `Execution Environment Settings` section in the `dcapt.tfvars` file.
- Check the `bamboo.yml` configuration file. If load configuration settings were changed for dev runs, make sure the parameters were changed back to the defaults:

  ``` yaml
  application_hostname: bamboo_host_name or public_ip   # Bamboo DC hostname without protocol and port e.g. test-bamboo.atlassian.com or localhost
  application_protocol: http      # http or https
  application_port: 80            # 80, 443, 8080, 8085, etc
  secure: True                    # Set False to allow insecure connections, e.g. when using self-signed SSL certificate
  application_postfix: /bamboo    # e.g. /bamboo in case of url like http://localhost:8085/bamboo
  admin_login: admin
  admin_password: admin
  load_executor: jmeter
  concurrency: 200                # number of concurrent threads to authenticate random users
  test_duration: 45m
  ramp-up: 3m
  total_actions_per_hour: 2000    # number of total JMeter actions per hour
  number_of_agents: 50            # number of available remote agents
  parallel_plans_count: 40        # number of parallel plans execution
  start_plan_timeout: 60          # maximum timeout of plan to start
  default_dataset_plan_duration: 60  # expected plan execution duration
  ```
You'll need to run the toolkit for each test scenario in the next section.
4. Running the test scenarios from execution environment against enterprise-scale Bamboo Data Center
This scenario helps to identify basic performance issues.
To receive performance baseline results without an app installed and without app-specific actions (use code from the `master` branch):
- Before run:
  - Make sure `bamboo.yml` and the toolkit code base have the default configuration from the `master` branch.
  - Check the load configuration parameters needed for the enterprise-scale run: Setting up load configuration for Enterprise-scale runs.
  - Check the correctness of `application_hostname`, `application_protocol`, `application_port` and `application_postfix` in the .yml file.
  - `standalone_extension` set to 0. App-specific actions are not needed for Run1 and Run2.
  - `standalone_extension_locust` set to 0.
  - AWS access keys set in the `./dc-app-performance-toolkit/app/util/k8s/aws_envs` file:
    - `AWS_ACCESS_KEY_ID`
    - `AWS_SECRET_ACCESS_KEY`
    - `AWS_SESSION_TOKEN` (only for temporary credentials)
- Navigate to the `dc-app-performance-toolkit` folder and start the tests execution:

  ``` bash
  export ENVIRONMENT_NAME=your_environment_name
  ```

  ``` bash
  docker run --pull=always --env-file ./app/util/k8s/aws_envs \
  -e REGION=us-east-2 \
  -e ENVIRONMENT_NAME=$ENVIRONMENT_NAME \
  -v "/$PWD:/data-center-terraform/dc-app-performance-toolkit" \
  -v "/$PWD/app/util/k8s/bzt_on_pod.sh:/data-center-terraform/bzt_on_pod.sh" \
  -it atlassianlabs/terraform:2.9.3 bash bzt_on_pod.sh bamboo.yml
  ```
- View the following main results of the run in the `dc-app-performance-toolkit/app/results/bamboo/YY-MM-DD-hh-mm-ss` folder:
  - `results_summary.log`: detailed run summary
  - `results.csv`: aggregated .csv file with all actions and timings
  - `bzt.log`: logs of the Taurus tool execution
  - `jmeter.*`: logs of the JMeter tool execution
  - `locust.*`: logs of the Locust tool execution
{{% note %}}
Review the `results_summary.log` file under the artifacts dir location. Make sure that the overall status is `OK` before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
{{% /note %}}
To receive performance results with an app installed (still use the `master` branch):
- Install the app you want to test.
- Set up the app license.
- Navigate to the `dc-app-performance-toolkit` folder and start the tests execution:

  ``` bash
  export ENVIRONMENT_NAME=your_environment_name
  ```

  ``` bash
  docker run --pull=always --env-file ./app/util/k8s/aws_envs \
  -e REGION=us-east-2 \
  -e ENVIRONMENT_NAME=$ENVIRONMENT_NAME \
  -v "/$PWD:/data-center-terraform/dc-app-performance-toolkit" \
  -v "/$PWD/app/util/k8s/bzt_on_pod.sh:/data-center-terraform/bzt_on_pod.sh" \
  -it atlassianlabs/terraform:2.9.3 bash bzt_on_pod.sh bamboo.yml
  ```
{{% note %}}
Review the `results_summary.log` file under the artifacts dir location. Make sure that the overall status is `OK` before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
{{% /note %}}
To receive results for Bamboo DC with the app and with app-specific actions:

- Before run:
  - Make sure `bamboo.yml` and the toolkit code base contain your developed app-specific actions.
  - Check the correctness of `application_hostname`, `application_protocol`, `application_port` and `application_postfix` in the .yml file.
  - Check the load configuration parameters needed for the enterprise-scale run: Setting up load configuration for Enterprise-scale runs.
  - `standalone_extension` set to non-0 and the .jmx file has standalone actions implementation in case of JMeter app-specific actions.
  - `standalone_extension_locust` set to 1 and the Locust app-specific actions code base applied in case of Locust app-specific actions.
  - `test_1_selenium_custom_action` is uncommented and has an implementation in case of Selenium app-specific actions.
  - AWS access keys set in the `./dc-app-performance-toolkit/app/util/k8s/aws_envs` file:
    - `AWS_ACCESS_KEY_ID`
    - `AWS_SECRET_ACCESS_KEY`
    - `AWS_SESSION_TOKEN` (only for temporary credentials)
- Navigate to the `dc-app-performance-toolkit` folder and start the tests execution:

  ``` bash
  export ENVIRONMENT_NAME=your_environment_name
  ```

  ``` bash
  docker run --pull=always --env-file ./app/util/k8s/aws_envs \
  -e REGION=us-east-2 \
  -e ENVIRONMENT_NAME=$ENVIRONMENT_NAME \
  -v "/$PWD:/data-center-terraform/dc-app-performance-toolkit" \
  -v "/$PWD/app/util/k8s/bzt_on_pod.sh:/data-center-terraform/bzt_on_pod.sh" \
  -it atlassianlabs/terraform:2.9.3 bash bzt_on_pod.sh bamboo.yml
  ```
{{% note %}}
Review the `results_summary.log` file under the artifacts dir location. Make sure that the overall status is `OK` before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
{{% /note %}}
To generate a performance regression report:

- Edit the `./app/reports_generation/bamboo_profile.yml` file:
  - Under `runName: "without app"`, in the `relativePath` key, insert the relative path to the results directory of Run 1.
  - Under `runName: "with app"`, in the `relativePath` key, insert the relative path to the results directory of Run 2.
  - Under `runName: "with app and app-specific actions"`, in the `relativePath` key, insert the relative path to the results directory of Run 3.
- Navigate locally to the `dc-app-performance-toolkit` folder and run the following command from a local terminal (Git Bash for Windows users) to generate reports:

  ``` bash
  docker run --pull=always \
  -v "/$PWD:/dc-app-performance-toolkit" \
  --workdir="//dc-app-performance-toolkit/app/reports_generation" \
  --entrypoint="python" \
  -it atlassian/dcapt csv_chart_generator.py bamboo_profile.yml
  ```
- In the `./app/results/reports/YY-MM-DD-hh-mm-ss` folder, view the `.csv` file (with consolidated scenario results), the `.png` chart file and the performance scenario summary report. If you see an impact (>20%) on any action timing, we recommend looking into the app implementation to understand the root cause of this delta.
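For clarity, the 20% threshold compares the same action's timing across runs. The numbers below are made up for illustration:

``` python
baseline_ms = 1000.0   # hypothetical action timing from Run 1 (without app)
with_app_ms = 1250.0   # hypothetical timing for the same action from Run 2 (with app)

# relative impact of the app on this action's timing
impact_pct = (with_app_ms - baseline_ms) / baseline_ms * 100
print(f"impact: {impact_pct:.0f}%")  # → impact: 25% (above the 20% threshold)
```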
{{% warning %}} It is recommended to terminate the enterprise-scale environment after completing all tests. Follow the Terminate enterprise-scale environment instructions. In case of any problems with the uninstall, use the Force terminate command. {{% /warning %}}
{{% warning %}} Do not forget to attach performance testing results to your ECOHELP ticket. {{% /warning %}}
- Make sure you have a report folder with the Bamboo performance scenario results. The folder should have `profile.csv`, `profile.png`, `profile_summary.log` and the profile run result archives. Archives should contain all raw data created during the run: `bzt.log`, selenium/jmeter/locust logs, .csv and .yml files, etc.
- Attach the report folder to your ECOHELP ticket.
If the installation script fails on installing the Helm release or for any other reason, collect the logs, zip them, and share them in the community Slack #data-center-app-performance-toolkit channel. For instructions on how to collect detailed logs, see Collect detailed k8s logs. For a failed cluster uninstall, use the Force terminate command.
In case of any technical questions or issues with DC Apps Performance Toolkit, contact us for support in the community Slack #data-center-app-performance-toolkit channel.