Automatic testing on EC2

amnonh edited this page Nov 20, 2014 · 10 revisions

Both the testing framework and this page are a work in progress; both will be updated.

EC2 Automatic Testing overview

The goal of the automatic testing is to make it possible to run tests and collect their results with minimal effort.

The testing process consists of running multiple instances: one or more systems under test (SUT) and one or more tester instances.

Each instance must be launched, installed, and configured, and must run its application. The tester output is then collected and parsed into meaningful results.

Running the Tests

This section will be rewritten completely; for now it describes the tests during the test development phase. Start a new development instance on EC2 and make sure the EC2 utilities work on it; a good candidate is a development server with its AWS keys set. Enter the osv directory:

cd osv

git pull
cd apps

Currently we are working with Shlomi's git repository:

git remote add shlomi [email protected]:slivne/osv-apps.git
git fetch shlomi
git checkout shlomi/memc_test -b memc_test

The following scripts change continuously; make sure to perform git pull and copy them again:

cp ec2-tester.sh ../scripts/
cp ec2-utils.sh ../scripts/
cp memaslap2json.py ../scripts/
cp statjson.py ../scripts/
cp tester.py ../scripts/

Before running memcached's test, install its testing client:

cd memcached/tests/
sudo ./install.sh 
cd ../../..

From the osv root directory run the test:

./scripts/ec2-tester.sh --sut-os linux --ami ami-a8d369c0 apps/memcached/tests/test_1_memaslap/tester/

Analyze the results:

apps/memaslap2json.py --delimiter '>>>>>>> end <<<<<<<'  apps/memcached/tests/test_1_memaslap/tester/step_1_run.sh.stdout_stderr > memc.out
apps/statjson.py memc.out

Testing framework

The testing framework is a set of scripts and configuration files. Currently there is one tester and one SUT, with the tester started and configured manually; if this changes, so will this page.

Scripts

EC2 Management

ec2-tester.sh

The current main entry point. It runs on the tester machine and does the following:

  • Starts the SUT instance and propagates its user_data (see user data in the configuration section)
  • Runs the tester script (see tester)
  • Shuts down the SUT instance

Open questions:

  • Should it upload the raw/parsed data to S3?
  • Should it run the data-parsing scripts?

ec2-utils.sh

A collection of EC2 helper commands.

Installation and running

install.sh

A script that is run manually on the tester machine before the first test starts; it installs and configures the tester applications.

tester.py

A general utility for running tests. Given a directory, tester.py performs the following tasks:

  • Calculates the current run configuration from the configuration files and command-line parameters (e.g. the SUT IP address)
  • Creates executable scripts from templates (see configuration and templates)
  • Runs the created scripts in order of their step numbers

tester.py is also used as a utility program to retrieve a configuration value from the same combination of configuration files and command-line parameters.
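The behavior described above can be sketched in Python. This is a hypothetical illustration, not the real tester.py: the function names (merge_config, render_step, ordered_steps), the $placeholder template syntax, and the step_N_ filename convention (taken from the step_1_run.sh example later on this page) are assumptions.

```python
# Hypothetical sketch of tester.py's core logic: merge configuration
# layers, fill script templates, and order the step scripts.
import re
from string import Template

def merge_config(*layers):
    """Later layers (e.g. command-line parameters) override earlier ones."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

def render_step(template_text, config):
    """Fill a step template's $placeholders from the merged configuration."""
    return Template(template_text).safe_substitute(config)

def ordered_steps(filenames):
    """Sort step scripts by their numeric prefix, e.g. step_1_run.sh."""
    def step_no(name):
        m = re.search(r"step_(\d+)_", name)
        return int(m.group(1)) if m else 0
    return sorted(filenames, key=step_no)

# Example: the SUT IP arrives as a command-line parameter and overrides defaults.
cfg = merge_config({"port": 11211}, {"sut_ip": "10.71.176.31"})
script = render_step("memaslap -s $sut_ip:$port", cfg)
steps = ordered_steps(["step_10_stop.sh", "step_1_run.sh", "step_2_collect.sh"])
```

Retrieving a single configuration value, as described above, would just be a lookup in the merged dictionary.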

Statistic gathering

The results of running each test step are gathered in an output file. Processing the data into relevant results is done in two steps: first, each run's output is parsed into a JSON object; then the multiple objects are combined into averaged results.

Output-specific parsing - memaslap2json.py

The memaslap2json.py script takes the memaslap output and creates JSON objects from it. For example, the following is the output of 3 runs:

servers : 10.71.176.31:11211
threads count: 3
concurrency: 120
run time: 60s
windows size: 10k
set proportion: set_prop=0.10
get proportion: get_prop=0.90
cmd_get: 1566806
cmd_set: 174152
get_misses: 485313
written_bytes: 302091490
read_bytes: 1204159239
object_bytes: 189477376

Run time: 60.0s Ops: 1740958 TPS: 28996 Net_rate: 23.9M/s
>>>>>>> end <<<<<<<
servers : 10.71.176.31:11211
threads count: 3
concurrency: 120
run time: 60s
windows size: 10k
set proportion: set_prop=0.10
get proportion: get_prop=0.90
cmd_get: 1569074
cmd_set: 174405
get_misses: 486460
written_bytes: 302526710
read_bytes: 1205414614
object_bytes: 189752640

Run time: 60.0s Ops: 1743479 TPS: 29043 Net_rate: 24.0M/s
>>>>>>> end <<<<<<<
servers : 10.71.176.31:11211
threads count: 3
concurrency: 120
run time: 60s
windows size: 10k
set proportion: set_prop=0.10
get proportion: get_prop=0.90
cmd_get: 1560272
cmd_set: 173420
get_misses: 479897
written_bytes: 300828355
read_bytes: 1202882017
object_bytes: 188680960

Run time: 60.0s Ops: 1733692 TPS: 28884 Net_rate: 23.9M/s
>>>>>>> end <<<<<<<

The results would be:

[
{
"servers": "10.71.176.31:11211",
"threads count": 3,
"concurrency": 120,
"run time": 60,
"windows size": 10000,
"set proportion": 0.10,
"get proportion": 0.90,
"cmd_get": 1566806,
"cmd_set": 174152,
"get_misses": 485313,
"written_bytes": 302091490,
"read_bytes": 1204159239,
"object_bytes": 189477376,
"Run time": 60.0,
"Ops": 1740958,
"TPS": 28996,
"Net_rate": 23.9,
"file": "apps/memcached/tests/test_1_memaslap/tester/step_1_run.sh.stdout_stderr"
}
,
{
"servers": "10.71.176.31:11211",
"threads count": 3,
"concurrency": 120,
"run time": 60,
"windows size": 10000,
"set proportion": 0.10,
"get proportion": 0.90,
"cmd_get": 1569074,
"cmd_set": 174405,
"get_misses": 486460,
"written_bytes": 302526710,
"read_bytes": 1205414614,
"object_bytes": 189752640,
"Run time": 60.0,
"Ops": 1743479,
"TPS": 29043,
"Net_rate": 24.0,
"file": "apps/memcached/tests/test_1_memaslap/tester/step_1_run.sh.stdout_stderr"
}
,
{
"servers": "10.71.176.31:11211",
"threads count": 3,
"concurrency": 120,
"run time": 60,
"windows size": 10000,
"set proportion": 0.10,
"get proportion": 0.90,
"cmd_get": 1560272,
"cmd_set": 173420,
"get_misses": 479897,
"written_bytes": 300828355,
"read_bytes": 1202882017,
"object_bytes": 188680960,
"Run time": 60.0,
"Ops": 1733692,
"TPS": 28884,
"Net_rate": 23.9,
"file": "apps/memcached/tests/test_1_memaslap/tester/step_1_run.sh.stdout_stderr"
}
]
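The kind of parsing memaslap2json.py performs can be sketched as follows. This is a minimal illustration, not the real script: the function names are made up, per-field numeric conversion (e.g. "10k" to 10000) is omitted, and the delimiter is hard-coded rather than taken from a --delimiter option.

```python
# Minimal sketch of parsing memaslap output into per-run dicts.
import re

DELIMITER = ">>>>>>> end <<<<<<<"

def parse_run(text):
    """Turn one memaslap run's output into a flat dict."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        # The summary line packs four fields into one line.
        m = re.match(r"Run time: ([\d.]+)s Ops: (\d+) TPS: (\d+) "
                     r"Net_rate: ([\d.]+)M/s", line)
        if m:
            result["Run time"] = float(m.group(1))
            result["Ops"] = int(m.group(2))
            result["TPS"] = int(m.group(3))
            result["Net_rate"] = float(m.group(4))
            continue
        if ":" in line:
            key, _, value = line.partition(":")
            result[key.strip()] = value.strip()
    return result

def parse_file(text):
    """Split the output file on the delimiter and parse each run."""
    chunks = [c for c in text.split(DELIMITER) if c.strip()]
    return [parse_run(c) for c in chunks]

sample = """servers : 10.71.176.31:11211
threads count: 3
cmd_get: 1566806

Run time: 60.0s Ops: 1740958 TPS: 28996 Net_rate: 23.9M/s
>>>>>>> end <<<<<<<
"""
runs = parse_file(sample)
```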

Calculating average - statjson.py

Use the statjson.py script to average multiple results; the previous results would be combined into:

{
"run time": 60.0,
"threads count": 3.0,
"get proportion": 0.9,
"concurrency": 120.0,
"windows size": 10000.0,
"Ops": 1739376.33333,
"set proportion": 0.1,
"cmd_get": 1565384.0,
"object_bytes": 189303658.667,
"Run time": 60.0,
"read_bytes": 1204151956.67,
"TPS": 28974.3333333,
"cmd_set": 173992.333333,
"get_misses": 483890.0,
"written_bytes": 301815518.333,
"Net_rate": 23.9333333333,
"total": 3
}
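The averaging shown above can be sketched in a few lines of Python. This is an assumed illustration of the idea, not the real statjson.py: it averages every field that is numeric in all runs and adds a "total" count, matching the example output, but the real script may handle missing or non-numeric fields differently.

```python
# Sketch of averaging a list of per-run result dicts, statjson.py style.
def average_runs(runs):
    """Average every key that is numeric in all runs; count runs in 'total'."""
    numeric_keys = [k for k in runs[0]
                    if all(isinstance(r.get(k), (int, float)) for r in runs)]
    avg = {k: sum(r[k] for r in runs) / len(runs) for k in numeric_keys}
    avg["total"] = len(runs)
    return avg

# The three TPS values from the example runs above.
avg = average_runs([{"TPS": 28996}, {"TPS": 29043}, {"TPS": 28884}])
```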