diff --git a/README.rst b/README.rst index 3be48ac7..f0753fe4 100644 --- a/README.rst +++ b/README.rst @@ -13,3 +13,61 @@ Fire up your terminal, and:: # Install using PyPi pip install benchpress --user + # Make the Benchpress binaries available + export PATH=$PATH:$HOME/.local/bin + +Specify what to benchmark by implementing a Python script that generates commands:: + + import benchpress as bp + from benchpress.suite_util import BP_ROOT + + scripts = [ + ('X-ray', 'xraysim', ["10*10*1", "20*10*1"]), + ('Bean', 'galton_bean_machine', ["10000*10", "20000*10"]), + ] + + cmd_list = [] + for label, name, sizes in scripts: + for size in sizes: + full_label = "%s/%s" % (label, size) + bash_cmd = "python {root}/benchmarks/{script}/python_numpy/{script}.py --size={size}" \ + .format(root=BP_ROOT, script=name, size=size) + cmd_list.append(bp.command(bash_cmd, full_label)) + + # Finally, we build the Benchpress suite, which is written to `--output` + bp.create_suite(cmd_list) + + +And run the script:: + + $ python suites/simple_example.py -o my_benchmark.json + Scheduling 'X-ray/10*10*1': 'python xraysim/python_numpy/xraysim.py --size=10*10*1' + Scheduling 'X-ray/20*10*1': 'python xraysim/python_numpy/xraysim.py --size=20*10*1' + Scheduling 'Bean/10000*10': 'python galton_bean_machine/python_numpy/galton_bean_machine.py --size=10000*10' + Scheduling 'Bean/20000*10': 'python galton_bean_machine/python_numpy/galton_bean_machine.py --size=20000*10' + Writing suite file: my_benchmark.json + +The result is a JSON file `my_benchmark.json` that encapsulates the commands that make up the benchmark suite.
+Now, use `bp-run` to run the benchmark suite:: + + $ bp-run my_benchmark.json --output results.json + Executing 'X-ray/10*10*1' + Executing 'X-ray/20*10*1' + Executing 'Bean/10000*10' + Executing 'Bean/20000*10' + +Finally, let's visualize the results in ASCII:: + + $ bp-cli results.json + X-ray/10*10*1: [0.013303, 0.013324, 0.012933] 0.0132 (0.0002) + X-ray/20*10*1: [0.108884, 0.105319, 0.105392] 0.1065 (0.0017) + Bean/10000*10: [0.002653, 0.002553, 0.002616] 0.0026 (0.0000) + Bean/20000*10: [0.005149, 0.005088, 0.005271] 0.0052 (0.0001) + +Or as a bar chart:: + + $ bp-chart results.json --output results.pdf + Writing file 'results.pdf' using format 'pdf'. + +.. image:: https://raw.githubusercontent.com/bh107/benchpress/master/doc/source/_static/quickstart_results.png + diff --git a/doc/source/_static/quickstart_results.png b/doc/source/_static/quickstart_results.png new file mode 100644 index 00000000..61b24fc4 Binary files /dev/null and b/doc/source/_static/quickstart_results.png differ diff --git a/doc/source/index.rst b/doc/source/index.rst index f5395dd7..c4309844 100644 --- a/doc/source/index.rst +++ b/doc/source/index.rst @@ -15,8 +15,6 @@ Contents: | quickstart | | | install | | | usage_commands | benchmarks | -| usage_examples | | -| usage_suites | | | implementing | | | reference/index | | +------------------------------+------------------------------+ diff --git a/doc/source/install.rst b/doc/source/install.rst index e86c9780..ed959036 100644 --- a/doc/source/install.rst +++ b/doc/source/install.rst @@ -14,7 +14,7 @@ The following shows how to do a user-mode / local installation:: pip install benchpress --user -Extend your ``$PATH``, such that the commands (`bp-run`, `bp-run`, `bp-cli`, `bp-chart`) are readily available:: +Extend your ``$PATH``, such that the binaries (`bp-run`, `bp-cli`, `bp-chart`) are readily available:: export PATH=$PATH:$HOME/.local/bin diff --git a/doc/source/usage_examples.rst b/doc/source/usage_examples.rst deleted file mode 100644 index 
b01b6e5b..00000000 --- a/doc/source/usage_examples.rst +++ /dev/null @@ -1,98 +0,0 @@ -.. _usage_examples: - -================ -Usage - Examples -================ - -Make sure that you have set your ``PATH`` and ``PYTHONPATH`` correctly. Test it by invoking:: - - bp-info --all - -This should output something similar to:: - - benchmarks: /home/safl/benchpress/benchmarks - commands: /home/safl/benchpress/bin - docsrc: /home/safl/benchpress/doc - hooks: /home/safl/benchpress/hooks - mod: /home/safl/benchpress/module/benchpress - mod_parent: /home/safl/benchpress/module - suites: /home/safl/benchpress/suites - -Benchmarks can be run `manually` / `by hand` or via the `bp-run` tool. The ``bp-info`` command comes in handy when you want to find your way around. It can tell you where benchmarks and suites are located. This has multiple uses, such as:: - - # Go to the benchmark directory - cd `bp-info --benchmarks` - - # Go to the suites directory - cd `bp-info --suites` - -Or listing what is available:: - - # Show me the benchmark suites - ls `bp-info --suites` - -The ``bp-info`` command is used by the benchmark suites themselves to locate the benchmark directory. - -Running benchmarks via bp-run -============================= - -The following will run the ``python_numpy`` benchmark suite:: - - bp-run NOREPOS `bp-info --suites`/python_numpy.py --output /tmp/my_run.json - -And store the results from the run in the file ``/tmp/my_run.json``. - -Each benchmark in the suite is executed three times by default. You can change the number of executions with the ``--runs`` flag. The data collected in the output-file contains a bunch of information about the environment that the benchmark was executed in, such operating system version, hardware info, state of environment variables and other things. -You can inspect the data with your text-editor and write a parser for extracting the data your are interested in. 
- -Benchpress has several helper functions available, in the Python benchpress module to aid such as task. Additionally the ``bp-times`` command provides a convenient overview of elapsed time. - -Try invoking it on your output-file:: - - bp-times ``/tmp/my_run.json`` - -This should provide output similar to:: - - 1D Stencil [NumPy, N/A, N/A]: [21.174466, 15.875864, 11.602997] 16.217776 (3.915008) 3 - 2D Stencil [NumPy, N/A, N/A]: [29.273266, 29.554602, 29.318557] 29.382142 (0.123342) 3 - 3D Stencil [NumPy, N/A, N/A]: N/A - Black Scholes [NumPy, N/A, N/A]: [5.905177, 5.846048, 5.819017] 5.856747 (0.035979) 3 - Game of Life [NumPy, N/A, N/A]: [34.63458, 32.782089, 32.694652] 33.370440 (0.894594) 3 - Gauss Elimination [NumPy, N/A, N/A]: [0.182918, 0.181614, 0.18278] 0.182437 (0.000585) 3 - Heat Equation [NumPy, N/A, N/A]: [4.194531, 4.135326, 4.185207] 4.171688 (0.025992) 3 - Jacobi Solve [NumPy, N/A, N/A]: [4.155966, 4.185878, 4.180958] 4.174267 (0.013096) 3 - Jacobi Stencil [NumPy, N/A, N/A]: [3.060532, 3.006271, 3.023313] 3.030039 (0.022657) 3 - LU Factorization [NumPy, N/A, N/A]: [1.766148, 1.71995, 1.719055] 1.735051 (0.021992) 3 - Lattice Boltzmann 3D [NumPy, N/A, N/A]: [0.574888, 0.581474, 0.571298] 0.575887 (0.004214) 3 - Matrix Multiplication [NumPy, N/A, N/A]: [0.042147, 0.04208, 0.042872] 0.042366 (0.000359) 3 - Monte Carlo PI [NumPy, N/A, N/A]: [15.190297, 14.993777, 15.236259] 15.140111 (0.105161) 3 - SOR [NumPy, N/A, N/A]: N/A - Shallow Water [NumPy, N/A, N/A]: N/A - Synthetic [NumPy, N/A, N/A]: N/A - Synthetic Inplace [NumPy, N/A, N/A]: N/A - Synthetic Stream #0 Ones [NumPy, N/A, N/A]: N/A - Synthetic Stream #1 Range [NumPy, N/A, N/A]: N/A - Synthetic Stream #2 Random [NumPy, N/A, N/A]: N/A - kNN Naive 1 [NumPy, N/A, N/A]: [1.364709, 1.3518, 1.352595] 1.356368 (0.005907) 3 - nbody [NumPy, N/A, N/A]: [9.603349, 9.665165, 9.727452] 9.665322 (0.050665) 3 - -.. 
note:: You do not have to wait for the benchmark run to finish, results at added to the output-file as they are available. Runs that have not yet finished show up as "N/A". - -Running benchmarks "by hand" -============================ - -If you, for some reason, do not wish to run via ``bp-run``, then you can go to the just execute them manually:: - - python `bp-info --benchmarks`/heat_equation/python_numpy/heat_equation.py --size=10000*10000*10 - -The above command executes the Python/NumPy implementation of :ref:`heat_equation`. -If you would like to execute the same benchmark but using Bohrium as backend then do the following:: - - python -m bohrium `bp-info --benchmarks`/heat_equation/python_numpy/heat_equation.py --size=10000*10000*10 --bohrium=True - -.. note:: Notice the ``-m bohrium`` right after the ``python`` command, and the ``--bohrium=True`` argument at the end. Both are needed. - -The ``-m bohrium`` flag overloads the ``numpy`` module, which means you do not have to change the code to run using Bohrium. - -The ``--bohrium=True`` tells the benchpress tool that it is running with Bohrium. Bohrium uses lazy evaluation so we must instruct the benchpress tool to flush computions in order to accurate measurements. - diff --git a/doc/source/usage_suites.rst b/doc/source/usage_suites.rst deleted file mode 100644 index 27e35cbd..00000000 --- a/doc/source/usage_suites.rst +++ /dev/null @@ -1,5 +0,0 @@ -============== -Usage - Suites -============== - -... 
diff --git a/suites/simple_example.py b/suites/simple_example.py new file mode 100644 index 00000000..0c7c29ac --- /dev/null +++ b/suites/simple_example.py @@ -0,0 +1,18 @@ +import benchpress as bp +from benchpress.suite_util import BP_ROOT + +scripts = [ + ('X-ray', 'xraysim', ["10*10*1", "20*10*1"]), + ('Bean', 'galton_bean_machine', ["10000*10", "20000*10"]), +] + +cmd_list = [] +for label, name, sizes in scripts: + for size in sizes: + full_label = "%s/%s" % (label, size) + bash_cmd = "python {root}/benchmarks/{script}/python_numpy/{script}.py --size={size}" \ + .format(root=BP_ROOT, script=name, size=size) + cmd_list.append(bp.command(bash_cmd, full_label)) + +# Finally, we build the Benchpress suite, which is written to `--output` +bp.create_suite(cmd_list)
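The `results.json` file written by `bp-run` is plain JSON, so results can be post-processed without the bundled `bp-cli`/`bp-chart` tools. The sketch below illustrates the idea with a hand-made sample file; the key names `cmd_list`, `label`, and `elapsed` are illustrative assumptions, not the actual Benchpress schema, so inspect your own `results.json` before adapting it:

```python
import json
import statistics

# Illustrative sample in the rough shape of a result file; the real
# Benchpress schema may differ -- check your own results.json first.
sample = {
    "cmd_list": [
        {"label": "X-ray/10*10*1", "elapsed": [0.013303, 0.013324, 0.012933]},
        {"label": "Bean/10000*10", "elapsed": [0.002653, 0.002553, 0.002616]},
    ]
}
with open("results.json", "w") as f:
    json.dump(sample, f)

# Load the file back and report mean / population standard deviation per
# benchmark, mirroring the summary that bp-cli prints.
with open("results.json") as f:
    results = json.load(f)

for cmd in results["cmd_list"]:
    times = cmd["elapsed"]
    print("%s: %.4f (%.4f)" % (cmd["label"],
                               statistics.mean(times),
                               statistics.pstdev(times)))
    # X-ray/10*10*1: 0.0132 (0.0002)
```

Because the format is ordinary JSON, the same pattern works for feeding timings into pandas, a spreadsheet, or a CI regression check.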