@@ -33,6 +33,9 @@ Constraints:
- Users should be able to pull the Docker image from a public Docker repository.
- The tool must include a Docker Compose file.
+ ## Architecture
+ ![Architecture of the Benchmarking Tool](images/architecture.png)
+
## Usage
We provide the tool as a Docker image since we primarily intend to benchmark
@@ -133,6 +136,41 @@ that:
docker run --rm -it -v $(pwd)/my-conf.ini:/my-conf.ini ghcr.io/ls1intum/storage-benchmarking run -d /tmp -c /my-conf.ini
```
+
+ ### Worker Coordinator Cluster
+ For automated distributed benchmarking over time, the tool supports setting up
+ a worker coordinator cluster. In this setup, a coordinator node distributes
+ tasks through a Redis broker to a set of worker nodes.
+
+ The worker coordinator deployment is shown here:
+
+ ![Deployment of the Worker Coordinator Cluster](images/deployment.png)
+
+ Every worker boots with a hostname (falling back to the machine's default
+ hostname), which must be unique, and a group, which determines how the
+ coordinator schedules it.
+
+ When a worker boots, it registers itself with a group of workers at the Redis
+ instance and opens a queue to wait for jobs. It processes the jobs it receives
+ sequentially and de-registers itself before shutting down.
+
+ ![Communication of a Worker Coordinator Cluster](images/sequence.png)
+
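The register/process/de-register lifecycle described above can be sketched roughly as follows. This is an illustrative model only: the class and variable names are hypothetical, and an in-memory dict and queue stand in for the Redis registry and job queue.

```python
import queue

# Illustrative sketch of the worker lifecycle (hypothetical names; an
# in-memory dict and queue stand in for the Redis registry and job queue).
class Worker:
    def __init__(self, hostname, group, registry, jobs):
        self.hostname = hostname  # must be unique across workers
        self.group = group        # used by the coordinator for scheduling
        self.registry = registry
        self.jobs = jobs

    def run(self):
        # Register with the group on boot.
        self.registry.setdefault(self.group, set()).add(self.hostname)
        try:
            while True:
                try:
                    job = self.jobs.get_nowait()
                except queue.Empty:
                    break
                job()  # jobs are processed strictly sequentially
        finally:
            # De-register before shutting down.
            self.registry[self.group].discard(self.hostname)

registry, jobs, results = {}, queue.Queue(), []
jobs.put(lambda: results.append("benchmark-1"))
jobs.put(lambda: results.append("benchmark-2"))
Worker("node-a", "group1", registry, jobs).run()
# results now holds ["benchmark-1", "benchmark-2"]; the group entry is empty again
```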
+ The coordinator ensures that only one group is actively running a benchmark at
+ any time. This matters when you measure different levels of abstraction, for
+ example raw disk performance, ZFS performance, and zvol performance inside a
+ virtual machine, and want to make sure that your benchmarks don't influence
+ one another.
+
+ The coordinator supports several scheduling techniques. First, define groups
+ with `--groups group1 group2 ...`; they are benchmarked in that order. By
+ default, every node in a group starts a benchmark; if you only want a single
+ random node to be picked in each iteration, use the `--random` flag. You can
+ also trigger a single benchmark directly with the `--trigger` flag. If you
+ don't want to schedule at the default interval (every 2 hours), use `--quick`,
+ which starts the next benchmarking round as soon as the last group has
+ finished running its benchmarks. You can optionally limit the number of quick
+ rounds with `--limit <int>`, after which the coordinator will exit.
+
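As a rough illustration, a Docker Compose file for such a cluster might look like the sketch below. The service names, the `coordinator` subcommand, and the environment variable names are assumptions for illustration, not the tool's documented interface:

```yaml
services:
  redis:
    image: redis:7  # the Redis broker the coordinator and workers talk through
  coordinator:
    image: ghcr.io/ls1intum/storage-benchmarking
    # Scheduling flags as described above; the subcommand name is an assumption.
    command: coordinator --groups group1 group2
    depends_on:
      - redis
  worker-1:
    image: ghcr.io/ls1intum/storage-benchmarking
    # Hostname must be unique, group decides scheduling; the variable names
    # here are assumptions.
    environment:
      WORKER_HOSTNAME: worker-1
      WORKER_GROUP: group1
    depends_on:
      - redis
```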
## Installation
To run the project locally clone it first:
```sh