Demo example ver 1.5.4 main scenario
This demo scenario demonstrates the basic functions of th2 and contains examples and principles of working with th2 components. The scenario includes examples of linear automation, reconciliation, and test system emulation.
- th2-infra ver-1.5.4 is installed and configured.
- JDK 11, Gradle, and IntelliJ IDEA are recommended. (Details: sim, read-csv, read-log.)
- Python 3.7+ (Details: packages.)
- Create a new branch (the name of this branch must comply with RFC 1123) in your th2 schema git repository.
- Transfer the content of the 1.5.4 schema example branch into your branch.
- Set k8s-propagation in infra-mgr-config.yml to rule or sync.
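For example, the relevant fragment of infra-mgr-config.yml might look like this (a minimal sketch; keys other than k8s-propagation are assumptions and may differ between infra-mgr versions):

```yaml
# infra-mgr-config.yml (fragment, illustrative): enable propagation
# of the schema from the git branch into the Kubernetes cluster.
spec:
  k8s-propagation: sync   # or "rule"
```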
After changing k8s-propagation, the infra-mgr and infra-operator will create a new namespace in your Kubernetes cluster. The name of this namespace will match the name of the git branch with the prefix "th2-" (by default). You can observe this process in the infra-mgr and infra-operator logs - their pods run in the "service" namespace.
You can also observe this using kubectl:
kubectl get namespaces
- The .yml files from the git branch are converted into Kubernetes Custom Resources. You can use
kubectl get crd -n <namespace-name>
to get all kinds of custom resources. (In further examples the namespace indication will be omitted: kubectl config set-context --current --namespace=<namespace-name>). For example,
kubectl get th2boxes
returns all CRs with the kind Th2Box.
- The Custom Resources are then converted into HelmReleases: try
kubectl get hr
to get all HelmReleases. HelmReleases are deployed in the cluster as several Kubernetes objects.
- Services describe networking: e.g., the act-fix in the example above can be accessed via gRPC from outside the cluster using port 31178.
If all pods are in the Running status, your environment is ready to use.
Useful links at this step:
- Kubernetes dashboard:
http://hostname:30000/dashboard/
- RabbitMQ web-gui:
http://hostname:30000/rabbitmq/
- Grafana - log and monitoring solution:
http://hostname:30000/grafana/
- infra-editor - th2 tool to edit schemas:
http://hostname:30000/editor/
- th2-report-viewer - GUI for viewing the results of th2 work that are saved in the database:
http://hostname:30000/namespace-name/
If you have difficulty accessing these interfaces, check your ingress rules.
Applications:
- Simulator: https://github.com/th2-net/th2-sim-template/tree/demo-ver-1.5.4-local
- Read-log: https://github.com/th2-net/th2-read-log/tree/demo-ver-1.5.4-local
- Read-csv: https://github.com/th2-net/th2-read-csv/tree/demo-ver-1.5.4-local
Clone the content of these branches and launch the applications as external boxes (IntelliJ IDEA is recommended): instructions
When you run these applications, they use kubectl to get their configuration from the cluster configmaps and connect to the queues in RabbitMQ. The data sent by the applications this way is saved in the database and processed by the th2 boxes.

Script: https://github.com/th2-net/th2-demo-script/tree/ver-1.5.4-main_scenario
The th2 script is code that contains a set of requests to th2 components, executed in turn. The script can be written in any language that supports the common library.
The script is also a kind of external box, but at the moment the Python CommonFactory requires manual configuration. Follow the instructions and run the script.
The script contains six cases with the same scenario.
The first three cases are successful. In the fourth case, an extra execution report is generated during a trade (an event that occurs after the fifth step of the scenario). In the fifth case, an execution report with incorrect values is generated during a trade. In the sixth case, the script is interrupted because there is no such instrument in the "system" and we get a BusinessMessageReject.
The demo script represents a linear test-automation example.
What happens after running the script:
After running, the script sends events to the estore to structure the data that will be generated during the run. It then sends a request to the act-fix to send a message. The act-fix redirects this message to the codec-fix encoder; after encoding, the message is sent to the conn (demo-conn1 or demo-conn2), saved to the database in raw format, and sent using the FIX protocol. The FIX server (fix-demo-server1 or fix-demo-server2, respectively), the part representing the remote system, receives the message and redirects it to the codec-sim-fix decoder. The codec-sim-fix decodes the message and sends it to the sim-demo (which you have launched locally). The sim-demo generates responses to this message and also saves the information to files. The responses from the sim-demo are sent to the codec-sim-fix for encoding, and then to the FIX servers (fix-demo-server1 and dc-demo-server1, or fix-demo-server2 and dc-demo-server2, respectively) for sending. The conns (demo-conn1 and demo-dc1, or demo-conn2 and demo-dc2, respectively) accept the responses, save them to the database, and send them to the codec-fix for decoding. Decoded messages from the codec-fix go to the act-fix to be returned to the script (in case we need to reuse a field), enter the check1 to check the message on demand, and go to the recon for passive verification and reconciliation.
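The encode/decode hops described above can be illustrated with a toy tag=value codec (a simplified sketch, not the actual codec-fix implementation; real FIX codecs also handle BodyLength, CheckSum, session tags, and repeating groups):

```python
# Toy sketch of a FIX-style tag=value codec, illustrating the
# encode (parsed dict -> raw string) and decode (raw string -> dict)
# hops performed by codec-fix / codec-sim-fix. Deliberately simplified.
SOH = "\x01"  # FIX field delimiter

def encode(fields: dict) -> str:
    """Serialize a parsed message into a raw tag=value string."""
    return SOH.join(f"{tag}={value}" for tag, value in fields.items()) + SOH

def decode(raw: str) -> dict:
    """Parse a raw tag=value string back into a tag -> value dict."""
    pairs = (field.split("=", 1) for field in raw.split(SOH) if field)
    return {int(tag): value for tag, value in pairs}

if __name__ == "__main__":
    # 35=D is a NewOrderSingle; the other tags are illustrative.
    new_order = {35: "D", 11: "order-1", 55: "INSTR1", 38: "100"}
    raw = encode(new_order)
    assert decode(raw) == new_order
```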
Having received confirmation from the act-fix that the message was sent, the script sends a request to the check1. The request contains a mask for checking a message or messages: it indicates the key and non-key fields, the fragment of the message stream where the required message should be located, as well as the expected order and the absence of unnecessary messages (more about CheckSequenceRule). check1 checks asynchronously, so the script switches back to sending messages according to the scenario.
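The key/non-key matching described above can be sketched as follows (a simplified illustration of the idea, not the actual check1 logic; the field names are hypothetical):

```python
# Simplified sketch of check1-style verification: key fields locate
# the candidate message in a stream fragment, then every expected
# field is compared and reported as PASSED/FAILED.
def verify(expected: dict, stream: list, key_fields: set) -> dict:
    """Find a message matching all key fields, then compare every field."""
    for msg in stream:
        if all(msg.get(f) == expected[f] for f in key_fields):
            return {
                field: "PASSED" if msg.get(field) == value else "FAILED"
                for field, value in expected.items()
            }
    return {field: "NO_MATCH" for field in expected}

if __name__ == "__main__":
    stream = [
        {"ClOrdID": "order-1", "Side": "BUY", "OrderQty": "100"},
        {"ClOrdID": "order-2", "Side": "SELL", "OrderQty": "30"},
    ]
    expected = {"ClOrdID": "order-2", "Side": "SELL", "OrderQty": "40"}
    result = verify(expected, stream, key_fields={"ClOrdID"})
    assert result == {"ClOrdID": "PASSED", "Side": "PASSED", "OrderQty": "FAILED"}
```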
The check2-recon is a component for passive verification and reconciliation. For example, if we receive the same data over different protocols, we do not have to describe each message for each protocol in the script. We can check only one protocol in the script and verify all the rest with the help of the Recon rules. Recon also runs constantly and performs checks regardless of whether the script is running.
The demo scenario for the recon consists of five rules:
- Match_ExecutionReports_with_dropcopy - according to the scenario, we have 2 firms; each firm has one regular FIX connection and one drop-copy connection. In this rule, we check that the same messages are received on both connections (demo-conn1/demo-dc1; demo-conn2/demo-dc2).
- Match_trades_by_TrdMatchID - in this rule, we check Execution Reports with the same TrdMatchID: that they match field by field and that each report has its own pair.
- Match_Orders_between_the_system_logs_and_FIX - in this rule, we compare the orders that were generated by the script with the log lines of the remote system in which these orders were written.
- Match_Orders_between_fix_and_csv_file - in this rule, we compare the orders generated by the script with the lines in the csv file that were generated by the remote system when receiving the orders.
- Match_ExecutionReports_between_fix_and_csv_file - in this rule, we compare the Execution Reports received from the remote system via FIX with the lines in the csv file that were generated by the remote system when processing the orders.
The recon-demo represents a rule-based checking example.
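A rule like Match_trades_by_TrdMatchID can be sketched roughly like this (a simplified illustration of the pairing idea, not the actual check2-recon code; the compared field names are hypothetical):

```python
# Simplified sketch of a recon rule: group Execution Reports by
# TrdMatchID, require each trade to have exactly one report per side,
# and compare the shared fields of the paired reports.
from collections import defaultdict

def match_trades(reports: list, shared_fields: tuple) -> dict:
    """Return {TrdMatchID: "MATCHED" | "UNPAIRED" | "MISMATCH"}."""
    groups = defaultdict(list)
    for report in reports:
        groups[report["TrdMatchID"]].append(report)
    results = {}
    for trd_id, pair in groups.items():
        if len(pair) != 2:
            results[trd_id] = "UNPAIRED"      # a report without its pair
        elif all(pair[0].get(f) == pair[1].get(f) for f in shared_fields):
            results[trd_id] = "MATCHED"       # both sides agree field by field
        else:
            results[trd_id] = "MISMATCH"      # paired, but fields differ
    return results

if __name__ == "__main__":
    reports = [
        {"TrdMatchID": "T1", "LastPx": "10", "LastQty": "5"},
        {"TrdMatchID": "T1", "LastPx": "10", "LastQty": "5"},
        {"TrdMatchID": "T2", "LastPx": "11", "LastQty": "7"},
    ]
    assert match_trades(reports, ("LastPx", "LastQty")) == {
        "T1": "MATCHED",
        "T2": "UNPAIRED",
    }
```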
Report viewer UI:
All raw messages and test events are stored in the centralized data lake and then shown in a web-based GUI, which helps to manage and analyze test data. The GUI is divided into two parts: Events and Messages.
On the Event tab you can see the tree of all executed test cases (act events, check1 events, check2-recon events, conn events). Events may contain different data: information about sending messages, incoming messages, comparison tables, where you can check that expected and actual results match or don’t match.
The Message tab is the list of outgoing and incoming messages. It is linked with Events: when you choose an event with a message, this message is displayed in the list on the Message tab. You can also navigate through the message list without reference to any event for extra analysis.
Events and messages are stored in database without time limits, so you can return to your test scenarios anytime.
How it works:
When we request information in the rpt-data-viewer, the rpt-data-provider requests this information from the database. If the information contains messages, they are sent to the appropriate codec for decoding. The hierarchy that we see in the viewer is built from the events and the decoded messages.

Get in touch with us to learn more about th2 by email: [email protected]
- Architecture
- Tutorials