Important Note for Artifact Reviewers: Please use the artifact-evaluation branch.
TrustMesh is a blockchain-enabled distributed computing framework designed for trustless heterogeneous IoT environments. It implements a novel three-layer architecture that combines permissioned blockchain technology with a multi-phase consensus protocol, based on Practical Byzantine Fault Tolerance (PBFT), for scheduling decisions.
- Three-Layer Architecture:
  - Network Management Layer for system setup and application management
  - Computation Layer with blockchain-based consensus
  - Perception Layer for IoT device integration
- Byzantine Fault Tolerance: Maintains security and consensus while supporting non-deterministic scheduling algorithms
- Heterogeneous Environment Support: Designed to work across varied IoT devices and computing resources
- Secure Workflow Management: Implements complex workflow pipelines with dependency management
- Efficient Resource Allocation: Uses Least-Connected Dynamic Weighted Round Robin (LCDWRR) scheduling (an illustrative sketch follows this list)
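The actual LCDWRR scheduler is part of the Computation Layer and is not reproduced here; the snippet below is only a minimal sketch of the general least-connected, weighted selection idea, with hypothetical node names, weights, and connection counts:

# Minimal sketch of least-connected dynamic weighted round-robin selection.
# Node names, weights, and connection counts are hypothetical examples;
# TrustMesh's real scheduler lives in the Computation Layer.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    weight: float            # relative capacity of the node
    active_connections: int  # tasks currently assigned to the node

def pick_node(nodes):
    # Prefer the node with the lowest connections-to-weight ratio, so
    # lightly loaded, higher-capacity nodes are chosen first.
    return min(nodes, key=lambda n: n.active_connections / n.weight)

nodes = [Node("compute-1", 4.0, 3), Node("compute-2", 2.0, 1), Node("compute-3", 1.0, 0)]
selected = pick_node(nodes)
selected.active_connections += 1  # record the new assignment
print(f"Scheduling task on {selected.name}")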
The framework has been extensively tested on Ubuntu machines, but it should also run on macOS and Windows. For the development environment, you'll need:
- Docker
- K3S Kubernetes cluster (setup instructions provided below)
- Python 3.8
Based on our experimental evaluation:
- Computation Nodes: Minimum 1 CPU core, 4GB RAM
- Control Nodes: Minimum 1 CPU core, 8GB RAM | Recommended 8 CPU cores, 32GB RAM
- IoT Nodes: Minimum 1 CPU core, 4GB RAM
First, clone the TrustMesh repository and navigate to the project directory:
git clone https://github.com/murtazahr/TrustMesh.git
cd TrustMesh
If you don't have a K3S cluster already configured:
- Navigate to the cluster setup guide:
cd k3s-cluster-setup-guide
- Follow the detailed instructions in K3S_CLUSTER_SETUP.md for:
  - Cluster initialization
  - Node configuration
  - Network setup
  - Required naming conventions
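Once the guide has been completed, a quick sanity check with standard kubectl (not a TrustMesh-specific command) confirms that every node has joined the cluster:

kubectl get nodes -o wide

All nodes should report a Ready status before you continue.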
Once your K3S cluster is ready, follow these steps:
This step is required for first-time setup or when code changes are made.
Prerequisites:
- Update DOCKER_USERNAME in build-project.sh with your DockerHub username
- Log in to the Docker CLI using docker login
- Review and customize the applications to be deployed to the network in build-project.sh
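For example, you can locate the placeholder to edit and authenticate the Docker CLI with standard commands (shown purely as a convenience):

grep -n "DOCKER_USERNAME" build-project.sh
docker login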
Run the build script:
chmod +x build-project.sh && ./build-project.sh
Note: The script builds and compresses all application images for network deployment. Remove the cold-chain boilerplate applications if you don't need them.
Prerequisites:
- Update the image URLs for the containers based on your DOCKER_USERNAME
After successful image building, deploy the network:
chmod +x build-and-deploy-network.sh && ./build-and-deploy-network.sh
Important: The deployment script will prompt for several configuration options. Have your network configuration details ready.
After deployment, verify your installation:
kubectl get pods
Important: All pods should be in the Running state. This may take a couple of minutes.
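If you prefer not to poll manually, standard kubectl can block until every pod reports Ready (the 300-second timeout below is an arbitrary example):

kubectl wait --for=condition=Ready pods --all --timeout=300s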
For troubleshooting and additional configuration options, please contact the authors.
To delete all network components and clean up the environment:
chmod +x clean-k8s-environment.sh && ./clean-k8s-environment.sh
This section covers how to deploy applications to the network using the application deployment client.
- Connect to the deployment client:
kubectl exec -it network-management-console-xxxxx -c application-deployment-client bash
- Deploy the application:
python docker_image_client.py deploy_image application.tar app_requirements.json
Important: Upon successful deployment, an application ID will be generated. Save this ID as it's required for workflow creation.
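Because the network-management-console pod name includes a generated suffix (the xxxxx placeholder above), you can look up the exact name with standard kubectl before running the exec command:

kubectl get pods | grep network-management-console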
The app_requirements.json file specifies resource requirements for your application:
{
  "memory": 110557184,
  "cpu": 0.1,
  "disk": 632085504
}
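The memory and disk values in this example appear to be raw byte counts (roughly 105 MB and 603 MB respectively). If you base the disk figure on your built image, the standard Docker CLI can report its size (the image name below is a placeholder):

docker image inspect --format '{{.Size}}' your-app-image:latest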
- The application.tar file is generated automatically when running build-project.sh
- For a complete list of supported commands, run:
python docker_image_client.py
If you encounter deployment issues:
- Verify your resource requirements are within cluster limits
- Ensure the application image was built successfully
- Check the deployment client logs for detailed error messages
This section covers workflow creation using the workflow creation client.
- Connect to the workflow creation client:
kubectl exec -it network-management-console-xxxxx -c workflow-creation-client bash
- Create the workflow:
python workflow_creation_client.py graph.json
Important: Upon successful workflow creation, a workflow ID will be generated. Save this ID as it's required to associate data processing requests with the correct workflow.
The graph.json file specifies the workflow as a DAG in which each node is an application (referenced by its application ID) and edges represent the flow of data or the order of execution:
{
  "start": "7e5131e3-87ca-4eeb-a793-b581fe8e7916",
  "nodes": {
    "7e5131e3-87ca-4eeb-a793-b581fe8e7916": {"next": ["c2bcbd14-f998-4b72-9069-5f9483a459ba"]},
    "c2bcbd14-f998-4b72-9069-5f9483a459ba": {"next": ["b0244328-98b2-42b7-8974-40a73433ba4b"]},
    "b0244328-98b2-42b7-8974-40a73433ba4b": {"next": []}
  }
}
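Before creating the workflow, it can help to confirm the graph is acyclic. The snippet below is a small standalone check written for this guide (it is not part of the TrustMesh clients); it loads graph.json and reports any cycle:

# Standalone sanity check for graph.json (not part of the TrustMesh clients):
# walks the "nodes" map with a depth-first search and reports any cycle.
import json

with open("graph.json") as f:
    nodes = json.load(f)["nodes"]

def successors(node_id):
    return nodes.get(node_id, {}).get("next", [])

WHITE, GRAY, BLACK = 0, 1, 2
color = {node_id: WHITE for node_id in nodes}

def has_cycle(start):
    stack = [(start, iter(successors(start)))]
    color[start] = GRAY
    while stack:
        node_id, children = stack[-1]
        child = next(children, None)
        if child is None:
            color[node_id] = BLACK
            stack.pop()
        elif color.get(child, WHITE) == GRAY:
            return True
        elif color.get(child, WHITE) == WHITE:
            color[child] = GRAY
            stack.append((child, iter(successors(child))))
    return False

if any(has_cycle(n) for n in nodes if color[n] == WHITE):
    print("graph.json contains a cycle - fix it before creating the workflow")
else:
    print("graph.json is a valid DAG")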
If you encounter workflow creation issues:
- Verify all your application IDs are valid
- Verify your graph.json contains a valid DAG with no cyclic dependencies
- Check the workflow creation client logs for detailed error messages
This section provides the steps to initiate data processing requests for the sample cold-chain application.
- Connect to the iot-node:
kubectl exec -it iot-x-xxxxx -c iot-node bash
- Initiate the data processing request:
python cold-chain-data-simulation.py <workflowID>
Note: You may follow a similar pattern to define your custom iot-node processes.
TrustMesh implements a three-layer architecture:
- Network Management Layer
  - Manages application deployment and workflow orchestration
  - Implements clients for application deployment and workflow management
  - Maintains local registry for container images
- Computation Layer
  - Implements blockchain nodes and consensus mechanism
  - Manages data storage and processing
  - Handles task scheduling and execution
- Perception Layer
  - Interfaces with IoT devices
  - Handles data collection and result delivery
  - Implements zero-process baseline approach
Based on our experimental evaluation with a 21-node testbed and the cold-chain application:
- Request Round Trip (RRT) time: 33.54 - 36.34 seconds
- Framework Overhead: 3.25 - 4.19 seconds
- Scales effectively up to 16 computation nodes
Note: Please keep in mind that we used several versions of the cold-chain anomaly detection application with varying computational requirements to attain these results. Only one of those variations is included in this project, so your RRT may vary. However, you should find that the Framework Overhead (FO) remains the same.
This project is licensed under the GNU General Public License - see the LICENSE file for details.
If you use TrustMesh in your research, please cite:
@misc{rangwala2024trustmeshblockchainenabledtrusteddistributed,
title={TrustMesh: A Blockchain-Enabled Trusted Distributed Computing Framework for Open Heterogeneous IoT Environments},
author={Murtaza Rangwala and Rajkumar Buyya},
year={2024},
eprint={2411.13039},
archivePrefix={arXiv},
primaryClass={cs.DC},
url={https://arxiv.org/abs/2411.13039},
}
Cloud Computing and Distributed Systems (CLOUDS) Laboratory
School of Computing and Information Systems
The University of Melbourne, Australia