
Byzantine Fault Tolerant Eventually Consistent Algorithms

This research was funded by Maidsafe and open-sourced at their request (MIT/BSD-3).

This repository contains a Byzantine fault tolerant system for achieving network agreement over eventually consistent algorithms.

The system is implemented as a three-layered onion. The center is the eventually consistent algorithm, e.g. CRDTs or perhaps a causal asset transfer system (AT2). This core is responsible for validating and applying incoming operations; it is the "business logic" of the system.

On top of the core we layer an implementation of SecureBroadcast. SecureBroadcast provides our "double spend" protection: it ensures that the network will never accept conflicting operations.

At the outermost layer, we have our network implementation, that is, the layer that does the dirty work of interacting with the real world: sending and receiving packets, ensuring reliable packet delivery, and so on.
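As a rough sketch of how these layers nest (the type names below are illustrative only, not the actual types in this repository):

struct Network<SB> {
    secure_broadcast: SB, // middle layer: validates and source-orders broadcast operations
    // ... packet queues, peer addresses, etc. would live here
}

struct SecureBroadcast<A> {
    algorithm: A, // core layer: the CRDT / AT2 "business logic"
    // ... delivery proofs, pending operations, etc.
}

// A node running the BFT ORSWOT specialization would then look roughly like:
// type Node = Network<SecureBroadcast<BFTOrswot>>;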

Running the Test Nets

We use quickcheck to generate a wide range of network behaviors and to check that our correctness properties hold.

To run the test nets, run:

# install rust + cargo (see https://rustup.rs/)

QUICKCHECK_TESTS=1000 cargo test
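
The QUICKCHECK_TESTS environment variable controls how many cases quickcheck generates per property. For orientation, a quickcheck property test looks roughly like the following; this is a made-up example (toy property, hypothetical module name), not one of the tests in this repository, and it assumes the quickcheck and quickcheck_macros crates as dev-dependencies:

#[cfg(test)]
mod tests {
    use quickcheck_macros::quickcheck;

    // Toy property: summing a list of numbers is insensitive to the order
    // of the elements. quickcheck generates many random `ops` vectors and
    // checks that the property returns true for each of them.
    #[quickcheck]
    fn prop_sum_is_order_insensitive(mut ops: Vec<u8>) -> bool {
        let forward: u64 = ops.iter().map(|&x| u64::from(x)).sum();
        ops.reverse();
        let reversed: u64 = ops.iter().map(|&x| u64::from(x)).sum();
        forward == reversed
    }
}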

Generating Message Sequence Charts

Some tests generate <test_name>.msc files after a run to help you understand what is going on inside the network.

You can process these .msc files with mscgen. Get yourself a copy of the binary, then run

mscgen -T png -i onboarding.msc -o out.png

Replace onboarding.msc with the test you'd like to study.

Repository Structure

bft-crdts/src/
├── bank                              # Specialization of the network to a distributed bank
│   ├── bft_bank_net.rs
│   ├── bft_bank.rs
│   └── mod.rs
├── orswot                            # Specialization of the network to an ORSWOT
│   ├── bft_orswot_net.rs
│   ├── bft_orswot.rs
│   └── mod.rs
├── actor.rs                          # the actor is the public identity of a process
├── deterministic_secure_broadcast.rs # implementation of Secure Broadcast
├── lib.rs
├── net.rs                            # The in-memory network layer implementation
├── traits.rs                         # The SecureBroadcastAlgorithm trait
└── at2_impl.rs                       # An implementation of AT2 as described in the paper

Onion Layers

The Network Layer

The network layer implemented in this repository is a simulated, in-memory network that assumes packets are delivered reliably (no packet drops) but in arbitrary order (not necessarily the order in which they were sent).

In a production deployment, the network layer would need to be adapted to real world network constraints and all the complexity that entails.
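
A minimal sketch of that delivery model, reliable but arbitrarily ordered (hypothetical types; the actual implementation lives in net.rs):

use rand::seq::SliceRandom;

// Hypothetical packet type for illustration.
#[derive(Clone, Debug)]
struct Packet {
    source: u64,
    dest: u64,
    payload: Vec<u8>,
}

// Every queued packet is eventually delivered (no drops), but the queue is
// shuffled first, so delivery order need not match send order.
struct SimNet {
    in_flight: Vec<Packet>,
}

impl SimNet {
    fn send(&mut self, packet: Packet) {
        self.in_flight.push(packet);
    }

    fn deliver_all(&mut self, mut handle: impl FnMut(Packet)) {
        self.in_flight.shuffle(&mut rand::thread_rng());
        for packet in self.in_flight.drain(..) {
            handle(packet);
        }
    }
}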

Secure Broadcast

Secure broadcast is similar in operation to a 2-phase commit. It differs in that the underlying algorithm decides the level of parallelism. The only constraint directly imposed by secure broadcast is that operations produced by an actor are processed in the order that actor created them (source ordering). Below is a sequence diagram of the secure broadcast algorithm; process states, as they evolve, are shown on the left.

Secure Broadcast Sequence Diagram
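
To make the source-ordering constraint concrete, here is a hedged sketch of the bookkeeping it implies (hypothetical names, not the code in deterministic_secure_broadcast.rs): each replica tracks the next expected sequence number per actor and only processes an operation whose sequence number is the one it expects.

use std::collections::HashMap;

// Hypothetical actor identifier; the repository defines its own in actor.rs.
type ActorId = u64;

// Operation N+1 from an actor is processed only after operation N from that
// same actor (source ordering). Operations from different actors may
// interleave freely, which is where the parallelism comes from.
struct SourceOrdering {
    next_seq: HashMap<ActorId, u64>,
}

impl SourceOrdering {
    fn is_deliverable(&self, from: ActorId, seq: u64) -> bool {
        seq == *self.next_seq.get(&from).unwrap_or(&0)
    }

    fn mark_delivered(&mut self, from: ActorId, seq: u64) {
        debug_assert!(self.is_deliverable(from, seq));
        self.next_seq.insert(from, seq + 1);
    }
}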

Eventually Consistent Algorithm

At the core, we have the algorithm over which we are providing Byzantine fault tolerance.

The contract the algorithm must fulfill is given by this trait:

pub trait SecureBroadcastAlgorithm: Clone + Debug + Eq {
    type Op: Debug + Clone + Hash + Eq + Serialize; // The set of ops this algorithm accepts
    type ReplicatedState: Clone + Debug + Eq; // The snapshot state that is used to onboard new peers

    /// Initialize a new replica of this algorithm
    fn new(actor: Actor) -> Self;

    /// Take a snapshot of this algorithm's state
    fn state(&self) -> Self::ReplicatedState;

    /// Called when onboarding a new replica of this algorithm
    fn sync_from(&mut self, other: Self::ReplicatedState);

    /// Validate any incoming operation; here you must perform the Byzantine fault
    /// tolerance checks specific to your algorithm
    fn validate(&self, from: &Actor, op: &Self::Op) -> bool;

    /// Execute an operation after the network has validated it
    fn apply(&mut self, op: Self::Op);
}
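
As an illustration of the contract (not code from this repository), a toy per-actor counter could implement the trait roughly as below. It assumes the Actor type from actor.rs, serde's Serialize, and that Actor carries the derives the trait bounds require; the interesting part is validate, which rejects operations where an actor tries to bump someone else's counter or to move a counter backwards.

// Assumes `Actor`, `SecureBroadcastAlgorithm`, and serde's `Serialize` are in scope.
use std::collections::HashMap;

#[derive(Clone, Debug, Eq, PartialEq)]
struct BFTCounter {
    actor: Actor,
    counts: HashMap<Actor, u64>,
}

#[derive(Clone, Debug, Hash, Eq, PartialEq, Serialize)]
enum CounterOp {
    // The sending actor claims its own count is now `new_count`.
    Bump { actor: Actor, new_count: u64 },
}

impl SecureBroadcastAlgorithm for BFTCounter {
    type Op = CounterOp;
    type ReplicatedState = HashMap<Actor, u64>;

    fn new(actor: Actor) -> Self {
        Self { actor, counts: HashMap::new() }
    }

    fn state(&self) -> Self::ReplicatedState {
        self.counts.clone()
    }

    fn sync_from(&mut self, other: Self::ReplicatedState) {
        // Counters only grow, so take the per-actor maximum.
        for (actor, count) in other {
            let entry = self.counts.entry(actor).or_insert(0);
            *entry = (*entry).max(count);
        }
    }

    fn validate(&self, from: &Actor, op: &Self::Op) -> bool {
        match op {
            CounterOp::Bump { actor, new_count } => {
                // Byzantine checks: an actor may only bump its own counter,
                // and the counter must strictly increase.
                actor == from && *new_count > *self.counts.get(actor).unwrap_or(&0)
            }
        }
    }

    fn apply(&mut self, op: Self::Op) {
        match op {
            CounterOp::Bump { actor, new_count } => {
                self.counts.insert(actor, new_count);
            }
        }
    }
}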


References + Prior Art
