Merge pull request #67 from RTradeLtd/docupdate
Documentation Updates
Showing 10 changed files with 178 additions and 11 deletions.
# Benchmarks

Benchmarking TemporalX is a constantly evolving process. For our initial release we compared processing the same workloads, in the same environment, on go-ipfs and on TemporalX. To read about that, check out our [medium post](https://medium.com/temporal-cloud/temporalx-vs-go-ipfs-official-node-benchmarks-8457037a77cf).

Our primary source of benchmarks for TemporalX is our CI builds: with every new release we run the same set of benchmarks. The idea is that we can see performance changes in real time and catch regressions as they happen, not weeks or months later. All of our CI benchmarks are recorded with the **amazing** [gobenchdata](https://github.com/marketplace/actions/continuous-benchmarking-for-go). Our setup is a little convoluted because the repository is closed source, we run the CI benchmarks and then manually upload the results to GitHub Pages, but that is only because GitHub Actions don't work very well with private repositories; gobenchdata itself is an all around extremely solid benchmark recording tool.

Since then we have maintained a set of generalized benchmarks through a public GitHub Pages site on the TemporalX repository, located [here](https://rtradeltd.github.io/TemporalX/), which uses [gobenchdata](https://github.com/marketplace/actions/continuous-benchmarking-for-go) to capture new benchmarks with each release.

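For context on what these CI benchmarks look like under the hood: gobenchdata records the output of standard `go test -bench` runs, so each data point on the pages linked below corresponds to an ordinary Go benchmark function. Below is a minimal, self-contained sketch of such a benchmark; the `refCounter` type and the benchmark name are illustrative stand-ins, not TemporalX's actual benchmark code.

```go
package counter_test

import (
	"strconv"
	"sync"
	"testing"
)

// refCounter is a stand-in for the component under test; the real CI
// benchmarks exercise TemporalX internals, which are not shown here.
type refCounter struct {
	mu     sync.Mutex
	counts map[string]int64
}

func (r *refCounter) Increment(key string) {
	r.mu.Lock()
	r.counts[key]++
	r.mu.Unlock()
}

// BenchmarkIncrement has the shape of benchmark that `go test -bench=.`
// produces and that gobenchdata can record and graph across releases.
func BenchmarkIncrement(b *testing.B) {
	r := &refCounter{counts: make(map[string]int64)}
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		r.Increment("cid-" + strconv.Itoa(i%1024))
	}
}
```
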
# CI Benchmarks

The following are benchmark samples that we capture during our CI processes to ensure consistent performance over time, and to catch performance regressions as they happen:

* [Reference Counter Benchmarks](https://rtradeltd.github.io/counter/)
  * These run benchmarks against the two types of reference counters we have
  * They measure GC, Delete, and Put times
* [TemporalX Benchmarks](https://benchx.temporal.cloud/)
  * These run TemporalX's benchmarks
  * They measure a variety of the API calls

# Published Reports

Every once in a while we publish reports/analyses of the performance you get with TemporalX:

* [TemporalX vs Go-IPFS](https://medium.com/temporal-cloud/temporalx-vs-go-ipfs-official-node-benchmarks-8457037a77cf)
  * Initially captured when we first released TemporalX; we've gotten even faster since then. Once go-ipfs v0.5.0 is released we will do a follow-up to this benchmark
  * You can expect 7x-10x performance gains when fully leveraging gRPC benefits
  * You can expect 3x performance gains when using gRPC like HTTP (connect to server, send request, disconnect)
* [TemporalX Replication vs IPFS Cluster](https://medium.com/temporal-cloud/nodes-w-built-in-replication-high-performance-security-consensus-free-6657ac9e44ea)
  * TemporalX replicated the dataset in 128 seconds
  * IPFS Cluster replicated the same dataset in 1020 seconds
# DHT

TemporalX uses forked versions of `go-libp2p-kad-dht` (which we call `libp2p-kad-dht`) and `go-libp2p-kbucket` (which we call `libp2p-kbucket`). The main reason for forking these libraries is so we have more control over what changes we bring in, and when we have to deal with breaking changes. Additionally, it allows us to perform some optimizations and codebase cleanups without having to worry about whether they are accepted into the upstream codebase before we use them.

Recently (v0.7.0) the `go-libp2p-kad-dht` library went through some major refactors, however it's not clear how thoroughly tested these changes are outside of in-repo unit testing and basic testing within testground (the go-ipfs testing library). Additionally, none of these changes will be thoroughly exercised within go-ipfs for quite some time. This has led us to define a consistent versioning scheme for our fork.

| Repo Name | Our Version | Upstream Max Version | Notes |
|-----------|-------------|----------------------|-------|
| libp2p-kad-dht | v1 | v0.5.2 | current version used by TemporalX |
| libp2p-kad-dht | v2 | v0.6.1 | next version to be used by TemporalX |
| libp2p-kad-dht | v3 | v0.7.0 | not clear if this will be used by TemporalX |

# Notes

v0.7.0 of upstream represents some significant changes to the way the DHT library works. It introduces the concept of a "Dual DHT": a WAN/public DHT and a LAN/private DHT. Personally speaking, I'm not sure this is the "right way" to fix things with the libp2p DHT. Additionally, it is a pretty major change to roll out which, from what I can tell, hasn't been thoroughly tested nor benchmarked at scale. At the time this documentation is being written, it's unclear whether or not we will move to this system. We will be moving to our v2 of `libp2p-kad-dht` eventually, as this brings us closer to a more realistic kademlia style DHT.

# Notable Changes

* The biggest change by far is the removal of `jbenet/goprocess` in favor of using `context.Context` + `context.CancelFunc` directly (see the sketch after this list)
* Removal of `ipfs/go-log`
* Switching to `prometheus` for metric tracking
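As a rough illustration of what the `goprocess` removal means in practice (this is a generic Go pattern, not code from the fork): the owner of a subsystem creates a cancellable context, background goroutines watch `ctx.Done()`, and shutdown is simply calling the `CancelFunc` and waiting.

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// worker owns a background goroutine whose lifetime is tied to a context
// rather than to a goprocess.Process.
type worker struct {
	cancel context.CancelFunc
	wg     sync.WaitGroup
}

func newWorker(parent context.Context) *worker {
	ctx, cancel := context.WithCancel(parent)
	w := &worker{cancel: cancel}
	w.wg.Add(1)
	go func() {
		defer w.wg.Done()
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done(): // shutdown is signalled by the CancelFunc
				return
			case <-ticker.C:
				fmt.Println("periodic maintenance tick")
			}
		}
	}()
	return w
}

// Close replaces goprocess's proc.Close(): cancel the context and wait for
// the background goroutine to exit.
func (w *worker) Close() {
	w.cancel()
	w.wg.Wait()
}

func main() {
	w := newWorker(context.Background())
	time.Sleep(2500 * time.Millisecond)
	w.Close()
}
```
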
# gRPC Based Blockstore

The [`Blockstore`](https://github.com/ipfs/go-ipfs-blockstore/blob/master/blockstore.go#L35) interface is a thin wrapper around the `Datastore` interface that bridges IPFS CIDs and the underlying datastores providing key-value storage for on-disk IPFS data. As with the `DAGService` interface, the `Blockstore` interface is widely used across projects that want to use IPFS, and is perhaps even more widely used than `DAGService`.

By using the [`RemoteBlockstore`](https://github.com/RTradeLtd/go-ipfs-blockstore/pull/7) module, you can swap out the internals of a variety of projects in the IPFS ecosystem so that they no longer run a blockstore locally, and instead defer all blockstore functionality to a remote TemporalX node.
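To show the general shape of this bridging, here is an illustrative sketch (not the `RemoteBlockstore` module itself): `remoteBlocks` is a hypothetical stand-in for a gRPC client, and the wrapper simply forwards each blockstore method to the remote node. The interface assertion assumes the `go-ipfs-blockstore` interface as it exists at the time of writing; newer releases may change method signatures.

```go
package remoteblockstore

import (
	"context"
	"errors"

	blocks "github.com/ipfs/go-block-format"
	cid "github.com/ipfs/go-cid"
	blockstore "github.com/ipfs/go-ipfs-blockstore"
)

// remoteBlocks is a hypothetical stand-in for whatever gRPC client you have;
// the method names here are illustrative, not TemporalX's actual API.
type remoteBlocks interface {
	GetBlock(ctx context.Context, c cid.Cid) ([]byte, error)
	PutBlock(ctx context.Context, c cid.Cid, data []byte) error
	DeleteBlock(ctx context.Context, c cid.Cid) error
	HasBlock(ctx context.Context, c cid.Cid) (bool, error)
}

// remoteBlockstore satisfies blockstore.Blockstore by forwarding everything
// to the remote node instead of touching a local datastore.
type remoteBlockstore struct {
	ctx    context.Context
	client remoteBlocks
}

var _ blockstore.Blockstore = (*remoteBlockstore)(nil)

func (r *remoteBlockstore) Get(c cid.Cid) (blocks.Block, error) {
	data, err := r.client.GetBlock(r.ctx, c)
	if err != nil {
		return nil, err
	}
	return blocks.NewBlockWithCid(data, c)
}

func (r *remoteBlockstore) GetSize(c cid.Cid) (int, error) {
	data, err := r.client.GetBlock(r.ctx, c)
	if err != nil {
		return 0, err
	}
	return len(data), nil
}

func (r *remoteBlockstore) Put(b blocks.Block) error {
	return r.client.PutBlock(r.ctx, b.Cid(), b.RawData())
}

func (r *remoteBlockstore) PutMany(bs []blocks.Block) error {
	for _, b := range bs {
		if err := r.Put(b); err != nil {
			return err
		}
	}
	return nil
}

func (r *remoteBlockstore) Has(c cid.Cid) (bool, error) { return r.client.HasBlock(r.ctx, c) }

func (r *remoteBlockstore) DeleteBlock(c cid.Cid) error { return r.client.DeleteBlock(r.ctx, c) }

// HashOnRead is a no-op in this sketch; a real implementation could re-hash
// returned data to verify it matches the requested CID.
func (r *remoteBlockstore) HashOnRead(enabled bool) {}

// AllKeysChan would require a streaming RPC to enumerate remote blocks,
// which is out of scope for this sketch.
func (r *remoteBlockstore) AllKeysChan(ctx context.Context) (<-chan cid.Cid, error) {
	return nil, errors.New("not implemented in this sketch")
}
```
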
# gRPC Based DAGService

One of the core interfaces for using IPFS is the [`DAGService`](https://github.com/ipfs/go-ipld-format/blob/master/merkledag.go#L54) interface, which specifies how you can get and add IPLD nodes, both to the network and to your local blockstore.

[dag_service.go](../go/dag_service.go) showcases a working example of how you can use the `NodeAPIClient` generated gRPC client to satisfy `ipld.DAGService` using TemporalX. This allows you to swap out existing `DAGService` implementations used by clients such as go-ipfs or ipfs-lite for one that relies on a remote TemporalX server.

By using this module you are not required to run a DAGService locally, and can instead delegate all processing to a remote TemporalX server via the `Dag` RPC call. This is particularly interesting for use with things like `go-ds-crdt`, as well as for running TemporalX in resource constrained environments while fully leveraging the resources of a more powerful, remote TemporalX service.

For an example of how this is used, please consult [s3x](https://github.com/RTradeLtd/s3x/pull/40).
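For a feel of what such a wrapper looks like, here is a minimal sketch (not the dag_service.go shipped with the examples): `remoteDAG` is a hypothetical stand-in for the generated gRPC client, and the wrapper simply forwards each `ipld.DAGService` method to it.

```go
package remotedag

import (
	"context"

	cid "github.com/ipfs/go-cid"
	ipld "github.com/ipfs/go-ipld-format"
)

// remoteDAG is a hypothetical stand-in for the generated gRPC client; the
// method names are illustrative, not TemporalX's actual API.
type remoteDAG interface {
	GetNode(ctx context.Context, c cid.Cid) (ipld.Node, error)
	AddNode(ctx context.Context, n ipld.Node) error
	RemoveNode(ctx context.Context, c cid.Cid) error
}

// dagService delegates all DAGService operations to the remote node.
type dagService struct {
	client remoteDAG
}

var _ ipld.DAGService = (*dagService)(nil)

func (d *dagService) Get(ctx context.Context, c cid.Cid) (ipld.Node, error) {
	return d.client.GetNode(ctx, c)
}

// GetMany streams results back on a channel, as required by ipld.NodeGetter.
func (d *dagService) GetMany(ctx context.Context, cids []cid.Cid) <-chan *ipld.NodeOption {
	out := make(chan *ipld.NodeOption, len(cids))
	go func() {
		defer close(out)
		for _, c := range cids {
			n, err := d.client.GetNode(ctx, c)
			select {
			case out <- &ipld.NodeOption{Node: n, Err: err}:
			case <-ctx.Done():
				return
			}
		}
	}()
	return out
}

func (d *dagService) Add(ctx context.Context, n ipld.Node) error {
	return d.client.AddNode(ctx, n)
}

func (d *dagService) AddMany(ctx context.Context, ns []ipld.Node) error {
	for _, n := range ns {
		if err := d.Add(ctx, n); err != nil {
			return err
		}
	}
	return nil
}

func (d *dagService) Remove(ctx context.Context, c cid.Cid) error {
	return d.client.RemoveNode(ctx, c)
}

func (d *dagService) RemoveMany(ctx context.Context, cids []cid.Cid) error {
	for _, c := range cids {
		if err := d.Remove(ctx, c); err != nil {
			return err
		}
	}
	return nil
}
```
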
# Memory Constrained Environments

In certain situations you may be running TemporalX in a memory constrained environment. With default configurations TemporalX is already supremely memory efficient, however there are certain situations where you may want to push this memory efficiency further, say on a Raspberry Pi. Although easier on memory, the configurations in this guide will have a somewhat noticeable impact on performance, and as such are only recommended in memory constrained environments.

# What Counts As "Memory Constrained"

For the purpose of this documentation, we define memory constrained as embedded devices, ultralight laptops, and SoC boards such as the Raspberry Pi. For a guideline of which configurations you should apply based on your system specifications, consult the table below.

| Available RAM | Recommended Configurations |
|---------------|----------------------------|
| 1GB | All configurations |
| 4GB | Persistent DHT, Datastore Peerstore, Main Datastore |
| 4GB+ | Above 4GB you probably don't need to follow these recommendations |

# Configuration Recommendations

## Persistent DHT

* By enabling persistent DHT mode, you no longer store the entire DHT subsystem in memory, and instead store it on disk. Through our profiling and benchmarking we observed the DHT as being a significant memory hog, and were able to vastly reduce memory consumption by running a persistent DHT
* This will noticeably slow down DHT queries, as records must be fetched from disk instead of from memory

## Datastore Peerstore

* By enabling the datastore peerstore, you no longer have to store tens of thousands of peer IDs, records, and associated information in memory
* This will noticeably slow down peerstore queries, as entries must be fetched from disk instead of from memory

## Queueless Reference Counter

* The queue based reference counter requires multiple additional components that increase memory consumption
* If you insist on using the queue based reference counter, don't use more than 1 worker

## Filesystem Keystore

* Use the filesystem keystore, which stores keys directly on disk, as opposed to the krab keystore (which uses badger and encrypts your keys) or the in-memory keystore

## Main Datastore

* For the main datastore it is recommended that you use leveldb, as this is the most memory efficient datastore available
* Alternatively you can use the badger datastore with `fileLoadingMode` set to `0`

## Low Power Mode

* Low power mode is primarily used to set sensible defaults for LibP2P settings if there are none set in the configuration file
* If you do have LibP2P settings in the configuration file, then low power mode is ignored
* In the future we will be phasing out low power mode in favor of using the config file entirely
# Multi Datastore

TemporalX has a concept of "multi-datastore" that is novel compared to the reference implementations of IPFS, which use one large blockstore separated by key namespaces. Although we use key namespaces in certain places, namespacing keys isn't free: there is a slight overhead which, under most hobbyist circumstances, doesn't matter much, but at production scale workloads overheads add up. To mitigate some of these overheads, we leverage multiple datastores for different purposes. With the exception of persistent DHT mode, which namespaces keys within the main storage datastore, all other stores go into separate datastores.

When enabled, the following subsystems use dedicated datastores:

* Keystore
* Peerstore
* Reference counter store

When enabled, the following subsystems share the same datastore using namespaced keys:

* DHT (note: by default the DHT is stored in memory)
* Main storage store
* Provider queue
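To make the namespacing trade-off concrete, here is a small sketch using the `go-datastore` API (method signatures as they were around the time of writing; newer releases add a `context.Context` parameter). A namespaced store rewrites every key under a prefix inside one shared backing store, while a dedicated store skips that prefixing work entirely:

```go
package main

import (
	"fmt"

	ds "github.com/ipfs/go-datastore"
	nsds "github.com/ipfs/go-datastore/namespace"
)

func main() {
	// One shared backing store with namespaced keys: everything written
	// through the wrapped store is rewritten under the "/peers" prefix.
	base := ds.NewMapDatastore()
	peers := nsds.Wrap(base, ds.NewKey("/peers"))
	if err := peers.Put(ds.NewKey("/QmExamplePeer"), []byte("record")); err != nil {
		panic(err)
	}

	// The prefixed key is what actually lands in the shared store; this key
	// rewriting on every operation is the overhead mentioned above.
	v, err := base.Get(ds.NewKey("/peers/QmExamplePeer"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("shared store holds %q under the namespaced key\n", v)

	// A dedicated datastore for the same data: no prefix rewriting at all,
	// which is what the multi-datastore layout buys for the keystore,
	// peerstore, and reference counter store.
	dedicated := ds.NewMapDatastore()
	if err := dedicated.Put(ds.NewKey("/QmExamplePeer"), []byte("record")); err != nil {
		panic(err)
	}
}
```
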