# Bladebit 2 User Guide (WIP)
Bladebit's CLI syntax has changed to a command-based format, both to accommodate the different plotting methods now included and to allow for some helper tools as well. If you are familiar with CLI tools such as `git`, `docker`, `lldb`, etc., then this should feel right at home.
The new format is as follows:
```
bladebit [GLOBAL_OPTIONS] <command> [COMMAND_OPTIONS]
```
Here `GLOBAL_OPTIONS` are options that may apply to multiple commands, or to the whole application state. `COMMAND_OPTIONS` are options specific to the given `<command>`, and they must be typed immediately after the command.
To learn more about a specific command, type `bladebit help <command>` or `bladebit -h <command>`.
Use the `diskplot` command to use the new disk plotting feature. Type `bladebit help diskplot` or `bladebit diskplot -h` to learn more about the `diskplot` command and all its options.
To begin using the `diskplot` command, you will need your farmer public key, and either your pool public key (for OG plots) or a pool contract address (for pool plots).
You can obtain those values by doing the following with the official chia CLI:
```shell
# To get your farmer public key and pool public key
chia keys show

# To get your pool contract address
chia plotnft show
```
You will also need at least 500 GB of temporary storage for plotting.
As an example, you can start plotting with the minimum configuration required like so:
```shell
./bladebit -f <farmer_pub_key> -c <pool_contract_address> diskplot -t1 <temporary_dir> <output_dir>
```

Replace `<farmer_pub_key>` and `<pool_contract_address>` with your farmer public key and pool contract address, respectively, and `<temporary_dir>` and `<output_dir>` with paths to a temporary work directory and a final output directory, respectively.
With this command, bladebit will immediately begin creating a plot. However, the default configuration is far from ideal: it defaults to using the maximum available CPU threads for all phases.
Bladebit now contains integrated benchmarking tools for disk and memory. They use the same code paths that are used in plotting, but run quickly so you can profile a system before committing to a full plot. `iotest` tests storage throughput.

On Linux:

```shell
./bladebit -t 1 iotest -s 32G /mnt/ssd
```

On Windows:

```shell
.\bladebit.exe -t 1 iotest -s 32G D:\temp\
```
`memtest` tests memory bandwidth:

```shell
./bladebit -t <thread_count> memtest -s 32G
```
It is generally best practice to start with `-t` (threads) set to 2 less than the total thread count on your system, to reserve some threads for background and I/O work. The default bucket count is currently 256.

```shell
./bladebit -t <thread_count> -f <farmer_key> -c <contract_address> diskplot -t1 /mnt/ssd /mnt/hdd
```
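As a quick way to follow that guideline, this small sketch (Linux-only, assuming coreutils' `nproc` is available) computes a starting thread count by reserving 2 threads:

```shell
# Reserve 2 threads for background and I/O work, per the guideline above.
# nproc (coreutils) reports the number of available hardware threads.
total=$(nproc)
plot_threads=$(( total > 2 ? total - 2 : 1 ))
echo "start with: -t $plot_threads (of $total total threads)"
```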
You can add a DRAM cache to reduce SSD writes with `--cache <size>` (for example, `32G`):

```shell
./bladebit -t <thread_count> -f <farmer_key> -c <contract_address> diskplot --cache 32G -t1 /mnt/ssd /mnt/hdd
```
There are also options to tune the thread count used in each phase; see `bladebit diskplot -h` for details.
The optimal system configuration is an enterprise SSD and 128 GB of DRAM. With more than 99 GB of cache in alternate mode, all of temp2 is cached in RAM. 64 buckets will generally be fastest, since more data per bucket is transferred from the SSD to host memory. 64 buckets requires around 11 GB of system memory, plus whatever is left over for cache.

```shell
./bladebit -f <farmer_key> -c <contract_address> -t <thread_count> diskplot -t1 /mnt/ssd/ -b 64 --cache 99G -a /mnt/hdd
```

Here, thread count should be set to the system's maximum thread count minus 1 (total threads - 1), to reserve one thread for I/O. Additional tuning can be done on the per-phase thread counts, but if storage is not a bottleneck this will generally be the fastest setting.
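To illustrate where a figure like `--cache 99G` comes from, here is a rough sizing sketch; the 128 GB machine and the OS headroom value are assumptions, while the ~11 GB working-set figure for 64 buckets comes from the text above:

```shell
# Rough --cache sizing for the 64-bucket setup on a 128 GB machine.
total_gb=128   # assumed total DRAM
work_gb=11     # ~11 GB working memory needed with 64 buckets (see above)
os_gb=18       # headroom left for the OS and other processes (assumption)
cache_gb=$(( total_gb - work_gb - os_gb ))
echo "--cache ${cache_gb}G"   # prints: --cache 99G
```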
If you installed the latest Chia client, bladebit2 is included. Its command line differs from the standalone binary in order to match the existing chiapos and madmax plotter support as much as possible:

```shell
chia plotters bladebit2 -f <farmer_key> -c <contract_address> -r <thread_count> -u 64 --cache 100G --alternate -t /mnt/ssd -d /mnt/hdd
```
Current alpha I/O figures with the default 256 buckets:
- 1.414 TB read per k=32 plot
- 1.313 TB written per plot
- WAF = 1.076 (measured on P5510 3.84TB)
Example bandwidth targets (these will change in beta). Burst bandwidth (as measured by `iotest`) currently needs to be about 3-4x these averages to meet the plotting-time target:

- A 10-minute plot needs 1313.5 GB / 600 seconds = 2.189 GB/s average write bandwidth. This can be achieved on a 32-core machine.
- A 30-minute plot needs 1313.5 GB / 1800 seconds = 730 MB/s average write bandwidth.
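The arithmetic above can be reproduced for any target plot time; a small sketch using `awk` (the 1313.5 GB written per plot is the alpha figure quoted above):

```shell
# Average write bandwidth needed to finish one k=32 plot in a target time.
plot_gb=1313.5            # GB written per plot (alpha, 256 buckets)
for secs in 600 1800; do  # 10-minute and 30-minute targets
  awk -v gb="$plot_gb" -v s="$secs" \
    'BEGIN { printf "%4d s target -> %.3f GB/s average write\n", s, gb/s }'
done
```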