Bladebit plotting at 220 - 280 plots per day on one system

I was thinking of writing a few lines on my plotting experience with bladebit on a dual Epyc 7513 with half a terabyte of ECC DDR4 configured at 2933 MHz. There is a brief YouTube video you can watch of one of the fastest K32 plots being achieved on that system.

Bladebit is run with 128 threads, which matches the total number of hardware threads on that configuration. That server also has 10x 3.2 TB NVMe drives installed, configured as 5x RAID-0 volumes of 2 drives each (buffer1 through buffer5), but no other storage. It has to move the data via a 10 GbE network to the actual farmer.
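
For reference, one of those two-drive RAID-0 buffer volumes might be assembled like this (a minimal sketch: the device names and the XFS choice are assumptions, the mount point matches the script further down):

# stripe two NVMe drives into one buffer volume (hypothetical device names)
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mkfs.xfs /dev/md1
mkdir -p /chia/scratch/disk01
mount /dev/md1 /chia/scratch/disk01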

At 5:15 minutes per fully functional K32 plot (fastest) and 6:28 minutes (slowest observed), that plotter produces over 220 and up to 280 K32 plots in 24 hours. I am going to use the lower-end number for the following explanations so I do not need to keep quoting ranges.
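
The day rates follow directly from the plot times; a quick shell sanity check:

# 5:15 = 315 s per plot (fastest), 6:28 = 388 s (slowest observed)
echo $(( 24 * 3600 / 315 ))   # -> 274 plots/day
echo $(( 24 * 3600 / 388 ))   # -> 222 plots/day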

220x K32 plots are more or less 22 TiB of plot data that needs to be moved off fast enough that the local NVMe buffers, the 5 buffers mentioned earlier, do not run full once plotting has finished writing to them. To get a daily 22 TiB of newly created plot data from the plotter to the farmer, the average network load needs to be 267 MiB/s. On my farmer the hard drives are all individually exposed via NFS. If you have mounted all hard drives under the same directory, for example /chia/plots/disk001, /chia/plots/disk002, /chia/plots/disk{nnn}, then one line added to /etc/exports (/chia/plots 192.168.1.0/16(rw,no_root_squash,async,fsid=0,no_subtree_check,crossmnt)) will do the trick.
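
Spelled out, the farmer-side export and a matching plotter-side mount might look like this (the export line is from above; the hostname "farmer" and the client mount point are assumptions):

# on the farmer: append the export and re-export the file systems
echo '/chia/plots 192.168.1.0/16(rw,no_root_squash,async,fsid=0,no_subtree_check,crossmnt)' >> /etc/exports
exportfs -ra

# on the plotter: mount the whole tree (hypothetical hostname "farmer")
mount -t nfs farmer:/chia/plots /chia/plots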

A single hard drive cannot sustain storing 267 MiB/s of data, so you need at least 2-3 processes that offload plot data from the buffers to the farmer. In the end I came up with a small script that runs in a loop for 5 iterations: it produces 33 plots into a buffer, then moves on to filling the next buffer while kicking off a background job that moves the plots from the previous buffer to one of the exported farmer hard drives. It turns out that an 18 TB hard drive can hold 165x K32 plots, that is 5 x 33 K32 plots…
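
The 165-per-drive figure checks out against the plot size used as a threshold in the script below (a K32 plot is a little under 108.8 GB):

echo $(( 18000000000000 / 108800000000 ))   # -> 165 plots per 18 TB drive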

This amazingly fast plotter (on the same system I managed to produce about 130x K32 per day using the stock plotter, chiapos, and 170x K32 using mad-max) can only be fully leveraged when you manage to get the newly created plot files onto the final hard drives fast enough, without jamming the plotting process. Bladebit requires about 416 GiB of free RAM to run; it has to flush its memory buffers to storage before it can create a new plot. While bladebit is operating it fully loads the system, with all cores mostly at 100% load, so it is only slowed down by how fast it can store the previously created plot before continuing with a new one.

A closer look at the phases of bladebit and its IO activity shows that the process does not only store the plot to disk at the very end; it does so as soon as it can, starting in phase 3. So when it reaches the final point of storing the plot to disk, only an estimated 30 GiB of the final file remain to be written. Amazing! So it is RAM plotting, but I chose to still use flash for my buffers before moving plots to final HDD storage.
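
One way to take that closer look yourself (iostat comes from the sysstat package; the 5-second interval is an arbitrary choice):

# extended per-device statistics in megabytes, refreshed every 5 s, while bladebit runs
iostat -xm 5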

As for testing the newly created plot files, I used the same process as when I started using the mad-max plotter. I extracted the plot id and memo from a bladebit plot and successfully recreated it using chiapos. Then I ran 20 million check iterations on the plot and found no significant or unexpected differences between a bladebit and a chiapos plot. The small differences were discussed at length in the Chia keybase#plotting chat. I think the fair summary is: table entries that exceed the 32-bit address range are dropped, and my understanding is that bladebit and mad-max both do that.
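
For reference, a sketch of that verification flow with the chiapos ProofOfSpace tool (flags as in the chiapos README; the id/memo values and file names are placeholders):

# recreate the plot from the id and memo extracted from the bladebit plot
./ProofOfSpace -k 32 -f "recreated.plot" -i "0x<plot id>" -m "0x<memo>" create
# run the check iterations against a plot
./ProofOfSpace -f "bladebit.plot" check 20000000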

edit: I have meanwhile noticed that using 128 threads on this system of 64 cores / 128 threads is “thrashing” the system to the point where it slows down the network data transfer. Over the days, plotting became faster than moving data to the farmer. I am going to restart the plotting process, likely with fewer threads; I don’t really know how many I need to spare for steady data transfer, so I may try 112 threads next. Meanwhile, since all buffers were full, I suspended the plotting process with kill -STOP $(pidof bladebit), waited a few hours, and resumed it with kill -CONT $(pidof bladebit). The theory that bladebit was slowing down the network data transfer was immediately confirmed: as soon as the plotting process was suspended, data transfer rates multiplied 2-3x…
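
In copy-paste form, the suspend/resume commands from above:

# suspend bladebit so the network transfers can catch up
kill -STOP "$(pidof bladebit)"
# a few hours later, resume plotting
kill -CONT "$(pidof bladebit)"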


Amazing!!

Fantastic work.

Would you mind sharing that little script you wrote? I’d love to see how you’re doing this.

Don’t mind at all, use at your own risk. I truncated the repetitive aspects of the script. Lots of hard-coded values in there, as my setup is standardized. All farmer hard drives are 18 TB, and the 5 buffers of 5.5 TB each are made of 2x 3.2 TB NVMe flash drives in RAID-0.

#!/bin/bash
log="/chia/logs/chia-plotter.log"

# Runs bladebit with the given thread count, producing $2 plots into buffer $3.
pool() {
  ./bladebit -t "$1" -n "$2" -f <farmer public key> -c <pool contract address> "$3" |& tee "$4"
}

# Counts the finished plots already sitting in the buffer and returns
# how many of the 33 plots per buffer still remain to be created.
rem_plots() {
  plots=$(find "$1" -type f -size +108700000000c | wc -l)
  rem=$((33 - plots))
  echo "$rem"
}

for i in {1..5}; do
  buff="/chia/scratch/disk01/"
  rem=$(rem_plots "$buff")
  echo "...info: $rem plots going to be created"
  if [ "$rem" -gt "0" ]; then
    # drop the page cache so bladebit starts with as much free RAM as possible
    echo 3 > /proc/sys/vm/drop_caches
    pool "128" "$rem" "$buff" "$log.disk01"
  fi

  disk="/chia/plots/disk098/"
  (rclone move "$buff" "$disk" --transfers 1 --include "*.plot" --min-size 101G) &
  echo "...info: kicked off rclone to move plots from $buff to $disk"
.
.
# And so on, 4 more blocks as above, for a total of 5 disks. Could have made it more granular...
.
.
done

This guide is going to be much appreciated :smiley:


@xorinox thanks for sharing your script, it is a very nice job you have done.
Can you help me create a script for my needs? Because I am totally not good at scripting.

I have a dual Epyc 7551 (128 threads total, 512 GB RAM) with an 800 GB SAS-3 SSD array (4x 200 GB in RAID-0).
With this RAID-0 setup I can hold 6 plots, which I want to use as a buffer disk.

I would like to have the possibility to plot e.g. 100 plots continuously with my 800 GB RAID-0 as the buffer disk, and in the meantime start rclone to the final disk whenever a plot is finished and written to the 800 GB RAID-0.
This means: transfer the plots 1 by 1 as each is finished, from buffer to final disk, without waiting for all 100 plots to finish.

I have used your script with small modifications and found out that the script blocks until the given number of plots is finished; this means rclone only kicks in when all 100 plots are made.

My question:
Is it possible to make (or modify your) script in such a way that it starts transferring the plots 1 by 1 as soon as each one is fully written to the buffer disk?

Here’s a simple script to transfer plots 1 by 1 as they are plotted.

#!/bin/bash

# Watches for newly created plots and moves them to the farmer 1 by 1.
# Requires inotify-tools: sudo apt-get install -y inotify-tools

# Directory paths
SRCDIR="/mnt/nvme1/"
DESTDIR="/mnt/storage/poolplots/"
# Limit transfer speed (KiB/s) to minimise long lookup times whilst farming.
BWLIMIT="140000"

echo "Watching for new plots in: $SRCDIR"
echo "Copying plots to: $DESTDIR"

inotifywait -m "$SRCDIR" -e create -e moved_to |
while read -r path action file; do
    if [[ "$file" =~ \.plot$ ]]; then
        echo "Found new plot!"
        echo "Copying plot to $DESTDIR"
        rsync --bwlimit="$BWLIMIT" --preallocate --remove-source-files --skip-compress=plot --whole-file -avP "$SRCDIR/$file" "$DESTDIR/$file"
        echo -e "Copy complete, waiting for new plots...\n"
    fi
done

Thanks for your help and effort, but this script isn’t working.

./inotifywait.sh
Watching for new plots in:  *******************
Copying plots to:  ********************
Setting up watches.
Watches established.

Found new plot!
Copying plot to ****************************
sending incremental file list
sent 120 bytes  received 12 bytes  264.00 bytes/sec
total size is 0  speedup is 0.00
Copy complete, waiting for new plots...

Found new plot!
Copying plot to ******************************
sending incremental file list
sent 120 bytes  received 12 bytes  264.00 bytes/sec
total size is 0  speedup is 0.00
Copy complete, waiting for new plots...

What I did:
1.) ran bladebit in a terminal, as usual
2.) ran your script in a terminal, with a little modification (added --min-size=108700000000 to the rsync line):

rsync --bwlimit="$BWLIMIT" --min-size=108700000000 --preallocate --remove-source-files --skip-compress=plot --whole-file -avP "$SRCDIR/$file" "$DESTDIR/$file"

You need not supply the min-size parameter: as of the latest bladebit release, plots are renamed from .tmp to .plot only upon completion, and the script only looks for .plot files.

Where is your destination directory? Assuming you have write permissions, and if it is not on the local machine, then you will need to supply SSH credentials to rsync.


You are totally correct. I was using an “old” version of bladebit. The dev has made some changes to how bladebit works without any mention on its GitHub page and without providing a changelog.

No, I don’t use SSH. It is all on the same machine. I don’t have problems with permissions.

But I have a question:
Which is faster (rsync or rclone) when transferring plot files 1 by 1?

@Digital thanks for your nice script for transferring plot files.

Right now I have 2 plot files finished on my buffer disk.

The 1st plot file is in the process of being transferred to the final HDD, while the 2nd plot file has been finished by bladebit and is sitting there waiting for rsync to finish the transfer of the 1st plot.

I wonder:
Isn’t it possible to run 2 or more simultaneous file transfers (parallel rsync) from buffer to final disk, without waiting for the 1st rsync to finish?

Anybody have any Bladebit experience on a dual E5 v2 system? I’m wondering what the output looks like. I’m trying to decide if it’s worth it for me to purchase another 256 GB of RAM to switch to Bladebit. Currently with Madmax I’m outputting around 50 plots per day.

I would love to know the electricity cost per plot, and the cost of the hardware to produce 220 plots/day.


I would not invest in any more plotting hardware at this point, except maybe high-endurance NVMe, but they don’t seem to be available in stores anywhere. There are GPU and ASIC plotters coming up that will make the cost of plotting very, very low. You are better off buying plots online from the cheapest providers than purchasing expensive hardware that will be useless in a few months. That is only my opinion, my 2 (chia) cents (my 2 mojos?).

Where can I find any info on the GPU and ASIC plotters? I’d like to read about them; this is the first I have heard of them. I mean, I’m mostly already set up, running a dual E5 2690 v2 with 256 GB of RAM, getting around 50-something plots per day on this system. I was thinking about giving BladeBit a go on it. Just curious what the times would be.


I wouldn’t buy more plotting hardware at present, no. Netspace is not exploding, just creeping up a little as people finish replotting to NFT-linked plots and then get back to expanding their farms gradually. My plotters can and will be sold or repurposed when a good-value ASIC/GPU solution becomes available, or maybe not, as I could just grow at 1-3 plots per day plotting on my farmer.


There is no public information about them yet, but you can be sure that they are being developed.

Managed to get 3.25-minute plot times.

Niiiiice. What’s your setup?

2x Epyc 7763, 2 TB of 3200 MHz RAM, 4 concurrent bladebit processes.