What are you using in Linux to move the plots to another device?

I'm using rsync with no compression, and the transfer speed starts at 120 MB/s and drops to 43 MB/s. This is painfully slow. Any tweaks or other tools to do this?

rsync --remove-source-files --recursive --progress --verbose --rsh 'ssh -oPort=22 -i ~/.ssh/id_rsa' /mnt/md0

You need to look at all factors:

  1. Max network throughput
  2. Bus speed of target drive
  3. Target drive write speed
  4. SSH overhead

Use iperf3 to verify throughput.
USB 2.0 max. is about 40 MB/s. How is the drive connected?
What is the brand/model of the drive?
NFS, for example, will likely be faster than SSH.

You’ll need to answer those questions and dig up the specs.
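To put rough numbers on items 1 and 3 above, here is a quick sketch. It assumes iperf3 is installed on both ends; the IP address is a placeholder, and DST defaults to a temp dir so the write test can run anywhere (point it at your real target mount instead):

```shell
# Item 1, network throughput: run "iperf3 -s" on the receiving box, then
# from the plotter (192.168.1.50 is a placeholder address):
#   iperf3 -c 192.168.1.50 -t 30
# Item 3, target drive write speed: write a test file with fsync so the
# result reflects the disk, not the page cache (larger counts give
# steadier numbers):
DST="${DST:-$(mktemp -d)}"
dd if=/dev/zero of="$DST/speedtest.bin" bs=1M count=64 conv=fsync
rm "$DST/speedtest.bin"
```

dd prints the effective MB/s on completion; if that number is near 40 MB/s, suspect the USB link before blaming rsync.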


I have mounted the final directory via NFS. I have a gigabit switch between the plotter (10th gen intel NUC) and the storage device (Synology NAS with WD red disks in JBOD). A finished plot is moved in 900s-1000s for an effective ~100MB/s transfer rate. I am not using any tool for moving the final plots, letting chia plots create handle it for me.

Hope this helps.

Hi, I started using rsync like you, with the same speed-drop problem. After that I shared the final HDDs via a Samba server (protected by username and password). Now the plotter uses a Samba client to access mounted directories linked to the final HDDs; I always mount these folders at startup.

I'm seeing similar speeds to vrinek.


A No. 2 Phillips-head screwdriver. Moved 16 TB of plots in about 2 minutes. Sometimes the fastest way is to move the drive physically.

Now… if only I had a hotswap caddy…


I've added the following to ~/.bash_aliases; then you can run mv_plots /mnt/tmp/*.plot /mnt/dst/00/ to copy the files with limited throughput, using only idle CPU/IO resources.
Note that rsync also writes to a hidden temp file during the copy (its name starts with . and doesn't end with .plot), so the harvester never sees a half-copied plot.

mv_plots() {
  local files=${@:1:$(($#-1))}
  local dst=${@: -1}
  local bwlimit=${BWLIMIT:-80M}
  echo "files: ${files}"
  echo "dst: ${dst}"
  echo "bwlimit: ${bwlimit}"
  for file in ${files}; do
    echo "mv ${file} ${dst}/"
    { \
      nice -n 19 ionice -c 3 rsync --info=progress2 --info=name0 --bwlimit=${bwlimit} ${file} ${dst}/${file##*/} && \
      rm ${file} ; \
    } || { echo " -> failed" ; return 1 ; }
  done
  return 0
}

You can override BWLIMIT for the current shell session with export BWLIMIT=100M before your mv_plots command.
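If the parameter expansions look cryptic, here is a standalone demo of how the function splits its arguments (the paths are made up):

```shell
# Simulate calling mv_plots with two source files and a destination:
set -- /mnt/tmp/a.plot /mnt/tmp/b.plot /mnt/dst/00
files=${@:1:$(($#-1))}   # everything except the last argument
dst=${@: -1}             # the last argument (the space before -1 matters)
echo "files: $files"     # files: /mnt/tmp/a.plot /mnt/tmp/b.plot
echo "dst: $dst"         # dst: /mnt/dst/00
```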


Since my post I have found that this works for me, with a sustained 111 MB/s including the write to a USB 3 HDD:

rsync -avAXEWSlHh --remove-source-files USERNAME@SOURCEIP:/SOURCELOCATION/ /DESTINATION --no-compress --info=progress2


FWIW, I have been physically moving USB connections between the plotter and farmer. I have the farmer NFS-mount the plotter's drives during plotting so that I don't lose a proof opportunity while an entire drive is being plotted. When the drive is full I unmount it from the farmer, then the plotter, then I move it and re-mount it on the farmer.

For various reasons I had a few drives on the farmer that weren't quite full. But with my 16-port USB hub, drives don't attach in the same sequence each time. So I was going to have to go through a bunch of effort to identify which physical drives were the ones I needed to move over to the plotter to fill up.

For giggles I decided to make the destination drive in MadMax an NFS mount from the farmer (mounting in the opposite direction). My plotting time went from 3,000 seconds to 3,170 seconds. The final copy-to-destination step was obviously much longer, but I didn't really care, as it wasn't any longer than it would have taken me to copy the file after I was done plotting (I got 86 MB/s average via MadMax's final copy step).

Plotter → NFS over Gb Network → Farmer → USB 3.0 Drive

I basically was willing to take the 5% plotting time hit to avoid the hassle of having to move it afterwards.


Is this still the latest suggestion to move plots across the network?

How about setting the final plot destination to another device's storage that's within the same network?

If you're using 1 Gbps networking and transferring over the same Ethernet port that you farm on, it would be a good idea to add --bwlimit=90000 or so to prevent the transfer from saturating your network traffic and causing loss of sync/stales. The rsync command I use is rsync -Pav --bwlimit=90000 --remove-source-files --skip-compress=plot /path/to/source user@


Over the network is fine as long as you can move a plot in less time than it takes to generate a plot. I currently have an NFS share exported from my harvester to my plotter. I’m using madMAx and setting finaldir to the NFS share.
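For reference, a minimal version of that setup. The hostname, subnet, and paths below are placeholders, and madMAx's -t/-d flags are its temp and final directories:

```shell
# On the harvester, export the plot directory (line in /etc/exports):
#   /mnt/plots  192.168.1.0/24(rw,sync,no_subtree_check)
# then reload the export table:
sudo exportfs -ra
# On the plotter, mount the share and use it as madMAx's final dir:
sudo mount -t nfs harvester:/mnt/plots /mnt/plots-remote
chia_plot -n 1 -t /mnt/fast/ -d /mnt/plots-remote/
```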

Before that, I would use scp (in my own wrapper script) to copy plots over the network.

Before that, I would hook an empty drive up to my plotter, fill it up, and move it over to my harvester.

Whatever works!


Thanks, @gryan315, the script worked; however, it says it's going to take 2 hrs to move a single plot over. I currently plot at 55 mins per plot T_T

That’s about 16 MB/sec. What are you writing to on the receiving end? Is it connected via USB 2.0 or something? Have you tested the speed of your network?


Do you have a 1 Gbit/s network? Then 100 mb/s is already the max.

You also need proper switches to handle that load.

That should be 100 MB/s (or rather ~112 MiB/s), right?
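The back-of-envelope arithmetic, assuming a typical ~94% TCP goodput on gigabit Ethernet (a rule of thumb, not a measurement):

```shell
echo "raw:    $((1000 / 8)) MB/s"             # 125 MB/s line rate
echo "usable: $((1000 * 94 / 100 / 8)) MB/s"  # 117 MB/s, i.e. ~112 MiB/s
```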

Those times are long gone. For a long time now, virtually every $20 1 Gbps switch has been capable of 1 Gbps full duplex, and most can carry that load on several connections at once. Although some USB-to-Ethernet dongles are real junk. The heavy lifting is done by a specialized chip that costs $0.20 or so (there's not much money to save by cheaping out, so even budget units hold up).

I would say that SSH / SCP overhead may be one of the biggest factors (especially, on the plotter side).

The receiving HDD is connected through USB 3. It's external storage.

All my devices are connected to my internet modem, all with CAT6 cables.

I don't have a switch; I was considering converting one of my older modems into a switch, but I'm not sure that would work.

I'm not sure how to check my network; I'm guessing this is a good start?: Network Throughput - What is It, How To Measure & Optimize for Speed!

@Chia.Switzerland Thanks

scp between servers, set up in its own screen session. Works like a charm. I did over 12 TB with scp (secure copy). It took a few days, but I successfully moved all the plots over the network.

All you need is Cat 6 if all your devices are 1 gigabit. Getting a better cable will not help.


You would need to write more about your setup for anyone to understand where your problems may originate. What k-value are your plots?

Either you have something really badly configured, or you have a hardware problem. Assuming your router isn't failing, it should not be a choke point, as it has basically the same switch chip as any other $20 dedicated switch (which is plenty good).

If your cables are just 1-2 m long, it really doesn't matter whether they are Cat 5, 5e, or 6 (it does matter when you are pushing 100 m). I would try rotating or replacing your cables.

Although, the first thing you may want to do is scan your /var/log files for potential errors (on both sides). The same goes for dmesg (e.g., what speed your HDs are mounted at).

On the destination box, I would run hdparm to test the HD's read/write speed. Or you could put one plot on the main HD/SSD (SATA) and push it to that USB drive, to get some idea of whether it works at full speed.

I would also just NFS-mount that HD on your plotter, so you can simply mv the plots over (though it will eat into farming bandwidth, as mentioned above).

Lastly, if all else fails, the "screwdriver" method mentioned above is a good choice (I do that). Although, I would install chia on that box and set it up as a harvester only. This way, you don't need any keys on that box (just start the setup; once the UI shows, you can kill it and use the CLI to run your harvester (almost; you still need to follow this guide)).
