What are you using in Linux to move the plots to another device?

I'm using rsync with no compression, and the transfer speed starts at 120 MB/s then drops to 43 MB/s. This is painfully slow. Any tweaks or other tools for this?

rsync --remove-source-files --recursive --progress --verbose --rsh 'ssh -oPort=22 -i ~/.ssh/id_rsa' /mnt/md0 192.168.4.46:/media/ayahni/Plots41/

You need to look at all factors:

  1. Max network throughput
  2. Bus speed of target drive
  3. Target drive write speed
  4. SSH overhead

Use iperf3 to verify throughput (quick example below).
USB 2.0 tops out around 40 MB/s in practice (480 Mbit/s theoretical). How is the drive connected?
What is the brand/model of the drive?
NFS, for example, will likely be faster than SSH.

You’ll need to answer those questions and dig up the specs.
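A quick iperf3 check looks something like this (the IP below is just the destination from the rsync command above; substitute your own hosts):

# on the receiving machine (NAS/farmer): start an iperf3 server
iperf3 -s

# on the plotter: run a throughput test against it
# (192.168.4.46 is the destination IP from the rsync command above)
iperf3 -c 192.168.4.46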


I have mounted the final directory via NFS. I have a gigabit switch between the plotter (10th-gen Intel NUC) and the storage device (a Synology NAS with WD Red disks in JBOD). A finished plot is moved in 900-1000 seconds, for an effective transfer rate of ~100 MB/s. I am not using any tool to move the final plots; I let chia plots create handle it for me.
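For anyone wanting to replicate this, the mount is something like the sketch below; the NAS IP and export path are placeholders (Synology typically exports shared folders under /volume1):

# on the plotter: mount the NAS share to use as the final plot directory
# (192.168.4.10 and /volume1/plots are placeholder values)
sudo mount -t nfs 192.168.4.10:/volume1/plots /mnt/final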

Hope this helps.

Hi, I started using rsync like you and hit the same speed-drop problem. After that I shared the final HDDs via a Samba server (protected by username and password). Now the plotter uses a Samba client to access mounted directories linked to the final HDDs; I always mount these folders at startup.
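A minimal sketch of that mount-at-startup setup via /etc/fstab, assuming a placeholder host, share name, and credentials file:

# /etc/fstab — mount the Samba share at boot (host, share, and paths are placeholders)
//192.168.4.20/plots  /mnt/final  cifs  credentials=/etc/samba/creds,uid=1000,_netdev  0  0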

I'm getting similar speeds to vrinek.

Regards.

A No. 2 Phillips-head screwdriver. Moved 16 TB of plots in about 2 minutes. Sometimes the fastest way is to move the drive physically.

Now… if only I had a hotswap caddy…


I've added the following to ~/.bash_aliases; then you can run mv_plots /mnt/tmp/*.plot /mnt/dst/00/ to move the files with limited throughput, using only idle CPU/IO resources.
rsync also writes the incoming file as a hidden temp file (it starts with . and does not end in .plot), so the harvester never picks up a partial plot.

mv_plots() {
  # everything except the last argument is a source file
  local files=${@:1:$(($#-1))}
  # the last argument is the destination directory
  local dst=${@: -1}
  # strip a trailing slash from the destination
  dst=${dst%/}
  # bandwidth cap for rsync; override with BWLIMIT in the environment
  local bwlimit=${BWLIMIT:-80M}
  echo "files: ${files}"
  echo "dst: ${dst}"
  echo "bwlimit: ${bwlimit}"
  for file in ${files}; do
    echo "mv ${file} ${dst}/"
    # copy at lowest CPU/IO priority; delete the source only if rsync succeeds
    { \
      nice -n 19 ionice -c 3 rsync --info=progress2 --info=name0 --bwlimit=${bwlimit} ${file} ${dst}/${file##*/} && \
      rm ${file} ; \
    } || { echo " -> failed" ; return 1 ; }
  done
  return 0
}

You can override BWLIMIT for the current shell session with an export BWLIMIT=100M before your mv_plots command.
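For example:

export BWLIMIT=100M
mv_plots /mnt/tmp/*.plot /mnt/dst/00/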


Since my post I have found that this works for me, with a sustained 111 MB/s including the write to a USB 3 HDD:

rsync -avAXEWSlHh --remove-source-files USERNAME@SOURCEIP:/SOURCELOCATION/ /DESTINATION --no-compress --info=progress2

FWIW, I have been physically moving USB connections between the plotter and the farmer. I have the farmer NFS-mount the plotter's drives during plotting so that I don't lose a proof opportunity while an entire drive is being filled with plots. When the drive is full I unmount it from the farmer, then from the plotter, then I move it and re-mount it on the farmer.
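Sketched out, with a hypothetical /mnt/usb1 mount point and the subnet from earlier in the thread (replace <plotter-ip> with the plotter's address):

# on the plotter: export the drive to the farmer (/etc/exports), then reload
/mnt/usb1 192.168.4.0/24(ro)
sudo exportfs -ra

# on the farmer: mount it so the harvester keeps farming the finished plots
sudo mount -t nfs <plotter-ip>:/mnt/usb1 /mnt/usb1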

For various reasons I had a few drives on the farmer that weren't quite full. But with my 16-port USB hub, drives don't attach in the same sequence each time, so I was going to have to go through a bunch of effort to identify which physical drives were the ones I needed to move over to the plotter to fill up.
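(One way to sidestep the attach-order problem, though it's not what I did here, is to identify drives by serial number or filesystem label rather than by sdX device name; for example:)

# list drives with stable identifiers instead of relying on sdX order
lsblk -o NAME,SIZE,SERIAL,LABEL,MOUNTPOINT

# persistent per-device symlinks also survive re-enumeration
ls -l /dev/disk/by-id/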

For giggles I decided to make the destination drive for MadMax an NFS mount on the farmer (mounting in the opposite direction). My plotting time went from 3,000 seconds to 3,170 seconds. The final step of copying to the destination was obviously much longer, but I didn't really care, as it wasn't any longer than it would take me to copy the file after I was done plotting (I averaged 86 MB/s via MadMax's final copy step).

Plotter → NFS over Gb Network → Farmer → USB 3.0 Drive
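Concretely, that just means pointing MadMax's destination at the NFS mount; a sketch with placeholder hosts and paths (-t and -d are MadMax's temp and destination flags):

# on the plotter: mount the farmer's drive over NFS (placeholder host/paths)
sudo mount -t nfs <farmer-ip>:/mnt/usb1 /mnt/farmer_plots

# have MadMax write the finished plot straight to the farmer
chia_plot -t /mnt/nvme/ -d /mnt/farmer_plots/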

I was basically willing to take the 5% plotting-time hit to avoid the hassle of having to move plots afterwards.