Looking for a Linux script to move plots from an M.2 drive to 3/4/5 SATA HDDs

Ubuntu/BladeBit.
I'm stuck on the problem of how NOT to move a file that has already started (but not yet finished) moving to another HDD.
The simplest idea seems to be plotting into different M.2 sub-folders 1/2/3/4 (via a bash script, one sub-folder per destination) and then running scripts that move the files one by one, each directly from its sub-folder to its destination HDD.
That would avoid all the mess of checking whether a .plot is currently copying or not, finished or not, new or old, etc.
I'd appreciate any ideas/suggestions.
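
For what it's worth, the usual safeguard here is that plotters like madMAx's (and, as far as I know, Bladebit) write to a temporary name such as *.plot.tmp and only rename to *.plot once the file is complete, so a mover that matches only *.plot should never pick up a half-written file. A minimal sketch of that idea (paths are hypothetical):

# only touch finished plots: the plotter renames *.plot.tmp -> *.plot
# when done, so this glob never matches a file still being written
find /mnt/m2 -name "*.plot" -exec mv -v {} /mnt/hdd1/ \;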

chia-plot-mover is what I used.
If you are doing compressed plots you will have to modify mover.py under the src directory:
on line 15, change 105 to fit your finished plot size; it must be lower than the finished plot size.
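
If you prefer to patch that from the shell, a sed one-liner like this works (the new value 80 is only an example; pick something below your compressed plot size in GB):

# replace the 105 on line 15 of src/mover.py with 80 (example value)
sed -i '15s/105/80/' src/mover.py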


Looks like you saved a few weeks of my life! :)

You could also try Max's plot sink / plot copy programs. Available for Windows or Linux.

chia-gigahorse/plot-sink at master · madMAx43v3r/chia-gigahorse (github.com)

I’m using them right now to move plots off of an external drive to a bank of other external drives. Works over a local network or on the same machine. Will do parallel copies to as many drives as the “sink” program is set up to do. Optionally deletes files after they are copied.
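
For reference, the invocation is roughly like this (from memory of the repo's README, so check --help for the exact flags; -d on the copy side is the optional delete-after-copy):

# receiver: distributes incoming files across the listed drives
./chia_plot_sink -- /mnt/disk0/ /mnt/disk1/ /mnt/disk2/
# sender: copies local files to the sink, deleting each one once copied
./chia_plot_copy -d -t localhost -- /mnt/source/*.plot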

Thank you VERY much.

It doesn't work on new Python versions, though.

#!/bin/bash

# sync.sh: move finished plots from src-dir to dst-dir, one at a time.
if [ "$#" -ne 2 ]; then
  echo "usage: ./sync.sh src-dir dst-dir"
  exit 1
fi

src=$1
dst=$2
# for reference: a k32 plot is ~110 GB, i.e. ~110000000 1K blocks as reported by 'df'

# free space on the destination, in 1K blocks
function df_dst {
  df "$dst" --output=avail | awk 'NR>1'
}

while true
do
  files=$(find "$src" -name "*.plot")

  for i in $files
  do
    pl_size=$(du "$i" | cut -f 1)   # plot size in 1K blocks, same unit as df
    if (( $(df_dst) >= pl_size )); then
      ### rsync is slower here, 250-380 MB/s:
      #rsync -v --progress --remove-source-files --preallocate --whole-file "$i" "$dst"
      ### mv gets 500-750 MB/s (single-file move, one destination at a time):
      mv -v "$i" "$dst"
    else
      echo "no free space"
      exit 1
    fi
  done

  sleep 10   # don't spin while waiting for the plotter to finish the next plot
done

My version of a simple script… I never figured out how to "wait" until a plot is done across multiple instances, so I created /tmp/c1 /tmp/c2 /tmp/c3 and then ran the script 3x in parallel.

It is a kludge, but it worked well. If you have multiple drives, simply create a striped JBOD and cut out the middleman. I currently plot directly to 4x 18-20 TB drives with BB v3.1.0, ~4.8 min per plot @ 250 W.
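
A minimal sketch of the striped-array idea with mdadm (device names /dev/sd[b-e] are hypothetical, and remember that with striping one dead drive loses the whole array, which may be acceptable for plots you can re-create):

# stripe four drives into one target, format it, and mount it
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo mkfs.xfs /dev/md0
sudo mount /dev/md0 /mnt/plots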

Does it really work with BladeBit?

It is a file copy utility that copies any file(s), distributing them to the dirs the sink uses. Doesn’t have to be plots, or even on a system in the same time zone as a Chia farmer. I just made 64 random text files and moved them with plot sink/plot copy, no problems.
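
(0.015625 GiB is 16 MiB, so test files like those below can be generated with something as simple as:)

# 64 random 16 MiB files named 101.file .. 164.file
for i in $(seq 101 164); do dd if=/dev/urandom of=/tmp/$i.file bs=1M count=16 status=none; done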

Starting to copy /tmp/153.file ...
Finished copy of /tmp/113.file (0.015625 GiB) took 0.057 sec, 280.702 MB/s
Finished copy of /tmp/146.file (0.015625 GiB) took 0.058 sec, 275.862 MB/s
Starting to copy /tmp/147.file ...
Finished copy of /tmp/127.file (0.015625 GiB) took 0.059 sec, 271.186 MB/s
Starting to copy /tmp/114.file ...
Starting to copy /tmp/128.file ...
Finished copy of /tmp/159.file (0.015625 GiB) took 0.056 sec, 285.714 MB/s
Finished copy of /tmp/141.file (0.015625 GiB) took 0.054 sec, 296.296 MB/s
Finished copy of /tmp/120.file (0.015625 GiB) took 0.057 sec, 280.702 MB/s
Starting to copy /tmp/121.file ...
Finished copy of /tmp/153.file (0.015625 GiB) took 0.054 sec, 296.296 MB/s
Finished copy of /tmp/134.file (0.015625 GiB) took 0.056 sec, 285.714 MB/s
Finished copy of /tmp/147.file (0.015625 GiB) took 0.039 sec, 410.256 MB/s
Starting to copy /tmp/135.file ...
Finished copy of /tmp/114.file (0.015625 GiB) took 0.054 sec, 296.296 MB/s
Finished copy of /tmp/128.file (0.015625 GiB) took 0.052 sec, 307.692 MB/s
Finished copy of /tmp/121.file (0.015625 GiB) took 0.034 sec, 470.588 MB/s
Finished copy of /tmp/135.file (0.015625 GiB) took 0.034 sec, 470.588 MB/s
Finished copy of /tmp/102.file (0.015625 GiB) took 0.833 sec, 19.2077 MB/s
Starting to copy /tmp/103.file ...
Finished copy of /tmp/103.file (0.015625 GiB) took 0.553 sec, 28.9331 MB/s
Starting to copy /tmp/104.file ...
Finished copy of /tmp/104.file (0.015625 GiB) took 0.013 sec, 1230.77 MB/s
Starting to copy /tmp/105.file ...
Finished copy of /tmp/105.file (0.015625 GiB) took 0.013 sec, 1230.77 MB/s
Starting to copy /tmp/106.file ...
Finished copy of /tmp/106.file (0.015625 GiB) took 0.011 sec, 1454.55 MB/s
Starting to copy /tmp/107.file ...
Finished copy of /tmp/107.file (0.015625 GiB) took 0.013 sec, 1230.77 MB/s
Started copy to /chia/sdd/spacefarmers/159.file (0.015625 GiB)
Finished copy to /chia/sde/spacefarmers/146.file, took 0.022 sec, 727.273 MB/s
Waiting for previous copy to finish or more space to become available ...
Started copy to /chia/sde/spacefarmers/141.file (0.015625 GiB)
Finished copy to /chia/sdf/spacefarmers/127.file, took 0.022 sec, 727.273 MB/s
Waiting for previous copy to finish or more space to become available ...
Started copy to /chia/sdf/spacefarmers/120.file (0.015625 GiB)
Finished copy to /chia/sdd/spacefarmers/159.file, took 0.019 sec, 842.105 MB/s
Waiting for previous copy to finish or more space to become available ...
Started copy to /chia/sdd/spacefarmers/134.file (0.015625 GiB)
Finished copy to /chia/sde/spacefarmers/141.file, took 0.018 sec, 888.889 MB/s
Waiting for previous copy to finish or more space to become available ...
Started copy to /chia/sde/spacefarmers/153.file (0.015625 GiB)
Finished copy to /chia/sdf/spacefarmers/120.file, took 0.018 sec, 888.889 MB/s
Waiting for previous copy to finish or more space to become available ...
Started copy to /chia/sdf/spacefarmers/147.file (0.015625 GiB)
Finished copy to /chia/sde/spacefarmers/153.file, took 0.017 sec, 941.176 MB/s
Waiting for previous copy to finish or more space to become available ...
Started copy to /chia/sde/spacefarmers/114.file (0.015625 GiB)
Finished copy to /chia/sdd/spacefarmers/134.file, took 0.021 sec, 761.905 MB/s
Waiting for previous copy to finish or more space to become available ...
Started copy to /chia/sdd/spacefarmers/128.file (0.015625 GiB)
Finished copy to /chia/sdf/spacefarmers/147.file, took 0.02 sec, 800 MB/s
Waiting for previous copy to finish or more space to become available ...
Started copy to /chia/sdf/spacefarmers/121.file (0.015625 GiB)
Finished copy to /chia/sde/spacefarmers/114.file, took 0.019 sec, 842.105 MB/s
Started copy to /chia/sde/spacefarmers/135.file (0.015625 GiB)
Finished copy to /chia/sdd/spacefarmers/128.file, took 0.018 sec, 888.889 MB/s
Finished copy to /chia/sdf/spacefarmers/121.file, took 0.017 sec, 941.176 MB/s
Finished copy to /chia/sde/spacefarmers/135.file, took 0.02 sec, 800 MB/s
Finished copy to /chia/sdc/spacefarmers/102.file, took 0.786 sec, 20.3562 MB/s

OP asked for a script to move files from a drive to a set of other drives. Nothing specific to Bladebit.

Gigahorse does support plot-sink as a destination, and I doubt that Bladebit does, but the original question wasn’t asking for Bladebit integration as far as I could see.

The optimal technical solution would probably be something with rsync so you have resumability of copies, but that’s more complicated.
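
A hedged sketch of what that could look like (paths hypothetical; --partial keeps a half-copied file so an interrupted transfer resumes instead of restarting, and --remove-source-files turns the copy into a move):

rsync --partial --append-verify --whole-file --progress --remove-source-files \
      /mnt/m2/plots/*.plot /mnt/hdd1/plots/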

Do you simply use destination @localhost where plot_sink runs in the background?

I've always wondered; I never tried to use bladebit_cuda -d @localhost with plot_sink.

Hello!

This is not finished yet, but we are implementing staging directories in our Bladebit wrapper.

It will allow you to plot on a fast disk; another thread will copy the plots to the directories in the configuration while Bladebit keeps running.

It's not yet merged to master. The main goal of this wrapper is to copy with concurrent processes and keep plotting by calculating the remaining space.

We are on a 10G network; that should be taken into consideration.

Dunno if that could help.

Sorry, I tried your script, but it makes the same mistake I want to avoid.
If I have a .plot on my SSD and 3 destination HDDs, and I start 3 copies of your script with different destinations, they all immediately try to copy the same first file to all 3 destinations.
What am I doing wrong?

Can I see your config.yaml for chia-plot-mover?
Edit: and can I see your paths from the Ubuntu command "df -h"?
It might also be a permission error; try sudo sh start.sh.

chia-plot-mover will always choose the drive with the most free space; it will never write to the same disk twice at the same time.
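
That selection logic is easy to replicate in shell if you ever roll your own mover (a sketch, assuming the destinations are mounted under /mnt/hdd*):

# pick the mount point with the most free space as the next destination
dst=$(df --output=avail,target /mnt/hdd* | awk 'NR>1' | sort -rn | head -1 | awk '{print $2}')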

Huh! After a few hours of battling with scripts I got chia-plot-mover running! (Without a virtual environment, but that's because I'm not too familiar with bash scripts.)
As I understand it, I need to manage start.sh:
"#!/usr/bin/env bash
source .venv/bin/activate
python3 index.py"
The line source .venv/bin/activate generates the error ./start.sh: 2: source: not found.
But it's working fine! (Ukrainian programmers are very good!) :)
Could you please compare my copy speed for M.2 ext4 -> HDD (SATA, NTFS) with yours?
".plot moved, time: 589.1 s, avg speed: 138.0 MiB/s"
Looks like I need to install ntfs-3g; 140 MiB/s doesn't look too fast.
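
(Side note: that "source: not found" error appears when start.sh is run with plain sh, i.e. dash on Ubuntu, which lacks the bash-only source builtin. Either of these avoids it:)

# run the script with bash so 'source' exists ...
bash start.sh
# ... or make it POSIX-friendly by using '.' instead of 'source'
. .venv/bin/activate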

The speed will differ from mine; I'm using SAS drives formatted to XFS, but on average I was getting 160 MiB/s.
Time-wise it will depend on your compression level, i.e. the file size.
But glad you got it working.


140-150 is roughly what I have. I'll play with ntfs-3g and see.
Thanks a lot for your advice! It really helps.

Because you missed the last part of my post ;) I tried to keep it simple, but it became too complex to program with iowaits and so on…

I had a temp disk where I created two subdirs… or more, depending on how many target drives.

Then you run the script (I call it sync.sh):

screen -d -m ./sync.sh /c-tmp/c1 /dest_hdd1
screen -d -m ./sync.sh /c-tmp/c2 /dest_hdd2
screen -d -m ./sync.sh /c-tmp/c3 /dest_hdd3

Otherwise, each instance will copy the same first plot to all your dest drives until something crashes, because one drive may be faster and finish first / erase the plot :D


Copy speeds are usually crap with the NTFS kludge drivers.

Do not use ext4 for the temp disk.

Use XFS mounted with nodiscard.
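
For example (the NVMe device name is hypothetical; adjust to your temp drive):

sudo mkfs.xfs -f /dev/nvme0n1
sudo mount -o nodiscard /dev/nvme0n1 /mnt/m2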

When I use a single drive: outer tracks of the disk 220-270 MB/s; inner tracks / disk nearly full 120-150 MB/s… usually ~150 MB/s on modern drives.

Noted with thanks!
P.S. chia-plot-mover is a very nice solution, btw. It works with my Python 3.10.

It doesn't work any more from Python 3.11 on, if I recall… they use some obsolete Python modules.

I formatted the M.2 as XFS, as you suggested. Now:
INFO:Copy thread: Plot file /mnt/m22/XXX.plot moved, time: 505.2 s, avg speed: 161.0 MiB/s
161!!! It was 130-140! Great! Thanks a lot!
And less than 4 min per plot generation time (was 4+ min).
XFS rules!!!
