Performance issues: 5900X, 2x 1.6TB P4610

Hi, I think I am not maximizing the plotting performance of my current rig and I’m looking for a little bit of help.
Rig specs:
Ubuntu 20.04, AMD 5900X, 64GB RAM, 2x 1.6TB P4610 (not in RAID), 8x Toshiba 7200RPM destination drives. Currently pushing out 3.38TiB per day; I think I should be closer to 4-4.5?

I think that my issue is some misunderstanding of my CLI config, which is as below:

screen -d -m -S chia1 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 0h && chia plots create -k 32 -b 3500 -e -r 12 -u 128 -n 128 -t /mnt/ssd1/temp1 -2 /mnt/ssd1 -d /mnt/hdd1 |tee /home/jason/chialogs/chia1_1_.log'
screen -d -m -S chia2 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 1h && chia plots create -k 32 -b 3500 -e -r 12 -u 128 -n 128 -t /mnt/ssd1/temp2 -2 /mnt/ssd1 -d /mnt/hdd2 |tee /home/jason/chialogs/chia2_1_.log'
screen -d -m -S chia3 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 2h && chia plots create -k 32 -b 3500 -e -r 12 -u 128 -n 128 -t /mnt/ssd1/temp3 -2 /mnt/ssd1 -d /mnt/hdd3 |tee /home/jason/chialogs/chia3_1_.log'
screen -d -m -S chia4 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 3h && chia plots create -k 32 -b 3500 -e -r 12 -u 128 -n 128 -t /mnt/ssd1/temp4 -2 /mnt/ssd1 -d /mnt/hdd4 |tee /home/jason/chialogs/chia4_1_.log'
screen -d -m -S chia5 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 4h && chia plots create -k 32 -b 3500 -e -r 12 -u 128 -n 128 -t /mnt/ssd2/temp5 -2 /mnt/ssd2 -d /mnt/hdd5 |tee /home/jason/chialogs/chia5_1_.log'
screen -d -m -S chia6 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 5h && chia plots create -k 32 -b 3500 -e -r 12 -u 128 -n 128 -t /mnt/ssd2/temp6 -2 /mnt/ssd2 -d /mnt/hdd6 |tee /home/jason/chialogs/chia6_2_.log'
screen -d -m -S chia7 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 6h && chia plots create -k 32 -b 3500 -e -r 12 -u 128 -n 128 -t /mnt/ssd2/temp7 -2 /mnt/ssd2 -d /mnt/hdd7 |tee /home/jason/chialogs/chia7_2_.log'
screen -d -m -S chia8 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 7h && chia plots create -k 32 -b 3500 -e -r 12 -u 128 -n 128 -t /mnt/ssd2/temp8 -2 /mnt/ssd2 -d /mnt/hdd8 |tee /home/jason/chialogs/chia8_2_.log'

I thought that -r 12 would max out my CPU and writes, but this means I’m actually only running 4 plots in parallel per temp drive, splitting the work across the 2 SSDs. Should I add another few plots and decrease -r, just staggering them to repeat on a few of the drives?

Help! :slight_smile: Bonus question: how can I gracefully shut down my current plotting on Ubuntu, so as not to lose in-progress plots?
Final question: when I restart, should I reduce -n to the number of plots that fit in the space left over, or will it just fail once the drive is full anyway?
I know this is a lot of help to ask for as a noob, but any guidance would be appreciated.
Thanks!

Only phase 1 is multithreaded, and there are diminishing returns from -r higher than 6. Running 8 parallel plots with -r 12 asks for 96 threads on a 24-thread CPU if you don’t stagger enough. You can oversubscribe cores, but not like this :crazy_face:

3.38TiB is about 3.7TB, and I am getting 3.5TB with 32GB RAM and a 3900X on Windows with a slightly faster NVMe. I don’t think 4.5TiB (~5TB) is realistic with your build.

I would also go for -r 4 and 6 parallel plots per drive, so 12 in total, to match your CPU. Give them 4000MiB of RAM and a stagger of 30-40 minutes. If you can, RAID0 your Intels and use xfs.
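
If you go the RAID0 route, a minimal sketch for Ubuntu is below. The device names /dev/nvme0n1 and /dev/nvme1n1 are assumptions, so verify yours with lsblk first, and note this wipes both drives:

# Assumed NVMe device names -- check with lsblk. This destroys existing data!
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
sudo mkfs.xfs /dev/md0
sudo mkdir -p /mnt/ssdRAID
sudo mount /dev/md0 /mnt/ssdRAID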


To add to @tr0x’s response: take out the -e flag. It disables bitfield plotting, which throws a different set of limits onto the system.


In addition to what has been said -

Your use of the -2 option isn’t really doing anything from a performance perspective. It locates the construction file for the plot in a different directory, but on the same device as the other tmp files. From this location there is a file copy to your -d HDD after the file has been constructed. This copy typically takes about 10 minutes (-k32) to a HDD (assuming one file being copied; slower if parallel copies are occurring).

You can avoid this file copy by setting the -2 option to the same value as the -d option, after which it will simply rename the file rather than performing a copy/move. Another option is to use a destination directory on your SSD (again, set -2 and -d to the same location, but on the SSD) to avoid writing to the HDD in the plot loop, then use a separate script to move the file to the HDD location. This removes all writes to the HDD from the plotting loop. Others have used a second SSD as a staging destination from which they script the move to the HDD.
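
If you try that staging approach, a minimal mover sketch is below; the staging and destination paths are assumptions, so adapt them to your mounts. Finished plots end in .plot, so the glob never touches in-progress .tmp files:

#!/bin/bash
# Poll an SSD staging dir and move finished plots to a HDD (paths assumed).
STAGE=/mnt/ssdRAID/done
DEST=/mnt/hdd1
while true; do
  for f in "$STAGE"/*.plot; do
    [ -e "$f" ] || continue   # skip when the glob matches nothing
    mv "$f" "$DEST"/
  done
  sleep 60
done

Moving to a temporary name and renaming at the end would be safer still if a harvester is already watching the destination directory.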

Thanks for the quick response. So general practice would be to choose 2 destination drives and fill them up first, then go on to the next? Given your feedback (and once I figure out how to RAID my SSDs in Ubuntu), my 2.0 script should look something like this:

screen -d -m -S chia1 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 35m && chia plots create -k 32 -b 4000 -e -r 4 -u 128 -n 128 -t /mnt/ssdRAID/temp1 -2 /mnt/ssdRAID -d /mnt/hdd1 |tee /home/jason/chialogs/chia1_RAID_.log'
screen -d -m -S chia2 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 35m && chia plots create -k 32 -b 4000 -e -r 4 -u 128 -n 128 -t /mnt/ssdRAID/temp2 -2 /mnt/ssdRAID -d /mnt/hdd1 |tee /home/jason/chialogs/chia2_RAID_.log'
screen -d -m -S chia3 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 35m && chia plots create -k 32 -b 4000 -e -r 4 -u 128 -n 128 -t /mnt/ssdRAID/temp3 -2 /mnt/ssdRAID -d /mnt/hdd1 |tee /home/jason/chialogs/chia3_RAID_.log'
screen -d -m -S chia4 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 35m && chia plots create -k 32 -b 4000 -e -r 4 -u 128 -n 128 -t /mnt/ssdRAID/temp4 -2 /mnt/ssdRAID -d /mnt/hdd1 |tee /home/jason/chialogs/chia4_RAID_.log'
screen -d -m -S chia5 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 35m && chia plots create -k 32 -b 4000 -e -r 4 -u 128 -n 128 -t /mnt/ssdRAID/temp5 -2 /mnt/ssdRAID -d /mnt/hdd1 |tee /home/jason/chialogs/chia5_RAID_.log'
screen -d -m -S chia6 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 35m && chia plots create -k 32 -b 4000 -e -r 4 -u 128 -n 128 -t /mnt/ssdRAID/temp6 -2 /mnt/ssdRAID -d /mnt/hdd1 |tee /home/jason/chialogs/chia6_RAID_.log'
screen -d -m -S chia7 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 35m && chia plots create -k 32 -b 4000 -e -r 4 -u 128 -n 128 -t /mnt/ssdRAID/temp7 -2 /mnt/ssdRAID -d /mnt/hdd2 |tee /home/jason/chialogs/chia7_RAID_.log'
screen -d -m -S chia8 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 35m && chia plots create -k 32 -b 4000 -e -r 4 -u 128 -n 128 -t /mnt/ssdRAID/temp8 -2 /mnt/ssdRAID -d /mnt/hdd2 |tee /home/jason/chialogs/chia8_RAID_.log'
screen -d -m -S chia9 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 35m && chia plots create -k 32 -b 4000 -e -r 4 -u 128 -n 128 -t /mnt/ssdRAID/temp9 -2 /mnt/ssdRAID -d /mnt/hdd2 |tee /home/jason/chialogs/chia9_RAID_.log'
screen -d -m -S chia10 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 35m && chia plots create -k 32 -b 4000 -e -r 4 -u 128 -n 128 -t /mnt/ssdRAID/temp10 -2 /mnt/ssdRAID -d /mnt/hdd2 |tee /home/jason/chialogs/chia10_RAID_.log'
screen -d -m -S chia11 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 35m && chia plots create -k 32 -b 4000 -e -r 4 -u 128 -n 128 -t /mnt/ssdRAID/temp11 -2 /mnt/ssdRAID -d /mnt/hdd2 |tee /home/jason/chialogs/chia11_RAID_.log'
screen -d -m -S chia12 bash -c 'cd /home/jason/chia-blockchain && . ./activate && sleep 35m && chia plots create -k 32 -b 4000 -e -r 4 -u 128 -n 128 -t /mnt/ssdRAID/temp12 -2 /mnt/ssdRAID -d /mnt/hdd2 |tee /home/jason/chialogs/chia12_RAID_.log'

Maybe I can keep the destination dirs spread across all the different HDDs to help alleviate any stuck-waiting-on-a-copy issues.
Make sense?

And I missed the second response. Thanks for the input about -2; I will modify the above script accordingly. Not familiar enough with scripting on Ubuntu to do the separate copy yet, more learning to do! Thanks

As @Blueoxx mentioned, drop the -e.

And I am not sure about the sleep 35m you put on every line; shouldn’t it be like your first script? I am no bash guy, but I think it should be sleep 35m [...] sleep 70m [...] sleep 105m and so on. The first one also doesn’t need a sleep, imo.
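
If hand-editing each line gets tedious, a loop like the sketch below computes the stagger for you. It reuses the paths and flags from your script (with -e dropped, as suggested); alternating -d between hdd1 and hdd2 is left to you:

# Generate the staggered launches; the first job gets sleep 0m.
for i in $(seq 1 12); do
  screen -d -m -S chia$i bash -c "cd /home/jason/chia-blockchain && . ./activate && sleep $(( (i-1) * 35 ))m && chia plots create -k 32 -b 4000 -r 4 -u 128 -n 128 -t /mnt/ssdRAID/temp$i -2 /mnt/ssdRAID -d /mnt/hdd1 | tee /home/jason/chialogs/chia${i}_RAID_.log"
done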

You can also alternate hdd1 and hdd2 with every plot you start. It gives no speed improvement, but if one plot somehow takes longer than 35 minutes to copy, they have a bit more time to finish.


Perfect, thanks (good catch on the sleep)! Any tips on how to exit the current plotting gracefully, other than just watching and killing jobs as they finish?

If you want to optimize, stop using the Chia Decentral script and use Plotman or another similar tool.
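
For reference, a rough plotman.yaml sketch; the key names below are from memory of plotman’s sample config, so treat them as assumptions and check against the default_plotman.yaml that ships with the tool:

directories:
  log: /home/jason/chialogs
  tmp:
    - /mnt/ssdRAID
  dst:
    - /mnt/hdd1
    - /mnt/hdd2
scheduling:
  tmpdir_stagger_phase_major: 2
  tmpdir_stagger_phase_minor: 1
  tmpdir_max_jobs: 12
  global_max_jobs: 12
  global_stagger_m: 35
  polling_time_s: 20
plotting:
  k: 32
  n_threads: 4
  n_buckets: 128
  job_buffer: 4000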

You also do not need 12 threads. 4, maybe 6 max.

If you have that many drives, then use a different drive for -2

There’s no graceful way with your setup. Just kill them.

You are correct. It’ll go through each line right when the script starts, so you want to increment by 35 minutes on each line.

I’m rocking P4510s and have found them to perform better in RAID than individually.

If you want to keep using them individually, you could run sets of 2 plots started at the same time, one on each disk.

Agree with this, though 30 minutes may be too short a stagger, depending on your per-plot times.

Before you kill them as @roybot suggests, wait a moment to let me type out a way to kill your jobs right as the current plots finish :wink:

For each of your screens:

  1. Find the process ID of that screen (the number before .chiaN in screen -r)
  2. Find the ID of the last started plot in that screen (below the last Starting plotting progress into temporary dirs:[...] line in your logs)
  3. Find the plot-xxx.plot.2.tmp file with that ID in your -2 directory
  4. inotifywait -e delete_self /path/to/plot-xxx.plot.2.tmp && kill screen_pid &

This will kill the screen as soon as the second temporary file is deleted (i.e. when that plot is done). Don’t forget the & at the end, or you’ll block your terminal until you Ctrl+C or the plot finishes. inotifywait is in the inotify-tools package on Ubuntu.
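
Put together, with placeholder values (12345 stands in for the screen PID from step 1, and the filename comes from step 3):

# Placeholder PID and filename; substitute the real ones from steps 1-3.
inotifywait -e delete_self /mnt/ssdRAID/plot-xxx.plot.2.tmp && kill 12345 &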


Ok, you’re right. I was assuming they didn’t have any completed .tmp files for transfer :sheep:


OK, that’s just awesome, thanks! Thanks everyone for the tips and tricks. I’ll have a look at Plotman, another TODO that moved up the list. :slight_smile:

As if on cue, ChiaDecentral has a video on Plotman. I read the help, but the video was some welcome extra hand-holding. Plotman is so much better than scripts. Keen to get this going tonight. Thanks everyone!
Will post my results post-optimisation.

I have a system with a similar setup.

3900X, 64GB RAM, 2x 1.6TB Micron 7300 MAX, Ubuntu.

Start 12 plots using two threads each and default RAM. When those finish phase 1, start five more plots. Rinse and repeat; it should pump out 4.4-4.7TiB a day.
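
If you’re scheduling by hand like this, a quick way to count how many jobs have cleared phase 1, assuming the log paths from the scripts above and the plotter’s usual “Time for phase 1” log line:

# Count logs containing the phase-1 completion line.
grep -l "Time for phase 1" /home/jason/chialogs/*.log | wc -l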

Thanks, I’ll try that too. The improvements so far got me up to 4.15TiB; will see where it lands by the end of the day.
Thanks to all who helped.

@legcramp are you using Plotman too?


Not yet. I am at my PC working all day, so I just run the next plot manually. I do plan to use Plotman or one of the alternatives soon, though.