Plot creation time on HDD really slow?

Hi guys.
CPU: Xeon X5660 (6 cores, 12 threads), 50 GB RAM.

Windows server.

Plotting on an NVMe disk (4 threads, 4 GB RAM) took around 9 hours.
Plotting (1 plot per disk, 4 threads, 4 GB RAM) on two different HDDs (USB 3.0) took around 24 hours or more (the target disk is a separate disk).

I see that the HDDs still sit at around 100% utilisation (CPU and RAM are below 50%).

But I saw comments that people create plots on HDDs much faster?

Is there anything I can do to speed up plot time on HDDs?
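
For reference, this is roughly how I start each job (a sketch only, assuming the standard chia 1.x CLI is on the PATH; the paths and the 4000 MiB buffer value are placeholders for my temp and destination disks, check "chia plots create --help" for the exact flags):

    # Rough sketch of one plot job with the settings mentioned above
    # (4 threads, ~4 GB RAM buffer). Paths are placeholders for my disks.
    import subprocess

    TEMP_DIR = r"D:\chia-temp"    # temp dir on the HDD doing the plotting (placeholder)
    DEST_DIR = r"E:\plots"        # final plot directory on a different disk (placeholder)

    subprocess.run([
        "chia", "plots", "create",
        "-k", "32",       # k32 plot
        "-r", "4",        # 4 threads
        "-b", "4000",     # RAM buffer in MiB (~4 GB)
        "-t", TEMP_DIR,
        "-d", DEST_DIR,
        "-n", "1",        # one plot per invocation
    ], check=True)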

Yes, this is expected; hard drives are much slower than NVMe and SATA SSDs.


Do you think it would be worth building a RAID 0 from 2 HDDs to speed this up?
I have an onboard hardware RAID controller, but currently all my disks are full with plots, so it's hard to check :wink:

Yes, definitely! A RAID array of spinning-rust hard drives is a great idea: a way to build plots faster that doesn't burn through NVMe cells.


I don't know your situation and don't want to take this out of context, but rather than watching NVMe and HDD performance, the main factor is the rig (a server in your case). How much time do you need to reach 32%? (Up to 31% it's all about CPU performance; after that it's RAM and SSD/HDD performance.)

Hmmm, if you're doing a single plot at a time, yes, but trying to do lots of plots in parallel on a RAID = slow. The drive heads are all over the shop.

For example, with my 24 drives the best I could get in any RAID configuration was 25 plots a day vs. 38 in non-RAID mode.


Results on NVMe: 3 plots in the same time, some of them with a 1.5 h difference in start time.
Maybe the old CPU is the reason. I tried with more than 4 threads, but I didn't see much more performance.
With 8 threads per plot my server drew almost 100 W more (I could add a second Xeon X5660 and then have 24 threads instead of 12), and I only gained about 1 hour over 4 threads, so it's not worth it once I weigh the time saved against the electricity cost.
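
Rough numbers behind that trade-off (the electricity price is just an assumed example value, not my real tariff):

    # Back-of-the-envelope: extra energy drawn by running 8 threads instead of 4.
    EXTRA_POWER_W = 100       # ~100 W more at the wall with 8 threads per plot
    PLOT_TIME_8T_H = 8.0      # ~1 h faster than the ~9 h I get with 4 threads
    PRICE_PER_KWH = 0.20      # ASSUMED example price per kWh; plug in your own rate

    extra_kwh = EXTRA_POWER_W / 1000 * PLOT_TIME_8T_H
    print(f"Extra energy per plot: {extra_kwh:.2f} kWh "
          f"(~{extra_kwh * PRICE_PER_KWH:.2f} per plot at the assumed rate)")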

Get enterprise HDDs. I can get a k32 plot per Seagate Exos in about 7.5 hours (and that is limited by the CPU; if I only do one it's around 6.5 hours).

Using JBOD I can do 16 in parallel in about 11.5 hours on average (maybe 12 max), with some staggering. All on a direct SAS backplane with an LSI SAS controller, no USB.

These are 2 GHz procs:

plot1.log:Total time = 40519.503 seconds. CPU (106.970%) Wed May 26 09:54:10 2021
plot1.log:Total time = 41658.441 seconds. CPU (107.070%) Wed May 26 21:37:23 2021
plot2.log:Total time = 41470.850 seconds. CPU (106.720%) Wed May 26 10:10:31 2021
plot2.log:Total time = 41840.015 seconds. CPU (106.890%) Wed May 26 21:57:51 2021
plot3.log:Total time = 44406.543 seconds. CPU (100.640%) Wed May 26 10:59:57 2021
plot3.log:Total time = 44243.793 seconds. CPU (100.550%) Wed May 26 23:31:07 2021
plot4.log:Total time = 41374.674 seconds. CPU (106.240%) Wed May 26 10:09:55 2021
plot4.log:Total time = 41615.152 seconds. CPU (106.180%) Wed May 26 21:54:47 2021
plot5.log:Total time = 41345.625 seconds. CPU (106.470%) Wed May 26 10:24:26 2021
plot5.log:Total time = 42009.870 seconds. CPU (106.400%) Wed May 26 22:14:31 2021
plot6.log:Total time = 41576.419 seconds. CPU (106.880%) Wed May 26 10:28:47 2021
plot6.log:Total time = 42262.881 seconds. CPU (106.250%) Wed May 26 22:23:18 2021
plot7.log:Total time = 41601.136 seconds. CPU (106.420%) Wed May 26 10:29:42 2021
plot7.log:Total time = 42258.423 seconds. CPU (105.930%) Wed May 26 22:24:05 2021
plot8.log:Total time = 41608.655 seconds. CPU (105.740%) Wed May 26 10:30:20 2021
plot8.log:Total time = 42287.111 seconds. CPU (105.610%) Wed May 26 22:26:43 2021

If you do RAID I’d do the biggest stripe size you can…
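
If you want to pull the same numbers out of your own logs, something like this quick sketch does it (it assumes the plotter's usual "Total time = … seconds." summary line, as in the output above, and log files named plot*.log in the current directory):

    # Collect "Total time" from plot*.log and estimate plots per day.
    import glob
    import re

    PARALLEL_JOBS = 8   # jobs running at once (matches the plot1-plot8 logs above; adjust to your setup)

    times = []
    for path in glob.glob("plot*.log"):
        with open(path) as f:
            for line in f:
                m = re.search(r"Total time = ([\d.]+) seconds", line)
                if m:
                    times.append(float(m.group(1)))

    if times:
        avg_h = sum(times) / len(times) / 3600
        print(f"{len(times)} finished plots, average {avg_h:.1f} h each")
        print(f"~{PARALLEL_JOBS * 24 / avg_h:.0f} plots/day with {PARALLEL_JOBS} jobs in parallel")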

Wow, but the price of one Exos disk is higher than my whole hardware :laughing:

How many disks do you have? Are they 10k or 15k RPM?

These are all 7200 RPM SATA; this machine has 24 disks, using 16 to plot (1 per physical core, but running 2 threads per plot process) and 8 destination disks. Years-old/used SAS2 backplane JBOD, single LSI SAS JBOD controller. I'm seeing similar results under Linux for 2U 8-disk and 2U 12-disk Supermicro E5-ish boxes and Xeon 55xx/56xx, but the 55xx/56xx take a few hours longer I think (I don't really have any cranking right now).

For real object storage holding actual data I'd do one controller per 12 disks (or 8 if SSD), but chia plotting, if spaced right and with each disk kept as a separate I/O and throughput zone, is relatively mild compared to high-volume object storage, Usenet, or especially database workloads.


If you are creating plots one by one, then a RAID 0 of 2 HDDs will probably run in close to half the time, as the two drives will share the reads and writes equally (or close to it).

But if you run 2 plots in parallel, then I suspect it would be no different from having no RAID and assigning one plot to each drive.

So there is no real benefit, because you can either:

  1. Build two plots, one on each drive, and that will take 24 hours (resulting in 2 new plots in 24 hours).
  2. Build two plots, via two drives in a RAID 0, and that will take 24 hours (resulting in 2 new plots in 24 hours).
  3. Build 1 plot, via two drives in a RAID 0, and that will take 12 hours (resulting in 2 new plots in 24 hours).

And I am guessing at the times; I cannot test this, but I am probably in the ballpark (rough arithmetic below).
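
Just to make the arithmetic explicit (using the guessed times above, nothing measured):

    # Plots per day for the three options, with the guessed times from above.
    def plots_per_day(hours_per_plot: float, plots_at_once: int) -> float:
        return plots_at_once * 24 / hours_per_plot

    print(plots_per_day(24, 2))   # 1) one plot per drive, two drives    -> 2.0
    print(plots_per_day(24, 2))   # 2) two parallel plots on the RAID 0  -> 2.0
    print(plots_per_day(12, 1))   # 3) one plot at a time on the RAID 0  -> 2.0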

Seriously, they are not that expensive.

I've found this approach useful too, and we at chia-plotting.com are using it…
Are you under Linux? An XFS filesystem and a big block size (4 KB+) have helped a lot too!

  • If you are using it only for plotting, an old NAS server with a RAID of 10k RPM disks can be found cheap on eBay!

I'm on Windows, but it's not a big problem to switch. How much time (%) did you gain from that?

I am testing 7x 300 GB 15K SAS HDDs, not in RAID, with 7 jobs. Each job has its own HDD, on a 12-core Xeon at 2.5 GHz with 32 GB RAM. On the first attempt I am at 5 h for 18%; we'll see what happens. It's only to learn and test :slight_smile:


Started with RAID 0 (2x 7200 RPM 3.5″) vs 1 USB 3.0 2.5″ 5400 RPM disk.
I will post my results soon (I hope :smiley:).

I would disagree about RAID on spinning rust. I have tested a RAID 0 with 6 disks running 6 parallel plots, and it is about 30% SLOWER than 6 plots on 6 drives running separately (but only ONE plot per drive at any time).

Yes, RAID 0 helps with throughput, but it does not help much with random access, and parallel plotting tends to thrash all the drive heads.

I just started a Xeon (v3) plotting 1 per drive (SAS/7200 RPM) in about 14.5-15 hours + destination write, 8 drives in parallel. I am estimating (but will confirm) about 12-14 plots per day; CPU is at about 30-35%.

I only started it today so it is an estimate. I have 10x 10K 900 GB drives on the way next week and am hoping for a tiny bit more (the drives I am testing are 3 TB, so a waste for plotting). Yeah, it's not as fast as SSD, but my plan is to use it for refreshing my farm with poolable plots once my collection of disks is full.

I can't say it is the same with slower or faster disks, but these benchmark at around 155 MB/s in hdparm.
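
If you want a comparable number on Windows (where hdparm isn't available), a crude sequential-write check like this is enough to spot a slow disk; TEST_DIR is a placeholder, and the figure won't match hdparm exactly:

    # Crude sequential-write benchmark: writes a 1 GiB test file and reports MB/s.
    # Not a replacement for hdparm, just a rough sanity check.
    import os
    import time

    TEST_DIR = r"D:\chia-temp"            # placeholder: a directory on the disk to test
    path = os.path.join(TEST_DIR, "seqwrite.tmp")

    block = os.urandom(16 * 1024 * 1024)  # 16 MiB of random data
    blocks = 64                           # 64 x 16 MiB = 1 GiB total

    start = time.time()
    with open(path, "wb", buffering=0) as f:
        for _ in range(blocks):
            f.write(block)
        os.fsync(f.fileno())              # make sure it actually hit the disk
    elapsed = time.time() - start
    os.remove(path)

    print(f"~{blocks * len(block) / elapsed / 1e6:.0f} MB/s sequential write")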


May I ask if the RAID was SAS? Did it have a DDR cache, and how big was the cache?