In my RAID vs non-RAID experiments with parallel plotting, I launched 8 plotters on a single 2TB NVMe. No stagger, all started at the same time.
This morning all the plots finished, which means the temp size of each plot is a little less than 250GB? Not the 256GB usually quoted? If true, that changes a few things. I'll have to check how much available space is left on the 2TB drive.
I can confirm similar result, but with 1 hr staggering.
Have you checked the time on each phase to make sure they all go in and out of each phase at the same time? I wonder whether one plot finishes and gets moved off quicker than the others, leaving you with additional space.
What kind of drive are you using? When I was running 5 staggered plots on my 2TB drive (Gigabyte Aorus PCIe 4.0) I didn't get times close to that. That was running in the Windows GUI, though.
From the graphs shared, it looked like the temp space only hit its full size for a brief moment during phase 2. But your times are so consistent that they should all be at that same point at the same time... odd.
Yep at the end of phase 2.
I think the best way is to measure it - reference
I have not done it myself, but here's some dirty math.
Assume a linear increase of space used during phase 2 (which is oversimplified):
Start 160GB --> End Z GB, where Z is the maximum size an "in-progress" plot can reach
floor(min(phase 2 time)) = 4472 s
space = 160 + (Z - 160) * T / 4472, where T is the phase-2 time elapsed
The most critical moment is when the 8 plotters reach their apex at the end of phase 2.
The slowest plotter has not yet reached its apex, so it uses slightly less space. At the critical point:
max(phase 2) - min(phase 2) = 95.96 s, i.e. a maximum space difference of (Z - 160) * 95.96 / 4472 GB
Find the real space at the end of phase 2:
8 * Z + (Z - 160) * 95.96 / 4472 = 2000
Z = 249.8 GB, which is less than 250GB. Dirty conclusion.
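The final equation above can be solved in closed form; here's a quick Python check of the arithmetic as posted (using the same 2000 GB drive size, 160 GB phase-2 start, 4472 s minimum phase-2 time, and 95.96 s spread):

```python
# Solve 8*Z + (Z - 160) * 95.96 / 4472 = 2000 for Z (equation as posted).
lag = 95.96 / 4472                    # slowest plotter's lag as a fraction of phase 2
Z = (2000 + 160 * lag) / (8 + lag)    # rearranged closed form
print(f"Z = {Z:.1f} GB")              # -> Z = 249.8 GB
```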
To do the proper experiment, I would:
Run a single plot 10 times and compare the disk usage on the same graph, to see if there's variance. We know the final K32 plot size varies, but I'm not sure about the "in-progress" plot size.
Run the 8x parallel plot 10 times and compare the disk usage on the same graph.
Or just study the code, which I don't have the skill for.
Focus on that 10GB up and down variance at the end of phase 2!
Is that about the same per 24 hr period if you run staggered? I've tried quite a few different combos running through a single Gen3 NVMe and can't get much quicker than 1 plot per hour on average. Pretty sure it's a hard cap of the Gen3 / single-drive limitation.
The other weird thing to me is that from that graph, a plot actually doesn’t write that much data.
I mean throughout the whole process it stays around 75 MB/s, which is really not a lot. That also matches the reported 1.8 TB of data written per plot over a 6-hour period.
Also, when I look at Windows perfmon, I see that 3 plots running in parallel don't actually cause very high write speeds, and it often even drops down to 0.
So why do we need fast-writing SSDs? A SATA SSD can write 500 MB/s.
So we (or at least I) am missing a piece of the puzzle here.
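For what it's worth, the average-rate arithmetic above checks out; a quick sketch, assuming the stated 1.8 TB written over 6 hours:

```python
# Average write rate implied by 1.8 TB written per plot over 6 hours.
bytes_written = 1.8e12          # 1.8 TB (decimal)
seconds = 6 * 3600
rate_mb_s = bytes_written / seconds / 1e6
print(f"{rate_mb_s:.0f} MB/s")  # -> 83 MB/s, in line with the ~75 MB/s observed
```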
That is for a single plotter.
When you plot 8x in parallel, for example, the SSD becomes a limit - more reads, more writes, and so on.
However, a super crazy fast SSD (3-4 GB/s+) won't be beneficial either, because I believe at that point the CPU becomes the bottleneck.
I don't see the point of going beyond 1000 MB/s. Most rigs run 8x parallel, and as you said, one plotter uses under 100 MB/s. A super fast SSD may make everything run smoother, but at significantly more cost.
I have two of those NVMes, so the way I normally plot is with 12 plotters (6 on each drive, staggered 40 minutes). I get about 3.6 TiB/day. The plot times range from 25,000 to 28,000 seconds across the 12 plotters.
The times are like that, though, because I'm RAM-limited and take a time hit from running 2400 MHz RAM per plotter.
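Those numbers are roughly self-consistent; here's a quick sanity check, assuming the standard ~101.4 GiB final size for a K32 plot:

```python
# Sanity-check the reported 3.6 TiB/day from 12 parallel plotters.
PLOT_GIB = 101.4                        # typical K32 final plot size (assumption)
plotters = 12
for plot_time_s in (25000, 28000):      # reported plot-time range
    plots_per_day = plotters * 86400 / plot_time_s
    tib_per_day = plots_per_day * PLOT_GIB / 1024
    print(f"{plot_time_s} s -> {tib_per_day:.1f} TiB/day")
```

Both ends of the range land at roughly 3.7-4.1 TiB/day, slightly above the reported 3.6, which is plausible once the 40-minute stagger gaps are accounted for.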
Seems a bit too good to be true; I haven't seen anyone else getting that many plots out of a 5900X.
Conventional wisdom would say 18 plots maximum; I guess 21 with good timings is possible. But it's unlikely you'd get under a 5-hour plot time like he says.