In my experiment comparing RAID vs non-RAID with parallel plotting, I launched 8 plotters on a single 2TB NVMe. No stagger; all started at the same time.
This morning all the plots finished. Which means the temp size of each plot is a little less than 250GB? Not 256GB like they say? If true, this will change some things. I'll have to check how much space is left available on the 2TB drive.
Good news nonetheless
I started some in parallel with the CLI to plot 30 each, and after 10+ plots I noticed that one of the plots was in phase 4 while others were in phase 2.
Can't figure out why some are faster than others.
But on a 2TB drive I tried 6 plots and some didn't finish.
Could it be that when some space is freed, the plots that are waiting continue?
I can confirm similar result, but with 1 hr staggering.
Have you checked the time on each phase to make sure they go in and out of each phase at the same time? I wonder if one plot finishes and is moved quicker than the others, so you are left with additional space.
From previous graphs, I think the most temp space used was during Phase 2. Here is the raw data:
| ID | P1Time (s) | P2Time (s) | P3Time (s) | P4Time (s) | Total (s) |
| --- | --- | --- | --- | --- | --- |
| P1 | 10842.140 | 4493.490 | 9966.966 | 727.942 | 26030.549 |
| P2 | 10841.362 | 4480.111 | 9979.603 | 823.886 | 26124.975 |
| P3 | 10841.759 | 4489.266 | 9971.376 | 809.529 | 26111.944 |
| P4 | 10933.817 | 4576.067 | 9940.881 | 622.537 | 26073.314 |
| P5 | 10841.662 | 4480.609 | 9972.784 | 829.329 | 26124.395 |
| P6 | 10841.392 | 4472.430 | 9981.166 | 830.461 | 26125.462 |
| P7 | 10877.909 | 4527.068 | 9927.701 | 787.200 | 26119.893 |
| P8 | 10877.564 | 4514.503 | 9939.940 | 798.677 | 26130.698 |
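To put a number on how consistent the runs were, here is a quick sketch (values copied from the table above) computing the per-phase spread (max − min) across the 8 plotters:

```python
# Phase times in seconds for the 8 parallel plotters, copied from the table above.
rows = {
    "P1": (10842.140, 4493.490, 9966.966, 727.942, 26030.549),
    "P2": (10841.362, 4480.111, 9979.603, 823.886, 26124.975),
    "P3": (10841.759, 4489.266, 9971.376, 809.529, 26111.944),
    "P4": (10933.817, 4576.067, 9940.881, 622.537, 26073.314),
    "P5": (10841.662, 4480.609, 9972.784, 829.329, 26124.395),
    "P6": (10841.392, 4472.430, 9981.166, 830.461, 26125.462),
    "P7": (10877.909, 4527.068, 9927.701, 787.200, 26119.893),
    "P8": (10877.564, 4514.503, 9939.940, 798.677, 26130.698),
}

labels = ["P1Time", "P2Time", "P3Time", "P4Time", "Total"]
spreads = {}
for i, label in enumerate(labels):
    vals = [r[i] for r in rows.values()]
    spreads[label] = max(vals) - min(vals)
    print(f"{label}: spread = {spreads[label]:.1f} s")
```

Phase 2 spreads by roughly 104 s across the eight plotters, i.e. only about 2% of the ~4500 s phase, so they do hit the end-of-phase-2 apex at almost the same moment.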
What kind of drive are you using? When I was running 5 plots staggered on my 2TB drive I didn't get times close to that (Gigabyte Aorus PCIe 4). That was running in the Windows GUI though.
From the graphs shared, it looked like the temp space only hit its full size for a brief moment during phase 2. But your times are so consistent that they should all be at that same point at the same time… odd.
Yep at the end of phase 2.
I think the best way is to measure it. I have not done it myself, but here is some dirty math.
Assume a linear increase of space used during phase 2 (which is oversimplified):
Start 160GB --> End Z GB, where Z is the maximum size of the on-going plot
floor(min(phase 2)) = 4472 s
space = 160 + (Z - 160) * T / 4472, where T is the phase 2 time elapsed
The most critical moment is when all 8 plotters reach the apex at the end of phase 2.
The slowest plotter has not reached its apex yet, so it uses slightly less space. The critical difference is:
max(phase 2) - min(phase 2) = 103.64 s (from the table), so the maximum difference is (Z - 160) * 103.64 / 4472 GB
Find the real space at the end of phase 2:
8*Z + (Z - 160) * 103.64 / 4472 = 2000
Z = 249.7 GB, which is less than 250GB; dirty conclusion
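The dirty math above can be checked numerically, taking the phase 2 times and the spread directly from the table earlier in the thread (it still inherits the linear-growth oversimplification):

```python
import math

# Phase 2 times (s) for the 8 plotters, from the table earlier in the thread.
p2_times = [4493.490, 4480.111, 4489.266, 4576.067,
            4480.609, 4472.430, 4527.068, 4514.503]

start_gb = 160.0                   # assumed temp space at the start of phase 2
disk_gb = 2000.0                   # the 2TB drive
t_min = math.floor(min(p2_times))  # 4472, as above
spread = max(p2_times) - min(p2_times)

# 8*Z + (Z - start) * spread / t_min = disk  ->  solve for Z
r = spread / t_min
Z = (disk_gb + start_gb * r) / (8 + r)
print(f"spread = {spread:.2f} s, Z = {Z:.1f} GB")
```

With the spread recomputed directly from the table this lands at about 249.7 GB, consistent with the "a little less than 250GB" observation at the top of the thread.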
To do a proper experiment, I would:
- Run a single plot 10 times and compare the disk used on the same graph, to see if there's variance. We know the final K32 plot size varies, but I'm not sure about the "on-going" plot size.
- Run the 8x parallel plotting 10 times and compare the disk used on the same graph.
- Or just study the code, for which I don't have enough skill.
- Focus on that 10GB up-and-down variance at the end of phase 2!
- Log the data at the smallest time increment possible.
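For the last step, a minimal logging sketch could look like the following (the temp path, sampling interval, and output filename are placeholders, not from the thread):

```python
import csv
import shutil
import time

def log_disk_usage(path, interval_s, samples, out="disk_usage.csv"):
    """Sample used space on the temp drive at a fixed interval and log to CSV."""
    start = time.time()
    with open(out, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s", "used_gb"])
        for _ in range(samples):
            # Decimal GB, matching the units on the graphs discussed below.
            used_gb = shutil.disk_usage(path).used / 1e9
            writer.writerow([round(time.time() - start, 1), round(used_gb, 2)])
            time.sleep(interval_s)

# e.g. log the plot temp drive every 5 seconds for an hour:
# log_disk_usage("/mnt/plot-temp", 5, 720)
```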
BTW, your CPU is so powerful.
This is with the PCIe 3 Inland Premium 2TB NVMe. I used a PowerShell script to kick them all off at the same time.
CPU is the 5900X. Plotter settings were 3 threads and 3416 MiB RAM (I accidentally forgot to change the RAM).
Is that about the same per 24-hour period if you run staggered? I've tried quite a few different combos running through a single Gen3 NVMe and can't get much quicker than 1 plot per hour on average. Pretty sure it's a hard cap of the Gen3 / single-drive limitation.
I just noticed that the units on the graph are megabytes (MB), not mebibytes (MiB). So indeed it's less than 250GB per plot.
The other weird thing to me is that, judging from that graph, a plot actually doesn't write that much data.
I mean, throughout the whole process it stays around 75 MB/s, which is really not a lot. That also matches the reported 1.8 TB of data written per plot over a 6-hour period.
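As a quick sanity check, the reported 1.8 TB written over a 6-hour plot works out to an average rate of:

```python
# Average write rate implied by 1.8 TB written over a 6-hour plot (figures from the post).
data_written_tb = 1.8
plot_hours = 6.0

avg_mb_per_s = data_written_tb * 1e6 / (plot_hours * 3600)  # 1 TB = 1e6 MB (decimal units)
print(f"{avg_mb_per_s:.1f} MB/s")
```

That comes out to roughly 83 MB/s, in the same ballpark as the ~75 MB/s seen on the graph.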
Also, when I look at Windows Performance Monitor, I see that 3 plots running in parallel don't actually cause very high write speeds, and the rate often even drops to 0.
So why do we need fast-writing SSDs? A SATA SSD can write 500 MB/s.
So we (or at least I) are missing a piece of the puzzle here.
That is for a single plot.
When you plot 8x in parallel, for example, the SSD becomes a limit: more reads, more writes, and so on.
However, a super fast SSD (3-4 GB/s+) won't be beneficial either, because I believe at that point the CPU becomes the bottleneck.
I don't see the point of going beyond 1000 MB/s. Most rigs run 8x parallel, and as you said one plotter uses under 100 MB/s. A super fast SSD may make everything smoother, but at significantly more cost.
I did 4 plots on a 1TB drive; they were staggered, but the drive never looked even close to full.
Right now I have 3 plots running on a 1TB at 30%, 62% and 95% complete, and the drive says 366GB free of 953GB.
I have two of those NVMes, so the way I normally plot is with 12 plotters (6 on each drive, staggered 40 minutes). I get about 3.6 TiB/day. Plot times range from 25000 to 28000 seconds across the 12 plotters.
The times are like that, though, because I'm RAM limited and take a time hit for using 2400 MiB RAM per plotter.
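Those numbers hang together as a rough cross-check, assuming ~101.4 GiB per finished K32 plot and back-to-back plotting (which slightly overestimates, since it ignores the 40-minute stagger gaps):

```python
PLOT_SIZE_GIB = 101.4      # approximate size of a finished K32 plot (assumption)
plotters = 12
avg_plot_time_s = 26500    # midpoint of the 25000-28000 s range above

plots_per_day = plotters * 86400 / avg_plot_time_s
tib_per_day = plots_per_day * PLOT_SIZE_GIB / 1024
print(f"{plots_per_day:.1f} plots/day ≈ {tib_per_day:.1f} TiB/day")
```

That gives about 3.9 TiB/day as an upper bound, close to the reported 3.6 TiB/day once the stagger gaps are accounted for.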
Can you please share a screenshot of your settings the next time you start plotting?
I am also building my PC with:
SSD: Intel DC P4500 Series 4.0TB 2.5" SSD
I am a bit confused when selecting RAM and threads.
If I select 3390 RAM and 3 threads, does that mean the setting is for each individual plot?
Check this thread: one guy is making 21 plots in parallel. Is he assigning 2 threads to each plot?
Correct, those settings are per plot.
Seems a bit too good to be true; I haven't seen anyone else getting that many plots out of a 5900X.
Normal wisdom would say 18 plots maximum; I guess 21 with good timings is possible, but it's unlikely you'll get under a 5-hour plot time like he says.