Best way to plot with 12 threads?

Hi, I'm using a Ryzen 5 3600 with 12 threads, 32GB RAM, and a 1TB M.2 SSD.
I'm planning to run 3 parallel plots with 4GB RAM and 3 threads per plot.
So my plan is 3 plots in parallel, using 12GB RAM and 9 threads in total.
How much delay time should I set between them? Is 1 hour enough?
Do you think this is safe, or will I overload my CPU?
If you have better ideas, I'm grateful to hear them. Any recommendations!
Thanks in advance

Forget the delay function. You can never correctly time how long plots take, in total or by phase; there are too many variables. Your system will eventually get overloaded and crash the plotting, ruining your day. Just queue them: multiple queues, with your own delays/timings.


So what exactly do you recommend? That I shouldn't plot in parallel, or?

Multiple queues, just make your own delays. That is: set a queue going… wait… set another queue going… repeat for the number of queues you want. Voilà, parallel plotting!
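A minimal sketch of the "multiple queues with your own delays" idea, assuming a small Python launcher. The command here is just a placeholder (`echo`); substitute your real plotter invocation with whatever thread/RAM flags you settled on.

```python
import subprocess
import time

# Placeholder command -- swap in your actual plotter invocation here
PLOT_CMD = ["echo", "plot queue started"]

def start_queues(n_queues, delay_s, cmd=PLOT_CMD):
    """Start n_queues plotting queues, waiting delay_s between starts."""
    procs = []
    for i in range(n_queues):
        procs.append(subprocess.Popen(cmd))
        if i < n_queues - 1:
            time.sleep(delay_s)  # your own manual stagger between queues
    return procs

if __name__ == "__main__":
    # Three queues, staggered by 1 second for demo; use hours in practice
    procs = start_queues(3, delay_s=1)
    for p in procs:
        p.wait()
```

Each queue then churns through its plots back-to-back, so you never have to predict total plot times; only the initial stagger between queues is yours to choose.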


I have a near identical setup currently:

Ryzen 3600
32GB RAM @ 3600 MHz
2TB NVMe 4.0 M.2

The best thing to do is run a single plot with the resource parameters that you want to use (i.e. 4 threads and 4GB per plot), then take note of the phase completion times.
Phase 1 takes about 2 hours with my setup and those parameters.
If you plan to plot on only a single 1TB drive, you should be able to get away with 2 plots in phase 1, with 2 further plots in the later phases. To achieve this, set your stagger/delay time to slightly more than half your phase-1 plot time.
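That stagger rule can be written as a quick calculation. The 10% margin here is my own assumption for "slightly more than" half:

```python
# Measured phase-1 time for one plot (from the single test run above)
phase1_hours = 2.0
plots_in_phase1 = 2   # how many plots you want overlapping in phase 1

# "Slightly more than" half the phase-1 time; +10% is an assumed margin
stagger_hours = (phase1_hours / plots_in_phase1) * 1.1

print(f"Stagger the next plot start by about {stagger_hours:.1f} hours")
```

With a 2-hour phase 1, that comes out to roughly 1.1 hours between plot starts.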

I’ve partitioned my 2TB drive into two 1TB sections. This allows me to stagger two parallel plotters, each running four plots, with a stagger time slightly greater than my expected phase-1 time. I start the second plotter just as the first plot in the first plotter finishes. It works out to just over 16 plots a day and uses at most 10 threads and roughly 12-13GB of RAM.


I can’t vote this up enough. I’ve tried timing, even aggressively broad (4x), and I always wake up to a hung process and have to kill everything.