RAID 0 or multiple drives

Hi, what is the argument for RAID 0 versus separate temp drives?
I am seeing slowdowns when I have a single SSD with more than 3-4 phase 1 plots.
I'm assuming this will improve with RAID 0, or by just adding another drive to split an 8-job phase 1 load.
I'm thinking you get better job control per temp drive.

One argument is basically that you might end up with enough space to fit in another plot run. If you find you can only manage 3 parallel plots on a 960GB SSD, you might find that two in RAID0 can handle 8 if you stagger nicely.

I’m not sure what effect RAID0 has on plotting time on an individual plot level. My perspective is that it’s about throughput (plots/day), not how long any given plot takes.


Using plotman I am watching the seconds-to-complete for each phase. If I go over 4 parallel plots with phase 1 on a Corsair MP600 2TB, I see completion times for phases 1 and 2 go up. I am also offloading phases 3, 4 & 5 onto slower SATA III 7200 rpm disks, which has reduced contention on the first temp drive and therefore improved seconds-to-complete for those phases, especially if temp2 is on the same drive as the destination. So currently, out of a single temp drive as described, I'm looking at 20+ plots a day with a stagger of 90 minutes. Still tinkering :grinning:
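For reference, a setup like the one above roughly translates to a plotman.yaml along these lines. The paths are hypothetical and the scheduling numbers are just the ones mentioned in this post, so treat it as a sketch rather than a drop-in config:

```yaml
# Sketch of a plotman.yaml for the setup above. Paths are made up;
# point them at your own mounts.
directories:
  tmp:
    - /mnt/mp600        # fast NVMe temp drive (phases 1-2 hit this hardest)
  tmp2: /mnt/hdd01      # temp2 on the slower SATA disk...
  dst:
    - /mnt/hdd01        # ...same drive as the destination, so the final
                        # step is effectively a rename on that drive
scheduling:
  global_stagger_m: 90  # the 90-minute stagger described above
  global_max_jobs: 8
  tmpdir_max_jobs: 4    # avoid more than 4 parallel jobs per temp drive
plotting:
  k: 32
  n_threads: 2
  n_buckets: 128
```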


I too have the MP600 but can't get anywhere near 20 a day. What do you mean by offloading phases 3, 4 and 5 (copying?)

And with a stagger of 90 minutes, wouldn't you only be able to make 16 plots? Since 1:30 × 16 = 24 hours.
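The arithmetic behind that question can be sketched out: with a fixed stagger, the launch rate caps your daily throughput no matter how fast each individual plot finishes. (This is just a hypothetical helper to show the math, not anything from plotman.)

```python
def max_starts_per_day(stagger_minutes: int) -> int:
    """Maximum number of plot jobs that can be *started* in 24 hours
    when launches are spaced stagger_minutes apart."""
    return (24 * 60) // stagger_minutes

# A 90-minute stagger allows at most 16 starts per day, so a sustained
# 20+ plots/day is not possible at that stagger.
print(max_starts_per_day(90))  # → 16
```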


I would go with RAID 0; I tested it on mine. RAID 0 with 2x 500 GB drives will be faster than a single 1 TB drive.
However, if you're using a hardware RAID controller, you need a good one.


On one of my systems I have two 500 GB Samsung 980 Pros in RAID 0. I am able to plot four plots in parallel with the addition of a temp2 drive.

Each plot finishes in about five hours flat.

Looks like I was too optimistic stating 20 a day; the most I've managed now is 14, but I'm trying another config with 4 concurrent jobs (3 in phase 1) and zero stagger.
When I mentioned offloading phases 3, 4 and 5: monitoring disk activity shows phases 3 and 4 mainly performing reads and writes to a temp file. This temp file can be directed via the -2 option (the temporary2 variable) to the same drive as the destination. If your destination is like mine, a local SATA III 7200 rpm disk, then the bandwidth is sufficient, and phase 5 simply becomes a rename taking zero seconds to complete.
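As a sketch, the -2 trick described above looks like this on the chia CLI. The paths are made up; the flags are the standard chia plots create options:

```shell
# Hypothetical paths. -t: fast NVMe temp drive (phases 1-2),
# -2: temp2 directed to the slower SATA disk, -d: destination on
# that same disk, so the final move is effectively a rename.
chia plots create -k 32 -r 2 \
  -t /mnt/mp600 \
  -2 /mnt/hdd01 \
  -d /mnt/hdd01
```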

Yeah, me too, 14 is tops. I tried your config with a local destination as temp2, but it looked way slower, since some plots got in each other's way on the disk; I tried to run 5 with a nice stagger.