Setup for plotting with WD Velociraptor HDDs

Hi everyone.

After building a nice farmer/mediocre plotter out of second-hand parts, I got bold, started overstretching my PC-building skills, and began planning its bigger plotter brother.

I wanted to try going the HDD plotting path, so I bought 10 cheap second-hand WD Velociraptor 10K 500 GB HDDs (WD5000HHTZ), which I plan to connect to an Adaptec 71605.

I’ve tested plotting solo before on a Velociraptor WD1000DHTZ (same generation as the WD5000HHTZ, but 1 TB) in the farmer machine with a Core i5 4690S 3.2 GHz and got about 8.6 hours per plot. I’m considering buying an i7 or i9 at about the same clock speed with 8 cores to see what I can get in parallel, also benefiting from the shared bandwidth of the 10 SATA drives in RAID 0.
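For context, this is the rough math I’m working from (just a sketch; the 8 parallel plots are an assumption based on one plot per core, and it ignores I/O contention entirely):

```python
# Back-of-the-envelope plots/day estimate. The 8.6 h figure is my
# measured solo plot time; PARALLEL_PLOTS = 8 is a hypothetical
# one-plot-per-core setup, assuming perfect (optimistic) scaling.
HOURS_PER_PLOT = 8.6
PARALLEL_PLOTS = 8

plots_per_day = 24 / HOURS_PER_PLOT * PARALLEL_PLOTS
print(f"~{plots_per_day:.1f} plots/day if parallelism scaled perfectly")
# In practice, I/O contention on the temp drives will pull this down.
```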

I’ve seen SlothtechTV recommend two RAID0 sets with half the drives each for a completely different setup in this thread: Xeon e5-26xx slow plotting - #13 by SlothtechTV

I’m curious what the rationale would be for splitting instead of using a single RAID 0, whether that’s specific to that particular setup, or whether this setup could benefit from it too.

Any additional insights are welcome.

RAID 0 scaling is not linear. Some of it depends on your exact hardware, but normally 2x2 drives will be faster than 1x4 drives. At a certain point, adding an extra drive to the pool does not improve performance at all.
Any RAID 0 beyond 2 drives should only be considered if requirements other than speed demand it (e.g. plot fitting). In your case, I would certainly try 1 plot per disk with no RAID and compare that to 1 plot per 2-disk RAID 0. You will need a processor upgrade though, as your current one is just 4C/4T.

https://www.tomshardware.com/reviews/RAID-SCALING-CHARTS,1635-9.html
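If you want to put numbers on the comparison yourself, a crude sequential-write test gets you most of the way there. A minimal Python sketch, assuming a Linux-style mount point (the path and sizes are placeholders; point it at the single disk, then at the 2-disk array):

```python
import os
import time

# Crude sequential-write benchmark: write 4 GiB in 4 MiB chunks and
# report throughput. TARGET is a placeholder path on the disk or
# array you want to test.
TARGET = "/mnt/plot_temp/bench.tmp"
CHUNK = 4 * 1024 * 1024
TOTAL = 4 * 1024 ** 3
buf = os.urandom(CHUNK)

start = time.monotonic()
with open(TARGET, "wb") as f:
    for _ in range(TOTAL // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())  # ensure the data actually hit the disk
elapsed = time.monotonic() - start

print(f"{TOTAL / elapsed / 1024**2:.0f} MiB/s sequential write")
os.remove(TARGET)
```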


That’s very helpful, Yae. It’s clear now that bundling 10 drives probably isn’t the way to go. Tom’s Hardware’s test shows throughput improving up to 5 drives, after which they hit some bottleneck. I’ll try one plot per drive and set that as a baseline, as you suggest.

I don’t think I get the Interface Bandwidth graph in that link, though; it shows more or less constant bandwidth for all the RAID 0 setups.

The way I read it: the increase to up to 30 ms latency in the worst case of the 8-drive setup would clearly harm random reads, since access time is the time between your computer sending a request to the disk array and the moment it starts receiving the first byte. But once it gets going, the array can still deliver data at a rate comparable to the other setups: when you ask for a large block of sequential data, the array, once it starts sending, delivers it at that throughput rate, i.e. the amount of data requested divided by the time between the first and the last byte delivered.
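To sanity-check that reading, some rough math in Python (all figures here are illustrative assumptions, not numbers from the article):

```python
# Illustrative only: how much a one-off 30 ms access time matters
# for a large sequential read, given an assumed sustained throughput.
ACCESS_TIME_S = 0.030    # worst-case latency, 8-drive setup
THROUGHPUT_MBPS = 500    # assumed sustained MB/s once transfer starts
REQUEST_MB = 1024        # read 1 GiB sequentially

transfer_s = REQUEST_MB / THROUGHPUT_MBPS
effective = REQUEST_MB / (ACCESS_TIME_S + transfer_s)
print(f"effective throughput: {effective:.0f} MB/s "
      f"(latency costs {ACCESS_TIME_S / (ACCESS_TIME_S + transfer_s):.1%})")
```

With those numbers the 30 ms access time costs under 2% on a 1 GiB read, which matches the nearly flat bandwidth graph, while small random reads would feel the full latency on every request.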
