Direct comparison of 980 pros?

Now that 2TB 980 Pros are disappearing off the face of the earth… has anyone done direct comparisons of 2x 2TB 980 Pros to, say, 4x 1TB 980 Pros in RAID 0? Or how 2x 2TB 980 Pros would compare to 4x 1TB WD SN850s, or 4x 1TB 970 Pros? I’ve seen some conflicting things but not much in terms of specifics, so I’m curious to hear people’s experiences.

Here @codinghorror reported using six (6) full-bandwidth M.2 NVMe 980 Pro 2TB drives in Hyper M.2 cards.
@chianudist was using 2x 2TB NVMe on PCIe Gen3 x4 lanes, and 1x 1TB NVMe on Gen2 x4 lanes (max 6 plots per 2TB NVMe, 3 per 1TB NVMe).
@Quindor was using 4x Corsair Force MP510 1TB in MDADM RAID0 with XFS
@Harris had 2x Samsung 970 Pro 1TB M.2 NVMe PCIe Gen 3, and 2x Intel P3600 1.2TB AIC NVMe PCIe Gen 3

The plots/day numbers were very different, and I’m sure CPU optimizations, OS, RAM, and various other things were playing into it. They varied from the 20s-30s plots/day to more than 50/day, which makes direct comparisons hard.

So who has experience with various NVMe configs within the same setup? I’ve seen so many different things; curious about some data points if people want to share.

I can say I’ve used 4x 2TB Inland Platinums, each in a dedicated full-bandwidth Gen4 M.2 slot, and better slots don’t make crappy SSDs better. These are TERRIBLE; if you have them, throw them away. I’m still getting 50m+ I/O wait times running a max of 2 jobs per drive on a 5800X.

More drives is always better. The reason is that over time the super-fast SLC buffer gets filled on every non-MLC drive and the drive slows down. 64K sustained write performance being equal, 4x 1TB 980 Pro vs 1x 4TB 980 Pro, I’d take the 4x 1TB config every time. However, it’s a little more complex, because you may be able to run more plots in parallel on a 2x 2TB config, since the ~250-290GB of temp space per plot doesn’t divide into 1TB efficiently. So maybe 2x 2TB is a better config for that reason; see the quick math below.
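To put numbers on that last point, here’s a quick back-of-the-envelope sketch. The ~260GB temp figure and the ~1000GB/2000GB usable capacities are assumptions for illustration, not measurements:

```python
# Rough parallel-plot math: how many plot temp dirs fit per drive.
TEMP_PER_PLOT_GB = 260  # assumed temp space per plot (the ~250-290GB above)

def parallel_plots(usable_gb_per_drive, num_drives):
    """Plots that fit concurrently if each drive holds whole plots only."""
    return num_drives * (usable_gb_per_drive // TEMP_PER_PLOT_GB)

print("4 x 1TB:", parallel_plots(1000, 4))  # 4 * 3 = 12 plots
print("2 x 2TB:", parallel_plots(2000, 2))  # 2 * 7 = 14 plots
```

The larger drives waste less capacity to the leftover remainder on each drive, which is where the extra parallelism comes from.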


Yeah, but you’re not usually actually using 256GB x 8 at any one time if you’re staggering, which is why most people seem to be able to get away with more on a 2TB. Also, if you RAID the drives you don’t have to worry about it. I also figured the 4x1 would be better because you’re tapping more PCIe lanes, but that’s just a thought, I don’t know it to be true.
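A tiny simulation of that staggering point. The temp-usage profile numbers here are made-up placeholders just to illustrate the shape, not measurements from a real plotter:

```python
# Approximate one plot's temp-space usage as a few coarse phases, then
# sum across staggered plots to see the peak concurrent footprint.
# (hours in phase, approx GB of temp held during it) - illustrative only
PROFILE = [(4, 180), (3, 256), (2, 150), (1, 60)]
PLOT_HOURS = sum(h for h, _ in PROFILE)

def usage_at(hours_into_plot):
    """Temp GB a single plot holds this many hours after it started."""
    elapsed = 0
    for hours, gb in PROFILE:
        if hours_into_plot < elapsed + hours:
            return gb
        elapsed += hours
    return 0  # plot finished, temp freed

def peak_concurrent_gb(num_plots, stagger_hours, step=0.25):
    """Worst-case total temp usage across all staggered plots."""
    total = PLOT_HOURS + stagger_hours * (num_plots - 1)
    peak, t = 0, 0.0
    while t <= total:
        peak = max(peak, sum(usage_at(t - i * stagger_hours)
                             for i in range(num_plots)
                             if t >= i * stagger_hours))
        t += step
    return peak

# e.g. 8 plots on one 2TB drive, started 1.5h apart
print(peak_concurrent_gb(8, 1.5), "GB peak vs", 8 * 256, "GB if all peaked at once")
```

With any reasonable stagger the plots sit in different phases, so the peak aggregate temp usage stays well below plots x 256GB, which is why people can squeeze more onto a 2TB drive than the naive division suggests.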

I’ve heard the 1TB drives don’t have as large an SLC cache or DDR4 DRAM buffer, but I don’t know enough about the drives themselves.

Just to clarify, I’m running a 5900X with 4x 980 Pro 1TB; I never advocated the 2TB version. It’s better to have 4x 1TB than 2x 2TB. I also have a 5900X running with 4x MP510 1TB. Although both perform great, the 4x 980 Pro 1TB system shows little to no IOwait anymore, so it performs best. :slight_smile:


I’d argue that if you have the staggering set up perfectly like this, then the only disadvantage of a 4x 1TB build is the longevity of the drives. 2TB drives are usually rated for twice the endurance of 1TB drives. I believe those 1TB drives will fail sooner, but the 2TB drives would have twice the data written to them in the same time (theoretically), so it’s kind of a toss-up there. Would be curious about your disk health for all drives, @Quindor.

TBW is exactly proportional to capacity between the 1TB and 2TB models. Having 2 drives (1TB minimum) is twice as fast though, without costing more.

Currently at 347TB written on this 980 Pro 1TB with 21% health used up. Extrapolating that still points towards 1700 actual TBW for the Chia plotting workload.

Having 4 in RAID0, that would mean I can write at least 6800TB to them; with ~1.6TB written per plot, that would give me at least 4250 plots. And that’s just the point where the health runs down to 0, no clue when they’d really die.
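For anyone who wants to follow the arithmetic, here it is spelled out. This is a rough, linear extrapolation that assumes the SMART health counter tracks writes:

```python
# Extrapolating drive endurance from the wear reported above.
written_tb = 347      # TB written so far on one 980 Pro 1TB
health_used = 0.21    # 21% of SMART "health" consumed

tbw_per_drive = written_tb / health_used   # ~1650TB, rounded up to ~1700 above
tbw_four_drives = 4 * tbw_per_drive        # ~6600-6800TB across the RAID0 set
plots = tbw_four_drives / 1.6              # ~1.6TB of temp writes per plot
print(round(tbw_per_drive), round(tbw_four_drives), round(plots))
```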

But since I have multiple machines (I don’t have 1PB of storage :P), these SSDs will survive just fine for alternative purposes after that. :slight_smile: