Opinion on [1x 2TB RAID 0] or [2x 1TB] Drives

Quick Context

  • 2x Samsung 980 Pro 1TB M.2 NVMe drives
  • PCIe 3.0 riser card (link)
  • ASUS TUF Gaming Z590-Plus mobo
  • Plots require 270 GB of temp space each, so:
    • 3 plots per 1 TB drive (3 × 270 GB = 810 GB), or
    • 7 plots per 2 TB RAID 0 array (7 × 270 GB = 1,890 GB).

Originally I was plotting:

  • Parallel [3x] → /SSD1 → /SATA1
  • Parallel [3x] → /SSD2 → /SATA2

This kept 6 parallel plots running at any given time, keeping both SSDs independently busy and writing their results out to different SATA drives.

This wasn’t resulting in particularly fast times and left a lot of CPU sitting idle, so I fired up mdadm on Linux, created a RAID 0 array out of the two SSDs, formatted it as XFS, and mounted it with discard (thanks @Quindor ). Now I’ve done test plots of everything from single plots (for a baseline) up to 7 parallel plots.
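
For reference, here’s roughly what I ran - a minimal sketch, assuming the two drives show up as /dev/nvme0n1 and /dev/nvme1n1 and mount at /mnt/plotting (adjust for your system):

```bash
# Stripe the two NVMe drives into a single RAID 0 array
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Format as XFS and mount with online discard (TRIM) enabled
sudo mkfs.xfs /dev/md0
sudo mkdir -p /mnt/plotting
sudo mount -o discard /dev/md0 /mnt/plotting
```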

The 7 parallel plots were excruciatingly slow - about 2x as long to complete as the plots done 2 at a time.

When I sit and watch the disk, it’s mostly doing a lot of ~100-200 MiB/s reads and writes… very rarely do I see a 1.5 GB/s spike.
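
(In case anyone wants to watch along: I’m using iostat from the sysstat package - a minimal sketch; the device names are assumptions from my setup.)

```bash
# Extended per-device stats in MB/s, refreshed every 5 seconds;
# watch the rMB/s and wMB/s columns for the array and its member drives
iostat -xm /dev/md0 /dev/nvme0n1 /dev/nvme1n1 5
```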

Using GParted (the Ubuntu Disks app) I benchmarked the disk: read speeds sit at around 6 GB/s, with write speeds being somewhat garbage at 300-500 MB/s sustained.

I’m wondering if I’m forcing the software RAID setup to respond to so many commands that I’m actually slowing down processing, and whether folks would recommend against it.

FWIW - I never had to turn bifurcation on in my BIOS (that’s one of the perks of this card), which makes me think the card is doing some intermediation of the drives mounted to it (a basic controller), and I’m wondering if I’m saturating that.

I wish I could get my hands on an ASUS Hyper V4 but those are on backorder for infinity and the ones on eBay are 4x what they used to be.

Anyway - just looking to learn from folks who have been here and done this. I’m going to continue testing and sharing what I find as I go; it’s just slow going when the time between sample points is 6 hrs :slight_smile:

With software RAID in Unix environments, can you still TRIM? That’s my primary concern - without it you lose performance over time and reduce the drives’ lifespan.

Why not just use the native M.2 slots? With that riser you’re dropping down to PCIe gen 3 instead of 4.

What CPU are you using?

Yikes - I didn’t even know this was a problem, so I started digging, and it seems it IS still a problem (with a hacky workaround). Thanks for the heads up @leadfarmer !

Great question - there’s a 180-reply thread of my adventures getting this build up and off the ground, and everything I suffered through to get it working :slight_smile:

I have an 11th-gen Intel CPU (11700K), so the PCIe 4.0 slot works, but there is only 1 of them (that’s the PCIe slot I have the riser in).

PCIe 3.0 has a max throughput of 15.75 GB/s @ x16, so I don’t think I’m hamstrung there.
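
That said, each M.2 drive only negotiates an x4 link, so on gen 3 the per-drive ceiling is roughly 3.9 GB/s (985 MB/s per lane × 4 lanes). A quick way to confirm what each drive actually negotiated - a sketch using standard Linux sysfs paths (the nvme numbering is an assumption):

```bash
# Print the negotiated PCIe speed and width for each NVMe drive;
# 8.0 GT/s = gen 3, 16.0 GT/s = gen 4
for dev in /sys/class/nvme/nvme?; do
  echo "$dev: $(cat "$dev/device/current_link_speed") x$(cat "$dev/device/current_link_width")"
done
```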

I can’t use either of the other 2 M.2 slots lower on the board (the ones that support Intel Rapid Storage RAID), because populating them disables the SATA5 and SATA6 ports and I’d lose 2 of the drives in this build.

Then maybe, instead of the M.2 splitter, use a PCIe slot for more SATA ports and put the drives in those native M.2 slots. It seems this card isn’t getting you the performance it should; or just try the other slots and see what you get.

The article you linked shows how to see whether your drives are exposed for TRIM/discard, so you could test whether it still works. Check out some info here as well: Chia Plotting SSD Buying Guide. If you have the M.2s in native slots and use mdadm instead, I would think discard should work appropriately - at least that’s what a bunch of people seem to be doing…
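
If you want to test it directly, a minimal sketch, assuming the md array from earlier is /dev/md0 mounted at /mnt/plotting (both names are assumptions):

```bash
# Non-zero DISC-GRAN / DISC-MAX values mean the array exposes discard
lsblk --discard /dev/md0

# Manually trim the mounted filesystem; -v reports how many bytes were trimmed
sudo fstrim -v /mnt/plotting
```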

Also, funny enough, I bought the same card, but went with my two native M.2 slots and two more drives in PCIe gen 4 adapters (each only needs an x4 slot, I believe).

Even the PCIe slot adapters aren’t perfect - so far those drives run about 23 min slower than the native M.2 drives. Imagine what that Syba card is doing…

Damn, OK, now you’ve got me thinking… out of curiosity, which cards did you buy, and are you relatively happy with them?

Hadn’t thought about picking up a PCIe SATA card… good call.

Low Profile PCI-E 3.0 x4 Lane to M.2 NGFF M-Key SSD Nvme AHCI PCI Express Adapter Card

They work OK, I guess. Not thrilled about the 23 min delay, but I can’t really complain at this price. That was only over 8 plots, though, so we’ll see if the delay continues or if the cards even out.

Yep, I agree - don’t worry about losing SATA ports on your motherboard; use those M.2 slots instead. You can get better features with add-on cards anyway. Here’s the card I’m using for extra SATA ports, and I love it. It’s PCIe x4 (instead of x1 like a lot of the cheaper SATA cards): Amazon.com: Ableconn PEX-SA156 6-Port SATA 6G PCI Express x4 Host Adapter Card - AHCI 6Gbps SATA III Port-Multiplier PCIe 3.0 4-Lane Low Profile Controller Card (ASMedia ASM1166): Computers & Accessories

And it supports port multipliers, so pick up six of these and you can hang 30 SATA drives (6 ports × 5 drives each) off this single card! These also support hot-swapping! Amazon.com: PUSOKEI 1 to 5 Sata Port Multiplier for SATA Expander Hard Disk Riser Card 6.0Gbps, for SATA III 1 to 5 Expansion Card for WinXP/Win7/Win8/Win10: Electronics

Here’s my beast running with 20+ SATA HDDs literally hanging from this card :laughing: Show off your rigs! - Chia Plotting - Chia Forum

That’s scrappy as hell, I LOVE IT!

Thank you for the links - ordered both to have them “in stock” when I need them.

I have 3 plots I’m letting wrap up, then I’ll pull the riser card and throw the drives down into those 2 M.2 slots for the time being and see if the performance picks up.

1 Like

Besides the TRIM problems, all you’re doing with RAID 0 is making sure that the slowest drive (the one with the highest write amplification that has to shuffle a lot of data around to write, or that’s garbage collecting or doing some other maintenance at the moment) dictates the pace of both your drives.

I would not use RAID 0 on SSDs unless you don’t have the space to fit a single plot on a single drive (e.g., you have 120 GB drives).

Thank you Yae - I liked the way you phrased that.

The last 2 plots are wrapping up here in about 20 mins; then I’ll pull the riser, put the SSDs directly on the mobo, and plot to them individually to see how that looks speed-wise.

@Yae you are a genius… I’m serious, it’s SIGNIFICANTLY faster, with consistent read/write cycles, now that I’m plotting in parallel to separate drives.

[image: disk throughput chart, now showing ~3 GB/s]

This was 100-200 MB/s consistently before, with 3 GB/s jumps nowhere in sight.

Now I’m not sure if the riser card is to blame or the software RAID, but either way I’m happy.

I am going to do 1 more test after this: using Intel RST to put the 2 drives into a RAID 0 array in the BIOS. That’ll let me fit 1 more parallel plot onto the array (7 instead of 3 + 3), and then I’ll plot and time that.

That 3 GB/s number is not sustainable long-term. I suspect it decreases to around 1,100 MiB/s as time goes on.

The speeds you mean?

Yes, they definitely slowed down, but the utilization chart looks better for the drives and the CPU, so I think it’s a net win.

@leadfarmer you are right, the speeds settled down to somewhere between 300-600 MB/s, but that’s about 2x what I was seeing with the PCIe riser card and RAID 0 - so definitely happy about this.

Yup, unless you have some fancy MLC drives, TLC drives only have so much they can give before their sustained write speed plummets. Thankfully the 980 Pro you said you’re running is great even for sustained writes, even though the TBW rating is only 1,200 TB. The only better drives at that point would be enterprise MLC.
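
If anyone wants to see where their own drive’s fast cache runs out, here’s a minimal sketch with fio; the directory, job name, and 200G size are assumptions (the size just needs to be well past the drive’s SLC cache):

```bash
# Sequential 1 MiB writes with direct I/O so the page cache doesn't hide
# the drop-off; watch the reported bandwidth fall partway through the run
fio --name=sustained-write --directory=/mnt/plotting --rw=write \
    --bs=1M --size=200G --direct=1 --ioengine=libaio --iodepth=16
```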

Hello - is the “Dual M.2 NVMe Ports to PCIe 3.0 x16 Bifurcation Riser Controller - Support Non-Bifurcation Motherboard SI-PE” available in a PCIe gen 4 version?