This kept 6 parallel plots running at any given time, keeping both SSDs independently busy and writing their results out to different SATA drives.
This was not producing particularly fast times and left a lot of CPU idle, so I fired up mdadm on Linux and created a RAID 0 array out of the two SSDs, formatted it as XFS, mounted it with discard (thanks @Quindor ), and now I’ve done test plots of everything from single plots (for a baseline) up to 7 parallel plots.
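For anyone following along, the array setup was roughly this (a sketch of the commands involved; the device names and mount point are placeholders for your own hardware, and note that `mdadm --create` wipes the member drives):

```shell
# Stripe the two NVMe SSDs into a single RAID 0 array
# (placeholders: /dev/nvme0n1, /dev/nvme1n1, /mnt/plot-tmp)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 \
    /dev/nvme0n1 /dev/nvme1n1

# Format as XFS and mount with continuous TRIM (the "discard" option)
sudo mkfs.xfs /dev/md0
sudo mkdir -p /mnt/plot-tmp
sudo mount -o discard /dev/md0 /mnt/plot-tmp
```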
The 7 parallel plots were excruciatingly slow - about 2x as long to complete as the plots done 2 at a time.
When I sit and watch the disk it’s mostly doing a lot of ~100-200 MiB/s reads and writes… very rarely do I see a 1.5 GB/s spike.
Using the Ubuntu Disks app (GNOME Disks) I benchmarked the disk: read speeds sit at around 6 GB/sec, with write speeds being somewhat garbage at 300-500 MB/sec sustained.
I’m wondering if I’m forcing the software RAID setup to respond to so many commands that I’m actually slowing down processing and if folks would recommend against it.
FWIW - I never had to turn bifurcation on in my BIOS - that is one of the perks of this card - which makes me think it’s doing some intermediation of the drives mounted to it (a basic controller), and I’m wondering if I’m saturating that.
I wish I could get my hands on an ASUS Hyper V4 but those are on backorder for infinity and the ones on eBay are 4x what they used to be.
Anyway - just looking to learn from folks that have been here and done this. I’m going to continue testing and sharing what I find as I go; it’s just slow going when the time between sample points is 6 hrs.
With software RAID in Unix environments can you still TRIM? That’s my primary concern. Without that you lose performance over time and reduce lifespan.
Yikes - I didn’t even know this was a problem so I started digging and it seems it IS still a problem (with a hacky work around). Thanks for the heads up @leadfarmer !
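For anyone else checking this on their own box, a quick way to see whether discard actually reaches the drives through the md array (a sketch; /dev/md0 and the mount point are placeholders for your setup):

```shell
# Non-zero DISC-GRAN / DISC-MAX columns mean the device accepts discards
lsblk --discard /dev/md0

# Manually TRIM the mounted filesystem and report how much was trimmed
sudo fstrim -v /mnt/plot-tmp
```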
Great question - there is a 180-reply thread of my adventures getting this build up and off the ground and everything I suffered through to get it working.
I have an 11th Gen Intel (11700K), so the PCIe 4.0 slot works, but there is only 1 of them (that’s the PCIe slot I have the riser in).
PCIe 3.0 has a max throughput of 15.75 GB/sec @ x16 - so I don’t think I’m hamstringed there.
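For anyone checking my math: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, which is where the 15.75 GB/sec figure for x16 comes from:

```shell
# 8 GT/s per lane * 128/130 encoding efficiency / 8 bits per byte * 16 lanes
awk 'BEGIN { printf "%.2f GB/s\n", 8e9 * 128/130 / 8 * 16 / 1e9 }'
# prints 15.75 GB/s
```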
I can’t use either of the other 2 M.2 slots lower on the board (the ones that support Intel Rapid Storage RAID) because doing so disables the SATA5 and SATA6 ports, and I’d lose 2 of the drives in this build.
Then maybe, instead of the M.2 splitter, use a PCIe slot for more SATA ports and put the M.2s in their native slots. It seems this card isn’t getting you as good performance; or just try the other slots and see what you get.
The article you linked shows how to check whether your drives expose TRIM/discard, so I guess you could test whether it still works. Check out some info here as well: Chia Plotting SSD Buying Guide. If you have the M.2s in native slots and use mdadm instead, I would think discard should work appropriately - at least that’s what a bunch of people seem to be doing…
Also, even the PCIe slot adapters aren’t perfect - so far those drives run about 23 min slower than the native M.2 drives; imagine what that Syba card is doing…
They work ok I guess. Not thrilled about the 23-minute delay, but can’t really complain at this price. This was only over 8 plots though, so we’ll see if the delay continues or if the cards even out.
Thank you for the links - ordered both to have them “in stock” when I need them.
I have 3 plots I’m letting wrap up, then I’ll pull the riser card and throw the drives down into those 2 M.2 slots for the time being and see if the performance picks up.
Besides the TRIM problems, all the RAID 0 is doing is making sure that the slowest drive (the one with the highest write amplification that has to move a lot of data around to write, or that is garbage collecting or doing some other maintenance at the moment) dictates the pace for all of your drives.
I would not use RAID 0 on SSDs unless you do not have the space to fit a single plot on a single drive (e.g. you have 120 GB drives).
The last 2 plots are wrapping up here in about 20 mins, I’ll pull the riser and put the SSDs directly on the mobo and plot to them individually to see how that looks speed-wise.
@Yae you are a genius… I’m serious, it’s SIGNIFICANTLY faster, with consistent read/write cycles, now that I’m plotting in parallel to separate drives.
Before, this was 100-200 MB/s consistently - forget about 3 GB/sec spikes.
Now I’m not sure if the riser card is to blame or the software RAID, but either way I’m happy.
I am going to do 1 more test after this: using Intel RST to put the 2 drives into a RAID 0 array in the BIOS. That’ll let me fit 1 more parallel plot onto the array, and then I’ll plot and time that.
@leadfarmer you are right, the speeds settled down to somewhere between 300-600 MB/sec, but that’s about 2x what I was seeing with the PCIe riser card and RAID 0 - so definitely happy about this.
Yup, unless you have some fancy MLC drives, TLC drives only have so much cache to give before their sustained write speed plummets. Thankfully the 980 you said you’re running is great even for sustained writes, even though the TBW is only 1,200 TB. The only better drives at that point would be enterprise MLC.