Had to reload my two plotters

I have to bitch somewhere. I went to reconfigure my two plotting systems tonight. All I wanted to do was go from one NVMe in each to two, and then RAID 0 them. Well, that didn’t go well. For one, the moment I plugged them in, they somehow corrupted my Windows boot drive so badly I couldn’t fix it. And the boot drive isn’t even one of the NVMe drives I was working on.

So after I yelled for a while, I figured I would work out how to put those two in RAID 0 and then reload the system. Well, my motherboard doesn’t do RAID on the NVMe drives. I don’t know if that’s a bad thing, but I had to resort to a “Storage Space” in Windows. It’s supposed to be the same as RAID 0, but I don’t know. So far, though, it seems to be working really well.
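For anyone who wants to script the same setup instead of clicking through the GUI, a rough PowerShell sketch might look like this (the pool and disk names here are made up for the example, and it assumes both NVMe drives are empty and poolable — this will destroy whatever is on them):

```powershell
# Sanity check: list the drives Windows considers poolable
Get-PhysicalDisk -CanPool $true

# Create a pool from the poolable disks, then a "Simple" (RAID 0-like) space
New-StoragePool -FriendlyName "PlotPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

New-VirtualDisk -StoragePoolFriendlyName "PlotPool" -FriendlyName "PlotSpace" `
    -ResiliencySettingName Simple -UseMaximumSize

# Bring the new virtual disk online, partition it, and format it
Get-VirtualDisk -FriendlyName "PlotSpace" | Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "PlotTemp"
```

With two drives, a Simple space should stripe across both by default, but if performance looks like a single drive it’s worth checking the `NumberOfColumns` property on the resulting virtual disk.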

But then I had to reload Windows completely, set up Chia, configure the Storage Space, set up Swar’s, and get plotting. What a night. Something I thought would take 30 minutes took 3 or 4 hours.

Hahaha. Oh man, I have to laugh. I don’t plot on Windows but I can’t count the number of experiments/changes that were only going to “take me a few minutes!” :rofl: Been there!

Software RAID is generally considered better than “fakeraid”, aka the BIOS-level RAID offered by motherboard vendors.

So really you did yourself a favor here IMO!

I briefly had a thought shoot through my head when it happened: “maybe I should just load Linux”. But I knew that would probably add another couple of hours (because of me, not Linux). I already had the media to load Windows, so it wasn’t bad.

That sucks. I feel your pain. Monday I tried to change the primary drive on my machine from an NVMe to an SSD. I also wanted to move some of my cards around to get ready for when I get a Hyper M.2 card. Well, it didn’t go well, and for some reason Linux didn’t recognize the WiFi card. I had a 50 ft Ethernet cable running from my downstairs rack to my upstairs computer. Still didn’t work, and after more research the whole thing was fucked. I stayed up until the wee hours and only got a few hours of sleep. Tuesday morning I canceled all my meetings and spent the day trying to get it all set back up. Had to get a USB WiFi adapter from Micro Center. Huge pain :weary:

When it happened to the first one, I thought “I must have done something in the BIOS that caused this”. So I jumped to the second one so I could at least get one of them back online quickly. But nope, it did the same thing to the second one. It wasn’t me. So glad I didn’t really store anything on those systems.

And now I realize I forgot to turn off sleep mode. :rage:

But surprisingly, I brought the systems back up and the plots just continued. Yay! :partying_face:

Why not keep the NVMe drives separate, and start half your plots on one and the other half on the other?

I think you get the same result. When they are in RAID 0, you are combining their I/O capability, so basically you get double the I/O and double the space. I think it works out the same either way, but with RAID it’s all one volume, so the space is just more flexible. At least that is my opinion. I’m sure someone with more experience/knowledge can fill us in on the specifics.

At least on Linux, with well-configured NVMe namespaces, properly aligned partitions, and a kernel config (interrupts) that supports your setup well, a RAID 0 with 3+ NVMe devices screams. It’s also a much easier surface to manage parallel plotting against. …at least in my opinion.
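As a rough sketch of that Linux route (device names and mount point are examples only — double-check them, since this wipes the drives):

```shell
# Stripe three NVMe drives into one RAID 0 array (destroys existing data!)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=3 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1

# Put a filesystem on it and mount it as the plotting temp directory
sudo mkfs.xfs /dev/md0
sudo mkdir -p /mnt/plot-tmp
sudo mount -o discard /dev/md0 /mnt/plot-tmp
```

Whether to use continuous discard (`-o discard`) or a periodic `fstrim` instead is its own debate, but either way the array shows up as one big fast temp drive.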

This is documented

Hmm… has anyone done comparisons of 2 drives in a RAID 0 array vs 2 separate temp drives?

I have a 980 Pro and a 970 Evo Plus 2 TB NVMe drive, and I wonder how they would perform in a RAID 0.

I would think the performance of the Storage Space would be far behind just using those 2 as separate drives, since a Simple Storage Space’s sequential reads/writes are near identical to a single drive, whereas as separate temp drives you could have the 2 operating fully in parallel.

https://www.michaelstechtips.com/windows-storage-spaces-performance-simple-vs-mirrored-vs-parity/

That article says:

> The Simple Storage Space was faster in random reads and writes but the sequential performance wasn’t any better than a single drive.

So, I think I’m good.

My reasoning was that since Chia plotting is mainly sequential read/write, you are missing out on half of the potential performance: your 2 drives together are now only getting the performance of a single one of your drives.

I don’t think that is the way it works, but I could be wrong.

Also make sure TRIM is correctly passed to each of the drives. Try

Optimize-Volume -DriveLetter <your drive letter here> -ReTrim -Verbose

in a PowerShell window.

Interesting. I did check that trim was enabled after I set this up. Do you know what this command does?

I just looked through the description of the command and that parameter. It doesn’t turn anything on (because, like I mentioned, TRIM is already enabled); this command just resends the TRIM command to that drive and retrims it, basically running a cleanup. But if TRIM was already enabled, it doesn’t really do anything. At least that is my understanding.
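For what it’s worth, one common way to check whether Windows has TRIM enabled at all (separate from whether it gets passed through the Storage Space) is:

```
fsutil behavior query DisableDeleteNotify
```

Here `NTFS DisableDeleteNotify = 0` means TRIM is enabled — the 0/1 logic is inverted because the setting is a *disable* flag.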

Enabled just means it can be run, not that it does. In any type of RAID situation (soft or hard), it still needs to be confirmed that the command is actually passed by the controller to the drives. This retrim is scheduled by default once a week (which I doubt is enough for Chia, but I just asked about that in a separate topic in the Plotting section).
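If weekly turns out not to be enough, one option is to register your own more frequent retrim task — a hypothetical sketch (task name, drive letter, and time are all made up for the example):

```powershell
# Hypothetical example: run a ReTrim on drive D: every day at 03:00
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument '-NoProfile -Command "Optimize-Volume -DriveLetter D -ReTrim -Verbose"'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName "DailyReTrim" -Action $action -Trigger $trigger
```

Run it from an elevated PowerShell, since Optimize-Volume needs admin rights.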