Had to reload my two plotters

When it happened to the first one, I thought “I must have done something in the BIOS that caused this”. So I jumped to the second one so I could at least get one of them back online quickly. But nope, it did the same thing to the second one. It wasn’t me. So glad I didn’t really store anything on those systems.

And now I realize I forgot to turn off sleep mode. :rage:

But surprisingly, I brought the systems back up and the plots just continued. Yay! :partying_face:


Why not keep the NVMe drives separate, and start half your plots on one and the other half on the other?

I think you get the same result. When they are in RAID 0, you are combining their I/O ability, so you basically get double the I/O and the combined space. I think it works out about the same, but with RAID 0 the space is pooled into one volume, which is more flexible. At least that is my opinion. I’m sure someone with more experience/knowledge can fill us in on the specifics.

At least on Linux, with well configured NVMe namespaces, properly aligned partitions, and a kernel config (interrupts) that supports your setup well, a RAID0 with 3+ NVMe devs screams. It’s also a much easier surface to manage parallel plotting against. …at least in my opinion.
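For anyone wanting to try the Linux RAID0 route described above, here is a minimal sketch using mdadm. The device names (`/dev/nvme0n1`, `/dev/nvme1n1`) and mount point are made-up examples; substitute your own, and note this wipes the listed devices:

```shell
# WARNING: this destroys any data on the listed devices.
# Create a two-device RAID0 array with a 256 KiB chunk size:
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=256 \
    /dev/nvme0n1 /dev/nvme1n1

# XFS picks up the stripe geometry from the md device automatically:
sudo mkfs.xfs -f /dev/md0

# Mount with noatime to avoid extra metadata writes during plotting:
sudo mkdir -p /mnt/plot-temp
sudo mount -o noatime /dev/md0 /mnt/plot-temp
```

Whether TRIM/discard passes through the md layer depends on your kernel version, so check that separately on your setup.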

This is documented


Hmm…has anyone done comparisons of 2 drives in raid 0 array vs 2 temp drives?

I have a 980 Pro and a 970 EVO Plus 2 TB NVMe drive, and I wonder how they would perform in a RAID 0.

I would think the performance of the Storage Space would fall far behind just using those 2 as separate drives, since a Storage Space’s sequential read/write performance is nearly identical to a single drive, whereas two separate drives can operate fully in parallel.

https://www.michaelstechtips.com/windows-storage-spaces-performance-simple-vs-mirrored-vs-parity/

That article says

The Simple Storage Space was faster in random reads and writes but the sequential performance wasn’t any better than a single drive.

So, I think I’m good.

My reasoning was: since Chia plotting is mainly sequential read/write, you would be missing out on half the potential performance, because your 2 drives together would only deliver the performance of a single one of your drives.

I don’t think that is the way it works, but I could be wrong.

Also make sure TRIM is correctly passed through to each of the drives. Try

Optimize-Volume -DriveLetter <your drive letter here> -ReTrim -Verbose

in a PowerShell window.

Interesting. I did check that trim was enabled after I set this up. Do you know what this command does?

I just looked through the description of the command and that variable. It doesn’t turn anything on (because like I mentioned, trim is already enabled), this command just resends the trim command to that drive and retrims it. Basically running a cleanup. But if trim was already enabled, it doesn’t really do anything. At least that is my understanding.

Enabled just means it can be run, not that it does. In any type of RAID situation (soft or hard) it still needs to be confirmed that the command is actually passed by the controller to the drives. This retrim is scheduled by default once a week (which I doubt is enough for Chia, but I just asked about that in a separate topic in the Plotting section).
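To check both things from PowerShell, here is a small sketch (the drive letter is an example; substitute your own):

```shell
# Check that delete notifications (TRIM) are enabled; 0 means enabled:
fsutil behavior query DisableDeleteNotify

# Inspect the built-in weekly retrim/defrag task and when it last ran:
Get-ScheduledTask -TaskPath "\Microsoft\Windows\Defrag\" -TaskName "ScheduledDefrag" |
    Get-ScheduledTaskInfo

# Force a full retrim of a volume right now:
Optimize-Volume -DriveLetter D -ReTrim -Verbose
```

The `-Verbose` output of `Optimize-Volume` is what shows whether the retrim actually reached the drive and how much space was released.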

That is not my understanding. I was under the impression that when TRIM is enabled, it actively trims data that is no longer active (i.e., has been deleted). But you can ALSO schedule a trim job to run through and check things out. This is over my head, though, so if you find more information, I’m interested.

As I mentioned, I don’t think this is the way it works, but I was just thinking about my situation and it doesn’t matter. Currently, with the drives in a Storage Space, my bottleneck is the processor, so it is basically writing as fast as it gets data now. Even if I found a way to speed up my I/O on this machine, it wouldn’t matter. I am interested to know whether what you are saying is fact, but it won’t help me at the moment.

I am trying my best to get my head around this, so I watched a video about it. See the video below; if you want to skip all the technical stuff (which is really good), you can go directly to timestamp 18:20. It says that Windows handles this by default: when a file is deleted, the TRIM command is sent automatically. He also mentions that playing around with the optimizations is good, but he doesn’t go into why/how.

Here’s the thing I don’t understand. If the OS sends the right TRIM instructions after every delete, then why on earth would we ever have to schedule a TRIM, and what are all those blocks it is releasing then every time I run it manually?

Don’t know. Would love to know. In my mind it sounds like the way defrag used to work. The system automatically would do a pretty good job of keeping the blocks organized. But if you wanted to run a manual defrag it would do a better overall job. But I just don’t know. What I do know is that I ran that command (the optimize command) you posted on one of my systems and plotting basically froze for about 30 minutes. Odd.