Do you think the NVMe throughput or the size would be the constraint?
If size: I have been experimenting with running more plots on an SSD than it should theoretically be able to take (i.e. 10 parallel plots on a 2TB SSD, rather than 7 or 8). What matters is the average size of each plot in progress, not the maximum size.
I’ve been watching ‘plotman status’ (and also df!) carefully and so far it’s working well. I would recommend higher stagger minutes with this approach, as s*it goes really wrong when the plotting tmp drive fills up (don’t ask me how I know that).
Of course, I am using 2TB SSDs, so the 4TB ones might be more I/O limited. I am not sure.
Ah yes, I have heard that you can go beyond the storage size, but then it’s kind of risky and you need to keep a closer eye on it. I was giving the safety-factor version: 4 plots per 1TB. Set it and forget it.
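For anyone who wants to sanity-check those counts, here’s a rough Python sketch of both approaches. The ~250 GB peak tmp figure for a k=32 plot and the 75% average-usage fraction are just my rough assumptions, not numbers confirmed above:

```python
# Rough sizing sketch for how many parallel plots a tmp drive can hold.
# Assumptions (mine, not measured here): a k=32 plot peaks at roughly 250 GB
# of tmp space, and with a decent stagger the *average* in-flight plot only
# occupies about 75% of that peak, since most jobs are past their biggest phase.

def parallel_plots(tmp_capacity_gb: float,
                   peak_tmp_gb: float = 250.0,
                   avg_fraction: float = 0.75) -> dict:
    """Conservative (every plot at peak) vs aggressive (average usage) counts."""
    safe = int(tmp_capacity_gb // peak_tmp_gb)
    aggressive = int(tmp_capacity_gb // (peak_tmp_gb * avg_fraction))
    return {"safe": safe, "aggressive": aggressive}

print(parallel_plots(1000))  # {'safe': 4, 'aggressive': 5}   -> the "4 plots per 1TB" rule
print(parallel_plots(2000))  # {'safe': 8, 'aggressive': 10}  -> 10 on a 2TB SSD
```

The aggressive number is only reasonable with a decent stagger; if jobs bunch up in their peak phase, actual usage climbs back toward the peak-based figure and the tmp drive can fill up.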
Yeah, I’m still playing around with the plotman settings to get it right. I think I actually need to increase the stagger so I have fewer phase 1 jobs running at a time. At the moment it’s only doing 6 plots on a 2TB drive. I get impatient and reduce the stagger, but that’s actually making it slower!
See above. Currently doing a 45-minute stagger, but I think I actually need to increase it.
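As a back-of-the-envelope for the stagger question: with a fixed stagger, the number of jobs sitting in phase 1 at any moment is roughly the phase 1 time divided by the stagger. The 5-hour phase 1 time below is an assumption I’m plugging in; yours will differ by machine:

```python
import math

def jobs_in_phase1(phase1_minutes: float, stagger_minutes: float) -> int:
    # How many staggered starts fit inside one phase 1 duration.
    return math.ceil(phase1_minutes / stagger_minutes)

phase1 = 5 * 60  # assumed phase 1 duration in minutes; measure your own
for stagger in (30, 45, 60, 90):
    print(f"stagger {stagger} min -> ~{jobs_in_phase1(phase1, stagger)} jobs in phase 1")
# With the assumed 5 h phase 1: 30 min -> ~10, 45 -> ~7, 60 -> ~5, 90 -> ~4
```

So going from a 45-minute to a 90-minute stagger roughly halves the jobs fighting over phase 1 at once, which is why a shorter stagger can end up slower overall.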
OK, now you should be able to do 75 in parallel, because RAM is the next limit. If you get past that, it’s back to the NVMe drives at 80. The CPU is limited to the 128 threads that it has.
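For what it’s worth, here’s how I’d reconstruct those three limits in a few lines. The 256 GiB of RAM, ~3.4 GiB per job, and 20 TB of tmp space are my guesses at the inputs behind the 75 and 80 figures, not numbers stated above; the 128 threads is just the 3990X spec:

```python
# Assumed inputs (my guesses, not stated above): ~3.4 GiB of RAM per plot job
# (roughly the default plotter buffer), 256 GiB of RAM, and 20 TB of NVMe tmp
# space at the safe 4-plots-per-TB rule.
ram_gib, gib_per_plot = 256, 3.4
nvme_tb, plots_per_tb = 20, 4

ram_limit = int(ram_gib // gib_per_plot)   # 75 jobs before RAM runs out
nvme_limit = nvme_tb * plots_per_tb        # 80 jobs' worth of tmp space
cpu_limit = 128                            # hardware thread count of the 3990X

print(min(ram_limit, nvme_limit, cpu_limit))  # 75 -> RAM is the binding limit
```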
So is 90 minutes still the accepted stagger time? What is the rough formula for how many plots you want to have in Phase 1 at any time? If that has been posted somewhere, I missed it. Thx!
I actually just built a rig using the 3990X (MSI Creator TRX40 mobo, Sabrent 4TB NVMes, only 4x32GB RAM).
Did you have any issues getting started with it? In first testing I’m seeing the CPU jump to 100% with 30 threads allocated (5 plots x 6 threads per), so I’m thinking there’s a setting I’m missing somewhere. Have you seen anything similar?