The best I could get out of it running parallel plots was 8 in parallel, yielding 23-24 plots per day. At 40 minutes per plot, this should get me 36 per day. Much better.
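The arithmetic behind that estimate is straightforward and can be sanity-checked in a one-liner (the 40-minute figure is the plot time quoted above):

```shell
# Plots per day at a fixed minutes-per-plot rate: 24h = 1440 minutes.
minutes_per_plot=40
plots_per_day=$(( 24 * 60 / minutes_per_plot ))
echo "$plots_per_day"   # 1440 / 40 = 36
```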
Hi folks, I mistakenly used the master public key instead of the pool public key when creating plots in the MadMax plotter. The farmer key is correct.
So are all my plots useless?
Because HPool needs to sign plots via the pool public key, and I created the plots with the master public key.
I’m using standard BIOS settings with the Supermicro motherboard. As mentioned, the CPU runs at 3.3 GHz at full load, which appears to be the standard max turbo for fully loaded cores (vs. single core, which can reach 3.6 GHz). It uses this speed with both the powersave and performance governors. The RAM is 1600 MHz registered ECC, nothing special, and is in fact actually running at 1333 MHz.
I suspect this kind of speed (using a slow HDD or a slow/moderate-performance SSD plus 256 GB of RAM) can only be achieved on Linux. Linux has a ramdisk implementation (tmpfs) which is basically just the internal page cache exposed as a filesystem, and can seamlessly use memory either for normal disk cache or for the ramdisk. Windows doesn’t have a native ramdisk implementation, and I suspect the third-party ramdisk solutions are not as fast as they would be if MS implemented a similar page-cache-backed ramdisk in the tmpfs style. If you have very fast SSDs, Windows should be fine.
If you have 256 GB of RAM, I believe the simplest configuration would be this:
Run Linux
Ensure a decent-sized swap file or swap partition on NVMe, say 32-64 GB (this is just to handle a small amount of potential spillover)
Configure tmpfs to have a 256 GB max size
Use tmpfs (i.e. /dev/shm) for both tmp1 and tmp2
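On most distros those steps boil down to a few commands; a minimal sketch, assuming a 64 GB swap file at `/swapfile` on the NVMe root filesystem (the path and sizes are placeholders, adjust to your hardware):

```shell
# Create and enable a 64 GB swap file to catch any tmpfs spillover
sudo fallocate -l 64G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Enlarge /dev/shm (tmpfs) to a 256 GB max size for this session
sudo mount -o remount,size=256G /dev/shm

# Then point both plotter temp dirs at it, e.g. "-t /dev/shm/ -2 /dev/shm/"
```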
With this pure tmpfs configuration, I got a plot time of 1449s (24m 09s).
There are many variations that can achieve similar results, but this is probably the simplest. With VM tuning, I was even able to use 15k SAS drives for both tmp1 and tmp2 (two 15k SAS drives total) and still get a plot time of 1594s (26m 34s). This is with the following tuning:
This tuning should never be used for anything except madmax plotting or something similarly use-case-specific. It is likely even excessively aggressive, but it’s what I used when testing the 15k SAS + 15k SAS setup.
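The exact values used above aren't reproduced here. Purely as an illustration, VM tuning of this kind is normally done through the kernel's writeback sysctls; the knobs below are real, but the values are assumptions, not the author's:

```shell
# Let dirty pages pile up in the page cache and flush as late as possible,
# so plotter temp I/O mostly stays in RAM instead of hitting the slow disks.
# WARNING: unsafe for general use; a crash or power loss can drop a lot of data.
sudo sysctl vm.dirty_background_ratio=50
sudo sysctl vm.dirty_ratio=90
sudo sysctl vm.dirty_expire_centisecs=30000
sudo sysctl vm.swappiness=100
```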
PS. feel free to DM me on discord if you have further questions: dynafire#2992
-r, --threads arg Number of threads (default = 4)
…
Make sure to crank up <threads> if you have plenty of cores; the default is 4. Depending on the phase, more threads will be launched; the setting is just a multiplier.
This is not a strict directive for how many threads to use, just a multiplier.
According to my observations, with -r 16, 30 threads run simultaneously while plotting. I have not tried it, but I suspect that increasing the -r value further will either not affect performance or will have a negative impact.
Have you tried, or seen anyone report results with, more than 16 threads?
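For reference, a typical madmax invocation overriding `-r` might look like this; the directories are the tmpfs setup discussed above, and the keys are placeholders:

```shell
./chia_plot -r 16 \
    -t /dev/shm/ \
    -2 /dev/shm/ \
    -d /mnt/plots/ \
    -f <farmer_public_key> \
    -p <pool_public_key>
```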