Is my Mad Max slow?

Hi! I have just recently gotten into Mad Max plotting on Windows. I have an AMD R5 2600 + 16GB RAM, a 1TB NVMe temp volume (2x 512GB in RAID), and my storage HDD. My Windows install is on another 512GB NVMe. I can typically get about 5 or 6 plots every 7 or 8 hours if I use my NVMe RAID as the temp drive and store locally to the HDD.

In Mad Max, I have it set to -r 11 -u 256 -v 128. I only use 11 of my 12 threads (6 cores / 12 threads) because my testing suggests that using all 12 pegs the system at 100% and everything tends to lock up/freeze. With this, I max out at roughly 1 plot every 1.5 hours, or about 6 plots in 9 hours.
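
For reference, the full command I run looks roughly like this (the drive letters and keys here are placeholders, not my exact setup):

.\chia_plot.exe -r 11 -u 256 -v 128 -t R:\ -d D:\ -f <farmer_public_key> -c <pool_contract_address>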

Any advice is appreciated. Thanks everyone.

90 mins is slow for MM. You should aim for faster plotting with that hardware. Set -r 6 and re-run. Post up results.
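
For example, keeping the rest of your flags the same, something like this (drive letters and keys are placeholders, adjust to your own paths):

.\chia_plot.exe -r 6 -u 256 -v 128 -t R:\ -d D:\ -f <farmer_public_key> -c <pool_contract_address>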

.\chia_plot.exe -n 2 -r 16 -u 512 -v 256 -t R:\ -2 U:\ -K 8 -d S:\

I just started yesterday with MadMax on Windows. I've tried a number of tweaks: buckets, threads, drives, and they all plot nearly the same, i.e., between 1775 and 1805 seconds (about 30 min).

I settled on the above for now with 2 NVMes (a 980 Pro 1TB and a Silicon Power 1TB, both PCIe 4.0, on a Threadripper PRO 3955WX). Even though I have 16 cores/32 threads, specifying 32 vs 16 threads had no effect on times, surprisingly. I plan on trying a memory cache next.
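
The plan for the memory cache is a RAM disk passed as -2. A rough sketch with the ImDisk toolkit on Windows, assuming enough spare RAM; the 110G size, the T: letter, and the other paths here are just placeholders:

imdisk -a -s 110G -m T: -p "/fs:ntfs /q /y"
.\chia_plot.exe -r 16 -u 512 -v 256 -t R:\ -2 T:\ -d S:\ -f <farmer_public_key> -c <pool_contract_address>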

As to yours being slow, what NVMes do you have? For an R5 2600 those times are maybe not so bad, but they could get better with some tweaking.

I have 2x 512GB Silicon Power drives in a 1TB RAID. My boot NVMe is an ADATA 512GB. My question is: do I need more RAM for this? I am not sure 16GB is enough.

On my platform, short of going with an R9 (BIOS update needed), the max RAM for the R3/R5/R7 is 4x 16GB sticks, or 64GB. For casual gaming (what I use the system for when it isn't plotting), I don't think anything above 16GB is needed.

The SP US70 is the NVMe I have. I'd read reviews saying their earlier NVMes did not have a good write cache. Even this one is not the best (according to this review > Silicon Power US70 M.2 NVMe SSD Review: The Ultra-Value M.2 Stick | Tom's Hardware), but it seems decent enough and roughly similar to the 980 Pro, which is pretty good as well.

As to RAM, for MM you don’t need much at all, so you shouldn’t need to upgrade that. My TR is showing only ~10GB total used when plotting. For regular GUI plotting, more memory is useful for parallel plotting, however.

Update: as I watch an MM process, total system memory use varies up to ~24GB at times. At least with my config settings, 16GB is not enough, so for you more memory might help.

Yeah. I think I should be getting about an hour max per plot. It could be my NVMe drives going bad (as in they aren’t aging gracefully). I did a -r 6 as my 1st test per recommendations I saw online and I got around 2 hours. I tried a -r 12 (all threads) and got somewhere between 90 mins and 2 hours. That is why I settled on -r 11.

Your drive in that review has roughly 1/4 of the sustained write performance of the 980 Pro, which is also shown in the same review; it's nowhere near the 980 Pro for Chia plotting. I assume the ADATA drive the OP has is in the same boat, and it's paired with an older Ryzen chip in the 2600 as well.

Also if you guys are running Windows it’s going to be even slower.

Under real-world MadMax plotting conditions, I observe them to perform similarly. The review graph shows that while the Samsung starts writing at a 5.2 GB/s rate, at only ~25GB written it falls off to a sustained ~2 GB/s. The SP, on the other hand, initially writes at only ~4.2 GB/s, but keeps that rate roughly 7x longer, completing ~175GB before its cache depletes.

To test actual drive performance in MadMax, I use the "-G true" parameter, and the times are identical as the plotter regularly swaps which drive is used for -t and which for -2.
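
For anyone curious, the test run is roughly the same command as above with the toggle added (drive letters are placeholders); -G tells the plotter to alternate the -t and -2 directories between consecutive plots, so both drives see the same workload over a run:

.\chia_plot.exe -n 2 -r 16 -u 512 -v 256 -t R:\ -2 U:\ -G true -K 8 -d S:\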

Regardless, 1775 seconds on Windows is respectable enough, and I hope for even faster results with a RAM disk.

Try without hyperthreading. Or leave hyperthreading on but use -r 6. Also, you could try plotting on an NVMe with strong sustained write performance, something like an Intel Optane or an Intel DC P4600.

Probably. Windows can have some pretty funky issues with it. I recently did a video where, just by switching from Server 2016 to Server 2022, I saw an improvement of over 15 minutes in plot times. Everything else was the same. Then some other people in the comments started trying it out and, guess what, going from Win10 to Server 2022 shaved 10 minutes off one guy's plot times. I'm not 100% sure what is going on here, but I think it is NUMA related and down to the OS.

Dual Xeon E5-2670 (2x 10 cores, 40 threads total), 128GB DDR3 ECC RAM (110GB RAMDISK), 2TB Seagate NVMe
-t - NVMe drive
-2 RAMDISK
-r 40
-u 256
-v 128
Around 100 min for one plot. I think this is too slow. Suggestions?
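
For completeness, the full command is roughly this (Linux-style mount points assumed; on Windows they would be drive letters, and the keys are placeholders):

./chia_plot -r 40 -u 256 -v 128 -t /mnt/nvme/ -2 /mnt/ram/ -d /mnt/hdd/ -f <farmer_public_key> -c <pool_contract_address>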

Windows?
Are you using an HP Z840? I have almost the same setup.
2x Intel Xeon E5-2699 V4 (22C/44T, 2.2/2.4GHz)
128GB RAM DDR4 ECC
Samsung 970 EVO Plus 2TB
-t - NVMe drive
-2 RAMDISK
-r 32
-u 256
-v 128

Drop your -r down to 16. I have 88 threads and have tried all combinations.

I create a plot in 2200s.

I have a single E5-2699 V3 (18 cores/36 threads) with 128GB RAM (Dell 5810)

-t - 3 x 200GB S3710 in Raid 0
-2 RAMDISK
-r 18
-u 256
-v 256

Total plot creation time was 2114.41 sec (35.2402 min)

-t - 2 x 2TB Firecuda 520 Raid 0
-2 RAMDISK
-r 18
-u 256
-v 128

Total plot creation time was 1920.49 sec (32.0081 min)

I’m going to sell the NVMe drives and get some money back, but I wondered if I could speed things up when using the 3 x S3710 in RAID 0; adding a fourth SSD doesn’t make much difference in CrystalDiskMark.

I’ll have to try -r 16 and see if that makes a difference.

I’m gonna sound like a typical fanboy right here, but: just move to Ubuntu/Debian. It is really not worth your time to stay stuck on Windows. Remember that the “madmax plotter for windows” is just a port to Windows; natively it is written for Linux-based OSes, so obviously it will work better there.

I was a fan of Ubuntu back when I was in high school and college. I game occasionally and the games I do play are on Steam for Windows. That is why I am sticking to Windows for now.

Not to go too far off topic, but have you tried Steam with Proton lately? I game on Ubuntu, including playing Steam games that were released as Windows-only - Proton makes many (maybe most) games just work. It’s come along massively in the last few years.

You can use ProtonDB to check whether the games you play are likely to work (won’t link to it because it’s so off topic).

More on topic, I found it better not to try to use the machines I’m plotting with for anything else; that way you can devote 100% of them to the task, and it makes it less likely you’ll either crash the PC or accidentally reboot it (which happened more times than I’d care to admit).

I get 3.4k-second (~56 minute) plots on my lowest-spec plotter: dual Xeon X5670, hyperthreading enabled, 12 cores, 24 threads, 24GB of RAM, running with -r 24 -u 256, no RAM disk, tmp on a Sabrent 1TB NVMe (an old gen 3 one, in a riser). I don’t expect much more from this spec.
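
For reference, the invocation on that box is roughly this (mount points and keys are placeholders); with no -2 given, the plotter just uses the -t directory for both temp dirs, if I understand the defaults right:

./chia_plot -r 24 -u 256 -t /mnt/sabrent/ -d /mnt/farm/ -f <farmer_public_key> -c <pool_contract_address>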

I am waiting for them to support Easy Anti-Cheat (expected before Steam Deck releases). I am not moving to Ubuntu as my main driver before that :sweat_smile:

Would doing -r 11 -u 256 -v 128 -K 2 help? The key here is that I added the -K 2. Thanks!
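
If I’m reading the options right, capital -K is the phase 2 thread multiplier (the command earlier in the thread used -K 8), so the full run would look roughly like this (drive letters and keys are placeholders):

.\chia_plot.exe -r 11 -u 256 -v 128 -K 2 -t R:\ -d D:\ -f <farmer_public_key> -c <pool_contract_address>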