Mad Max Plotter, some test results, some general info

My buddy went from 20 to about 27 plots per day on a 9700K, and I went from 25-27 to 40+. He is CPU/RAM bottlenecked at this point on the 9700K; remember, the 9900K has about 40% more processing power than the 9700K.

Wow… so with both of my setups combined I should easily get to 50+ plots per day?
Sounds nuts to me.

Take the fast one for temp2, but if you are going to use Mad Max, I'd rather use Optane drives.
If I had only known this was going to be better and faster, I would have purchased 128GB of RAM or 2x Optane 900p 280GB; they have much higher endurance and cost about the same as one 2TB Gen4 SSD.
I'm even trying to sell 2 of my 4 SSDs.

I just bought the WD Black AN1500 for around 500€, so this is definitely going to be in my setup =)

With a 5800X I can do almost 45, so you probably can do it.

Those times suck for your setup. Something is wrong there unless you're running Windows.

I'm on Win10, but a very stripped-down, clean build. Still, I'm assuming I'm leaving 5 to 10% on the table vs Linux.

Another strange observation. I swapped out my 64GB of DDR4-3200 for what should be 128GB of DDR4-3600. XMP shows the proper 3600 profile, but my Tomahawk constantly gives the OC fail message when I try turning on XMP, so it keeps falling back to 2400. I spent some time tonight trying to manually tune the voltage, clock speed, and timings, but no success yet.

Anyway, I went ahead and tried the RAM drive approach for my temp2 and kept my Gen3 NVMe for my temp1. Even at 2400MHz I figured the RAM disk would be much faster than using another Gen3 NVMe SSD for temp2, but it was actually slower. Major bummer; too late for any more testing tonight.

10%? You're doing 36 plots on a 5950X. I've got a 5900X doing 56 plots with 2 drives. My 5950X has one more drive, which I'm not even sure is relevant, and it's doing 65 plots. 10% my ass, haha.

Does going to 256 buckets wear out the NVMe quicker?

65 plots on a 5950X sounds really weak, tbh. I don't know why most of you guys can't get it to work right, especially with Gen4 setups. I know someone with a 5950X who was getting 90+ per day running parallel plots on Swar's, and with Mad Max he did a couple of plots in under 800 sec.

I'm using the Mad Max plotter on Windows 10 with dual Xeon 2697v3 (28 cores / 56 threads)
and 2x NVMe in RAID 0 for the temp1 and temp2 drives. The results are not so good:

128 buckets: averaging about 60 minutes
256 buckets: averaging about 51 minutes

I'm watching closely for the bottleneck; it could be Windows 10 or my NVMe drives.
When entering phase 3 the CPU only reaches about 15-20%.

Any idea why that is?

Just tried it on my 3970X with 128GB of RAM:
110GB ramdisk as temp2 and 2x PM983 in RAID 0 as temp1.
Total plot creation time was 963.89 sec on the first run.
Maybe sub-900 is possible with tuning and faster temp1 drives…
Awesome tool, many thanks for the contribution 🙂
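
In case it helps anyone reproduce a similar temp layout on Linux, here is a rough sketch. The device names (/dev/nvme0n1, /dev/nvme1n1), mount points, thread count, and plot destination are placeholders you would adapt to your own hardware, and the 110G tmpfs size assumes 128GB of system RAM:

sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
sudo mkfs.xfs /dev/md0 && sudo mkdir -p /mnt/t1 && sudo mount /dev/md0 /mnt/t1
sudo mkdir -p /mnt/ram && sudo mount -t tmpfs -o size=110G tmpfs /mnt/ram
./chia_plot -n 1 -r 32 -u 256 -t /mnt/t1/ -2 /mnt/ram/ -d /path/to/plots/ -f <farmer_key> -p <pool_key>

Here -t and -2 are Mad Max's temp1/temp2 directories, -r the thread count, -u the bucket count, and -d the final destination; the directory arguments need the trailing slashes.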

Do you think 2x Corsair MP600 Core PCIe 4 in RAID 0 would perform similarly with your system? Faster/slower? Cheers

Could you please specify the exact model of RAM you use, the motherboard model, as well as the OS you got these results on? I'm interested in building a similar system. Cheers

Would ECC RAM be better for a ramdisk versus non-ECC?
I'm about to order 128GB, and the price difference is not too big between Kingston 32GB 2666MHz ECC sticks and Corsair 32GB 3200MHz non-ECC sticks.

The ECC is CL19, though, whereas the non-ECC is CL16.

For dual-CPU systems I would advise running 2 separate instances, each affinity-bound to one of the CPUs.

How would you limit each Mad Max process to a single CPU in Linux?

With one 2697v3 using 14 cores, 128 buckets, and two 970 EVO Plus drives as temp drives, I get around 47 minutes. So I think a 2-CPU system should be faster, if the NVMes are not the limiting factor.

If the MP600 can sustain high write speeds, it will improve the times.
However, many consumer drives can't sustain the write speeds you see on spec sheets.

For Windows:

cmd.exe /c start "Program Name" /affinity <COREMASK> "Full path of application file"

where <COREMASK> is the affinity mask in Hex.

e.g., if your machine has 2 CPUs with 16 threads each, CPU0-CPU15 for the first processor and CPU16-CPU31 for the second, then your mask for the first processor would be '00000000000000001111111111111111' (0x0000FFFF), and your mask for the second processor would be '11111111111111110000000000000000' (0xFFFF0000).
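
As a concrete sketch of the dual-CPU case above (the install path, drive letters, temp directories, and thread counts are hypothetical and need adapting to your own setup; the mask is passed as a plain hex value):

cmd.exe /c start "madmax CPU0" /affinity FFFF "C:\madmax\chia_plot.exe" -r 16 -t D:\temp1a\ -2 R:\temp2a\ -d E:\plots\
cmd.exe /c start "madmax CPU1" /affinity FFFF0000 "C:\madmax\chia_plot.exe" -r 16 -t F:\temp1b\ -2 S:\temp2b\ -d E:\plots\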

For Linux:

Pretty much the same thing.

taskset <COREMASK> <EXECUTABLE>
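
For example, pinning two instances on the same hypothetical 2x16-thread machine (paths and thread counts are placeholders again):

taskset 0x0000FFFF ./chia_plot -r 16 -t /mnt/t1a/ -2 /mnt/ram1/ -d /plots/ &
taskset 0xFFFF0000 ./chia_plot -r 16 -t /mnt/t1b/ -2 /mnt/ram2/ -d /plots/ &

taskset also accepts -c with a core list if you prefer ranges over hex masks (taskset -c 0-15 and taskset -c 16-31). On a dual-socket board it may also be worth running each instance under numactl (e.g. numactl --cpunodebind=0 --membind=0) so each plotter's memory stays on its own node, which taskset alone does not handle.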
