Mad Max Plotter, some test results, some general info

That's not for MadMax; it was for the original plotter.

Need some help with the MadMax plotter. My setup is a 5950X with 128 GB RAM (2666 MHz, not the fastest), with 110 GB set up as a ramdisk on Ubuntu. I'm using a Seagate FireCuda 520 as the temp drive with -u 512. I've tried all sorts of different thread counts, from 16 to 32, but the MadMax plotter crashes on either P1 or P3.
Any help is appreciated.
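For reference, a ramdisk like the one described above is usually set up on Ubuntu with tmpfs; the mount point and 110 GB size below are assumptions matching the post, so adjust them to your system:

```shell
# Create a mount point and a 110 GB tmpfs ramdisk
# (size matches the 110G mentioned above; adjust to your RAM).
sudo mkdir -p /mnt/ram
sudo mount -t tmpfs -o size=110G tmpfs /mnt/ram

# Verify the mount and its size
df -h /mnt/ram

# To fully clear it between runs, unmount it (or just delete the files):
sudo umount /mnt/ram
```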

Yes, but it shouldn't need more than that; for MadMax, 220 GB of temp disk is enough for P1 and P2.

Is your ramdrive cleared of existing temp files/plots?

did you also try different bucket sizes?

Also, you might want to run a memory test; my earlier crashes turned out to be faulty memory.

I unmount the RAM drive after every crash, which clears it. As for the memory test, I did one 3-4 days back but can't remember which command I used. I'll run it again. What's the best way to do it on Linux?
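One common way to test memory from within a running Linux system is `memtester`; the chunk size and pass count below are just example values. For a more thorough test of all of the RAM, booting memtest86+ from the GRUB menu is the usual approach, since the OS itself occupies part of the memory:

```shell
# Install memtester and test a 4 GB chunk of RAM for 1 pass.
# Testing all 128 GB at once isn't possible while the OS is running;
# run several chunks, or boot memtest86+ for a full offline test.
sudo apt install memtester
sudo memtester 4G 1
```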

NVMe is only a protocol used for SSDs on top of PCIe…
Put any NVMe-capable SSD into your PCIe slot and it should work.
If your BIOS/EFI doesn't support NVMe, you cannot boot from such a device.
You only need something like the following to put an M.2 drive into a normal PCIe slot:

https://www.amazon.com/s?k=pcie+m.2

Anyone here experience with running Madmax in a VM?

I tried to make a VM with VirtualBox (host and guest both Ubuntu 20.04):
Assigned 16-20 CPUs (the machine has 20 cores, 40 threads)
148 GB RAM
Created a 110 GB ramdisk for temp2
Virtual disk on the NVMe for temp1
Added the destination drive as a shared folder

But the plotting speed is truly dismal. I don't know the actual total time, because the first tables of phase 1 already took so long that I quit :sweat_smile:

It looks like it can only support real cores and not hyperthreading?

Anyone have good results running Madmax inside VM?

I haven't tried that, but I'd guess it would be super slow. I recommend running Linux as a native OS, or at least using the Windows 10 Subsystem for Linux (WSL).

Adding an extra 32 GB of RAM is not a good idea, I think. It can cause unbalanced RAM allocation between CPUs and even reduce the RAM clock speed. I've tested this on my HP DL360p Gen8. FYI, it's possible to plot with only 256 GB of RAM using Ubuntu and disabling swap space.
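Disabling swap, as mentioned above, can be done temporarily with `swapoff`; making it permanent means removing the swap entry from /etc/fstab. A sketch, assuming a standard Ubuntu setup:

```shell
# Turn off all swap for the current session
sudo swapoff -a

# To make it permanent, comment out any swap line in /etc/fstab
# (back the file up first; this sed matches lines with a "swap" field):
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab

# Confirm swap is off (the Swap row should show 0B total)
free -h
```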

@juppin
thank you for your reply. I was asking about a card for a Dell PowerEdge R820 server. For a personal PC I can get a PCIe adapter. Does anybody know which card I need to connect an NVMe enclosure?

Thank you for your reply. I have 2 slots available for each CPU, and 4 slots in the middle. If I add 16 GB of RAM per CPU, would it help avoid unbalanced RAM allocation? Currently I have 16 x 16 GB RAM modules installed.

If you already have 24 x 16 GB RAM modules in your system, well, 384 GB is more than enough in every way!
But if you have 256 GB (16 x 16 GB), that works too with Linux. Adding an extra 16 GB module to every CPU could, depending on your machine, either help a bit or decrease the memory clock. Check the datasheet for your server, or if you already have the modules, give it a try: install them accordingly and check the memory speed.
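One way to check the installed modules and their configured clock after a change like this is `dmidecode`, which is available on most Linux distributions:

```shell
# Show size and rated/configured clock for each installed DIMM slot
sudo dmidecode -t memory | grep -i -E 'size|speed'
```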

Just search for a Thunderbolt PCIe card. It will pop up.

I’ve been using 128 buckets for phase 1 because I only have a 480GB SSD as a temp drive and thought that’s the maximum you could use at that size. Am I wrong about that?

Thanks.

Do you mean the U.2 format?

The buckets only change the RAM usage, not the temp size.

The default of 256 buckets uses about 0.5 GB of RAM per thread at most.

Thanks. Is there a benefit in plot time with more buckets or just less RAM usage?

(EDIT: Meant to reply to Voodoo)

It depends; some people get better times by tweaking the bucket count, but I didn't see much difference myself. I guess it's very system-specific whether and how much it matters.

Okay, thanks a lot. I'll interrupt the plotting tomorrow and try a plot with 256 buckets for phase 1 to see if it makes any difference.
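For reference, the bucket counts discussed above map to MadMax's `-u` flag; the paths, thread count, and keys below are placeholders, not values from the thread:

```shell
# MadMax chia_plot with an explicit bucket count (paths/keys are placeholders):
#   -r  number of threads
#   -u  number of buckets for phases 1-3 (default 256)
#   -t  temp directory (fast SSD), -2 temp2 directory (e.g. the ramdisk)
#   -d  final destination for the finished plot
./chia_plot -n 1 -r 16 -u 256 \
    -t /mnt/nvme/ -2 /mnt/ram/ -d /mnt/plots/ \
    -p <pool_key> -f <farmer_key>
```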