MadMax Plotter high RAM usage?

I’m running MadMax plotter on an older dual Xeon server (24 x Intel(R) Xeon(R) CPU E5645 @ 2.40GHz) with 64GB of RAM. Both my temp directories are on SATA SSDs for now.

My problem is related to RAM usage on the plotting VM. The GitHub README states that the RAM requirement is about 0.5 GiB per thread, but when I run the chia_plot command with the DEFAULT number of threads (4), RAM usage is consistently high at ~48 GiB.
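For reference, the invocation is basically the stock one sketched below (the keys and paths are placeholders, not my real ones):

```
# roughly what I'm running; -r defaults to 4 threads and -u to 256 buckets
# when omitted, so this should match the "default" run described above
./chia_plot -n 1 -r 4 \
    -t /mnt/ssd1/plot-tmp/ \
    -2 /mnt/ssd2/plot-tmp2/ \
    -d /mnt/farm/plots/ \
    -f <farmer_public_key> -p <pool_public_key>
```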

Confusingly, my VM’s CPU usage is also higher than expected: consistently over 4 cores, despite only using the default 4-thread setting.

My RAM is DDR3, if that matters… does anyone with VM experience or MadMax knowledge know what’s going on?

I don’t know what the problem is; the only thing I can tell you is that I tried to run MadMax on a VM (VirtualBox with Ubuntu) and it didn’t run very well at all. I didn’t check the RAM usage, but the plotting speed was truly abysmal, so I gave up on that project. Without the VM it runs fine.
Mind you, this is the first time I’ve used a VM, so I might have just messed something up.

I assigned 20 cores to the VM and 148 GB of RAM, created a ramdisk for temp2, and put a virtual disk on NVMe as temp1.
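In case it helps, the ramdisk part was just a tmpfs mount along these lines (the size and mount point are examples, not necessarily what I used):

```
# create a tmpfs ramdisk big enough for MadMax's temp2 (~110 GiB for a k32 plot)
sudo mkdir -p /mnt/ram
sudo mount -t tmpfs -o size=110G tmpfs /mnt/ram

# then point the plotter's -2 (tmpdir2) at it, e.g.:
#   ./chia_plot -2 /mnt/ram/ -t /mnt/nvme/plot-tmp/ ...
```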

MadMax tends to do this; especially in phase 1 you will see all cores used despite the threads setting.

No, DDR3 is fine. I have DDR3-1600 and it runs fine on that without the VM.

Normally the RAM usage is only affected by the bucket setting.


My plotting time is about 3.5 hours right now, but I plan to add enough RAM to use it for the temp directory and see how much that helps the time. Still, even without that, ~50 GiB of RAM is a lot of overhead that I don’t understand.

What CPUs are you using?

Intel(R) Xeon(R) CPU E5645 @ 2.40GHz

There’s a topic around the forum for old Xeons; you should check it out and see what kind of times people get on those CPUs, to decide whether it’s worth getting the extra RAM.

Good call-out, I think my CPUs are just way too old. They are from 2010, I think. I’m already set on supplementing the RAM, so I’ll update this post with the new speeds just for fun.

I tried Ubuntu and MadMax on my dual Xeon 12c/24t 32GB DDR3 Mac Pro 2012. After about 2 hours on 1 plot using 23 cores, I gave up. I can’t even plot normally on my Mac Pro… too old. I know there is a bitfield disable option in the GUI that is supposed to help older CPUs missing certain instruction sets, but I do not know how to set that option in MadMax. Also, increasing RAM won’t really benefit you here if you aren’t going to try a RAM disk for temp2.

As for VMs, DO NOT set up a plotter on a VM. It is better to just plot directly into the main OS.

Thanks for the reply! Yeah, to be clear, I’m adding RAM specifically to use for the temp directory, instead of the current SATA SSD.

As for VMs, DO NOT set up a plotter on a VM. It is better to just plot directly into the main OS.

Why? Do you have a third anecdotal report of bad performance to add? :stuck_out_tongue: Or even better, do you know why it happens?

I don’t have experience with plotting in a VM, because I know that hardware allocation between a host OS and the VM can be finicky. Passing storage and RAM through to a VM always adds overhead, so storage and RAM inside the VM are slower than on the host.

I should have said specifically not to try plotting on a VM with your CPU setup. Just too old. On my Mac Pro I have tried VMs under both Ubuntu and macOS, running Ubuntu and Windows guests, and they just don’t perform the way I would expect an identical bare-metal system to perform. My dad has the HP version of what is essentially the 2012 Mac Pro 12c/24t, and even it is not good for VMs.


If you want to run MadMax in a VM for some reason, maybe you can try to run it in Docker instead. On a Linux host, it should be around 5% slower than running natively.

If my plotter were also my daily driver, I would also run MadMax in Docker.
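A rough sketch of what I mean, building chia_plot from source inside the image (the build steps follow the upstream README from memory and may have drifted, and the paths/keys are placeholders, so treat this as a starting point rather than a recipe):

```
# build a throwaway image containing the MadMax plotter
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y build-essential cmake git libsodium-dev
RUN git clone --recursive https://github.com/madMAx43v3r/chia-plotter.git /opt/chia-plotter \
 && cd /opt/chia-plotter && ./make_devel.sh
ENTRYPOINT ["/opt/chia-plotter/build/chia_plot"]
EOF
docker build -t madmax-plotter .

# run it with the host's temp and destination directories bind-mounted
docker run --rm \
  -v /mnt/nvme/plot-tmp:/tmp1 \
  -v /mnt/farm/plots:/plots \
  madmax-plotter -r 4 -t /tmp1/ -d /plots/ \
  -f <farmer_public_key> -p <pool_public_key>
```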

Update on my rig!

I am convinced that MadMax uses more than the “0.5 GiB RAM per thread” it claims on the GitHub page. I now have ~189 GiB of RAM in my server and have allocated 110 GiB for the ramdisk. This works great until the occasional OOM, which kills the plotter and usually the chia services as well. This can happen anywhere from the 2nd plot to the 9th.

For this to happen, my 24-thread MadMax job would have to be consuming at least 79 GiB of RAM at certain points during the plotting process (189 GiB total minus the 110 GiB ramdisk). I’ve tried running the MadMax job with a RAM limit, and this results in only the plotter dying (the farmer stays alive).

  • Is it possible that the chia daemon or farmer is having a spike in RAM usage?
  • Is the info above enough to confirm that MadMax is causing the OOM?

Any suggestions?
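For context, a RAM cap like the one I mentioned can be set with a systemd scope, roughly like this (60G is just an example value; on cgroup v1 hosts the property is MemoryLimit= instead of MemoryMax=, and the paths/keys are placeholders):

```
# run the plotter under a memory-capped scope so the kernel OOM killer
# targets it instead of the chia daemon/farmer when memory runs out
sudo systemd-run --scope -p MemoryMax=60G \
    ./build/chia_plot -r 24 \
    -t /mnt/ssd/plot-tmp/ -2 /mnt/ram/ -d /mnt/farm/plots/ \
    -f <farmer_public_key> -p <pool_public_key>
```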

It’s 0.5 GiB per thread at 256 buckets; try increasing the number of buckets.

That’s the default, but I can try setting buckets=256 explicitly.

He meant that you should try setting a higher number of buckets, e.g. 512 (or even 1024, though I haven’t heard of it benefiting anyone in any significant way).
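Concretely, something like this (paths and keys are placeholders; -u is the bucket flag for phases 1 and 2):

```
# more buckets means each sort bucket is smaller, which should lower
# the per-thread RAM footprint during sorting
./chia_plot -r 24 -u 512 \
    -t /mnt/ssd/plot-tmp/ -2 /mnt/ram/ \
    -d /mnt/farm/plots/ \
    -f <farmer_public_key> -p <pool_public_key>
```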