Madmax plotting RAM only

Try 244G and 512 buckets
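For anyone setting this up from scratch, a tmpfs ramdisk of that size can be created like this (a sketch; the size and mount point are examples, and this needs root):

```shell
# create a mount point and a 244G tmpfs ramdisk for madmax temp files
sudo mkdir -p /mnt/ram
sudo mount -t tmpfs -o size=244G tmpfs /mnt/ram
# tmpfs only consumes RAM as files are written, but madmax will
# fill most of it during a plot
```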

It might be because the generated plot is still in the ramdisk when the next plot starts.

Remember, the plot in the temp drive needs to be transferred out of the ramdisk to free up space, but depending on where your final directory is, you could be running out of space before the plot is transferred.

Try adding the -w flag to madmax to tell it to wait for the transfer to complete before starting the next plot.
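For reference, a sketch of what that invocation can look like (flags as documented in the madmax README; the keys, paths, and counts here are placeholders):

```shell
# -t/-2 point at the ramdisk, -d at the destination, and -w makes
# madmax wait for the copy to -d to finish before starting the
# next plot. -r is threads, -u is buckets, -n -1 plots forever.
chia_plot -n -1 -r 16 -u 256 \
  -t /mnt/ram/ -2 /mnt/ram/ -d /mnt/ssd-buffer/ \
  -w -f <farmer_key> -p <pool_key>
```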

If you copy the plot to a fast SSD you should be ok without the -w argument. If you have a fast CPU, you will need a fast SSD.

With a single E5-2690v2 no problem if I copy at 700MB/sec (36 min plot)
With a dual X5675 I can copy at 272MB/sec (52 min plot)

I tried -w. Still getting killed as if the OS is running out of memory. Swap disabled. OOM errors in dmesg.

Dual 2690 v2 in the setup I am referring to. Plots are being copied from the ramdisk to a 1TB NVMe SSD buffer disk in a PCIe slot, an SN750 1TB to be precise. Copy speeds are around 1700 MiB/s IIRC.

I have a topic about this! MadMax Plotter high RAM usage? - #14 by just_toby

I’m getting the same behavior with 192GiB of RAM and using only 110GiB for the ramdisk.

I haven’t tried using the -w flag yet! Will report back with results.

I have a fast NVMe SSD that I could plot to instead, but I’m not sure how I would then copy the plots to the final destination hard drive.

Then it’s starting to sound like there isn’t enough memory left to run the madmax plotter while using a 248GB ramdisk as a temp drive.

If you only have 8GB left to run the system you might want to try reducing the number of threads and buckets so that it doesn’t exceed 8GB. No idea how to calculate that though…

Or you could enable swap. It might chug through slowly, but it’s better than failing completely. You’ll have to experiment. Maybe you can put the swap file on the NVMe drive. Might just be quick enough for some decent results.
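A swapfile on the NVMe can be set up like this (a sketch; the 16G size and path are illustrative, and these commands need root):

```shell
# allocate, secure, format, and enable a swapfile on the NVMe
sudo fallocate -l 16G /mnt/nvme/swapfile
sudo chmod 600 /mnt/nvme/swapfile
sudo mkswap /mnt/nvme/swapfile
sudo swapon /mnt/nvme/swapfile
# to persist across reboots, add a line like this to /etc/fstab:
# /mnt/nvme/swapfile none swap sw 0 0
```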


Btw, what are your thread and bucket settings in madmax?

According to the MM GitHub page it should only use about 0.5GB per thread with 256 buckets. If you reserve 2GB for system processes and have 6GB for MM, then you should be able to run it at 12 threads and 256 buckets.

Increasing the number of buckets will reduce the RAM usage.

So fewer threads with more buckets and swap on should work.
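The back-of-envelope math above can be written as a tiny helper (the ~0.5 GiB per thread at 256 buckets figure is the README claim quoted above, not something measured here):

```shell
# Estimate how many madmax threads fit in the RAM left over after
# the ramdisk, given an OS reservation and a per-thread cost.
max_threads() {
    local free_gib="$1" os_gib="$2" per_thread_gib="$3"
    awk -v f="$free_gib" -v o="$os_gib" -v p="$per_thread_gib" \
        'BEGIN { print int((f - o) / p) }'
}

max_threads 8 2 0.5   # 8 GiB free, 2 GiB reserved for the OS -> 12
```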

You would need a script that checks your drive periodically for plots and, if found, moves them to the HDD. I think there was another topic around here; I’ll see if I can dig it up.
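A minimal sketch of such a watcher (paths, the one-minute age threshold, and the 60-second interval are all examples, not anything madmax ships with):

```shell
# Move finished plots off the SSD buffer to the destination HDD.
move_plots() {
    local src="$1" dst="$2"
    # -mmin +1 skips files modified within the last minute, so a
    # plot that is still being written out is left alone.
    find "$src" -maxdepth 1 -name '*.plot' -mmin +1 \
        -exec mv {} "$dst"/ \;
}

# Run it in a loop, e.g. inside tmux or a systemd service:
# while true; do move_plots /mnt/ssd-buffer /mnt/hdd01; sleep 60; done
```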



I’m starting to think I was way off the mark with that idea. Even if the ramdisk were full, it would show up as ‘Out of disk space’ rather than ‘Out of memory’, so scratch that idea.

If he got “Killed” he was out of memory.

As uChiaFarmer said you should try with 8 threads, then go higher till you get killed.

Ok. I’ll try to tinker some when I get home. Probably gonna kill my plot times going that low on threads huh?

Yeah, I tried enabling swap. What was shipped with my default Debian install, I think, was 1 GB of swap. I might try making a bigger swap partition tonight on my NVMe and see if it works better. I’m trying to save my NVMe drives by RAM-only plotting; I have a lot more to plot. The bucket counts I’ve been using are 512 and 256. I can’t remember their respective flags. As for threads, I’ve been using 38, so that might be my issue. I’m gonna see if I have another 8 or 16 GB stick I can throw in tonight and try.

Hard to say. I run my plotter on 12 threads and get 33 min plots (Ryzen 3900X, PCIe 4.0 MP600 Pro 2TB NVMe). No idea what your plotting speed is.

I’ve played around with a bunch of different older Xeon processors and MadMax. I’ve found that setting the thread count (-r in madmax) to about 2 to 6 higher than the logical core count is the sweet spot. So for example, say you have dual 10-core processors for a total of 20 cores. I would start with setting -r to 20. Then if that works, try 22, 24, and 26. You might find that -r 22 works better than -r 24.

Whenever I would set the thread count any higher than that, performance would actually degrade. With that said, I think it’s either setting the thread count too high, or possibly that it is trying to offload the created plot while starting a new plot (and not having enough disk space). I was going to try the same thing (running MadMax all in RAM with 256GB of RAM) but decided against it, since I assumed I would have issues starting the next plot before it finished offloading the previous plot, and I don’t want to run with -w set (since it’s a waste of time).

Man, I haven’t had the time to increase threads one by one to see how many threads I could run. I have 40 of them lol. It’d probably take me a while to sit down and do it, and I haven’t really had the time. I did however set up a 12GiB swapfile on my NVMe as you suggested and enabled it. Had one crash today, and restarted it. Been going fine ever since. Currently on plot number 28 with no crash again yet, plotting solely to ramdisk. Probably slowing my times down a little by using the swapfile… Idk. Don’t really care right now. Don’t feel like buying more RAM right now lol. Anyways, thanks for that suggestion. Got me up and running. Now I can put it to work and fill this next 40TB that I have on standby lol. Thanks man.

Quick question. How much writing do you think the system is doing to that swapfile? I don’t know dick about how swap works. Am I still racking up tons of writes to my NVMe by doing it this way? Do you think it’s writing less to the NVMe per plot by having the swapfile there and plotting to RAM, vs just using the SSD as the temp drive with a 110GiB ramdisk?

There is no way a swapfile makes anywhere close to the amount of writes of plotting on an NVMe. That said, I don’t have any data to back that up. As long as the ramdisk is set up correctly and is being utilized as the plotting disk, swap should only fill with lower priority background/OS stuff if it is needed.

Are you specifying two separate drives (-t and -2)? Are there 2 separate ramdisks (partitions)? How much swap is your system using?

Nah. I’m only using one temp directory with one large ramdisk setup. I haven’t paid much attention to how much swap has been getting used. I’ll check it when I get a chance and let you know.
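For checking that, the usual Linux commands are (a sketch; assumes `free` from procps and `swapon` from util-linux are installed, which they are on a default Debian system):

```shell
# quick ways to see how much swap the system is using
free -h                                     # the "Swap:" row shows total/used/free
swapon --show                               # usage per swap device/file
grep -E 'SwapTotal|SwapFree' /proc/meminfo  # raw numbers in kB
# watching `vmstat 5` (the si/so columns) shows ongoing swap-in/out traffic
```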