HP DL380 G9, E5-2667 v3, 3.2GHz, 256GB DDR4-2133 RAM: 45 minutes per plot?

Hi everyone,

I was hoping somebody could help me with some advice here… I’ve been plotting on a wide array of hardware since May and recently moved to RAM plotting to cut the cost of replacing NVMe drives as they wear out…

However, when plotting with the above hardware, I was a bit disappointed to see average plot times with madmax entirely in RAM as high as 45 minutes…

In comparison, I have a desktop PC with a recent 12-core AMD Ryzen CPU which does this in 28 minutes, plotting entirely on the NVMe… I was thinking that with 32 threads at 3.2GHz and 256GB of DDR4 RAM I should be getting closer to 30 minutes… Was I wrong?

Below is some detail on my setup… Note: the 45 minutes is net plotting time, excluding moving the plots to their permanent storage…
Also, I had to add some swap space (on the NVMe holding the host OS), as madmax seems to require the full 256GB. I’m ordering more RAM to get to 384GB in the hope that this will make the swap superfluous…
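For anyone replicating this, here is a minimal sketch of a tmpfs ramdisk plus a swap file on the NVMe; the mount point, sizes, and swap file path are illustrative, not necessarily the exact setup above:

```shell
# Create a tmpfs ramdisk for the madmax temp directory.
# tmpfs only consumes RAM for data actually written, up to the size cap.
sudo mkdir -p /mnt/ram
sudo mount -t tmpfs -o size=250G tmpfs /mnt/ram

# Add a swap file on the NVMe as overflow headroom for the OS.
sudo fallocate -l 32G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```

These commands need root and are meant as a one-off setup; for a persistent setup the tmpfs mount and swap file would go in /etc/fstab.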

What do you think? Thanks everyone, I hope this can help others who are contemplating RAM-only plotting… It feels good to know no hardware is being wasted… but it needs to deliver reasonable plots per day, otherwise it’s useless…


Processor	Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz
Memory	263997MB (9627MB used)
Machine Type	Rack Mount Chassis
Operating System	Ubuntu 20.04.3 LTS
Date/Time	Sat 09 Oct 2021 10:11:48 CEST

MemTotal	Total Memory	263997944 KiB
MemFree	Free Memory	30191796 KiB
MemAvailable		94611324 KiB
Buffers		28844 KiB
Cached		224073344 KiB
SwapCached	Cached Swap	364720 KiB
Active		25979768 KiB
Inactive		205079880 KiB
Active(anon)		24935456 KiB
Inactive(anon)		141604696 KiB
Active(file)		1044312 KiB
Inactive(file)		63475184 KiB
Unevictable		8124 KiB
Mlocked		8 KiB
SwapTotal	Virtual Memory	26165236 KiB
SwapFree	Free Virtual Memory	828 KiB

Hi, have you checked the health of your server’s RAM modules? Some old servers have had hard 24/7 workloads, and the ECC RAM can be damaged.

Please take a look in /var/log/syslog for this type of message:
Aug 14 21:00:36 umm kernel: EDAC MC0: CE row 0, channel 0, label “”: (Branch=0 DRAM-Bank=2 RDWR=Read RAS=3505 CAS=4, CE Err=0x2000 (Correctable Non-Mirrored Demand Data ECC))

If you find it, one of the modules is failing, and that can drop performance.
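A quick way to scan for such events is a sketch like the following, assuming standard EDAC kernel logging; the sysfs counter paths vary by platform and may not exist on every system:

```shell
# Count correctable (CE) and uncorrectable (UE) ECC events in the kernel log
grep -icE 'EDAC .*(CE|UE)' /var/log/syslog 2>/dev/null || true

# On EDAC-enabled systems, per-row error counters are also exposed via sysfs
grep -H . /sys/devices/system/edac/mc/mc*/csrow*/ce_count 2>/dev/null || true
```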

Have you thought about applying the turbo boost unlock hack to your motherboard? If your motherboard supports it, you can run all cores at the maximum turbo frequency of 3.6GHz whenever madmax needs it.


Thanks for the reply. The modules are all in good condition; I checked the syslog for the message you mentioned above. Regarding unlocking turbo boost on the server, I need to check. From what I can see, all CPUs are running at the advertised speeds… Will get back to you!

PS: I checked and the turbo mode is on…
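For anyone else wanting to verify the same thing, a sketch of the checks on Linux (the sysfs paths are the common ones but can vary by driver):

```shell
# Current clock speed of each logical CPU, as reported by the kernel
grep -E '^cpu MHz' /proc/cpuinfo || true

# Scaling governor; 'performance' keeps cores from downclocking mid-plot
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null || true

# Whether Intel turbo is disabled (0 = turbo enabled) on intel_pstate systems
cat /sys/devices/system/cpu/intel_pstate/no_turbo 2>/dev/null || true
```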

Can we see the madmax command, without keys of course?

./build/chia_plot -n -1 -r 32 -u 32 -t /mnt/ram/ -d /mnt/firecuda/ and so on…

Yes, I would expect faster plots. I manage around 33 minutes (2000 seconds) with a dual E5-2670 Dell R620 and 384GB of RAM, although I’d suggest not buying a lot more RAM (unless you plan to have a second server); it’s just as fast when temp 1 is a RAID 0 array (2x 15k SAS drives) or an SSD (there are far fewer writes to temp 1 than to temp 2).

I am also using Ubuntu 20.04.

I have the following hardware:

HP Z840 Workstation, 44 cores/88 threads, 2x Xeon E5-2699C v4, 128GB RAM.

I am getting sub-40m plots.

chia_plot.exe -n 1 -r 32 -u 256 -v 128 -t F:\ -2 R:\ -d J:\ -c xxxx -f xxxx

F: = NVMe.
R: = 110GB RAM disk.
J: = external HDD.

Thanks… I see you use Windows… I should definitely be at least as fast with 100% RAM plotting…

Thanks! I have 12 SATA drives for storage in my server, so I prefer to plot to RAM… Will definitely expand so I can use the machine for more than just plotting (1-2 lightweight VMs).

Thanks for sharing your performance; that encourages me to keep looking at where I can tweak… I thought my CPUs were quite fast…

These are my times (32 threads):

./build/chia_plot -n -1 -r 32 -u 32 -t /mnt/ram/ -d /mnt/firecuda/ and so on…

[P1] Table 1 took 12.1977 sec
[P1] Table 2 took 210.47 sec, found 4294916451 matches
[P1] Table 3 took 241.178 sec, found 4294853139 matches
[P1] Table 4 took 309.595 sec, found 4294695888 matches
[P1] Table 5 took 300.632 sec, found 4294420241 matches
[P1] Table 6 took 290.361 sec, found 4293919309 matches
[P1] Table 7 took 185.645 sec, found 4292844854 matches
Phase 1 took 1550.09 sec
[P2] max_table_size = 4294967296
[P2] Table 7 scan took 8.28351 sec
[P2] Table 7 rewrite took 33.4973 sec, dropped 0 entries (0 %)
[P2] Table 6 scan took 26.0768 sec
[P2] Table 6 rewrite took 37.6364 sec, dropped 581432771 entries (13.5408 %)
[P2] Table 5 scan took 24.6988 sec
[P2] Table 5 rewrite took 35.2555 sec, dropped 762108663 entries (17.7465 %)
[P2] Table 4 scan took 25.2406 sec
[P2] Table 4 rewrite took 36.8337 sec, dropped 828953253 entries (19.3018 %)
[P2] Table 3 scan took 25.1743 sec
[P2] Table 3 rewrite took 37.3504 sec, dropped 855130291 entries (19.9106 %)
[P2] Table 2 scan took 39.4432 sec
[P2] Table 2 rewrite took 37.303 sec, dropped 865600835 entries (20.1541 %)
Phase 2 took 390.863 sec
Wrote plot header with 252 bytes
[P3-1] Table 2 took 47.5597 sec, wrote 3429315616 right entries
[P3-2] Table 2 took 30.3372 sec, wrote 3429315616 left entries, 3429315616 final
[P3-1] Table 3 took 48.0487 sec, wrote 3439722848 right entries
[P3-2] Table 3 took 31.4939 sec, wrote 3439722848 left entries, 3439722848 final
[P3-1] Table 4 took 50.0999 sec, wrote 3465742635 right entries
[P3-2] Table 4 took 36.1116 sec, wrote 3465742635 left entries, 3465742635 final
[P3-1] Table 5 took 49.7951 sec, wrote 3532311578 right entries
[P3-2] Table 5 took 33.9349 sec, wrote 3532311578 left entries, 3532311578 final
[P3-1] Table 6 took 52.4576 sec, wrote 3712486538 right entries
[P3-2] Table 6 took 37.1014 sec, wrote 3712486538 left entries, 3712486538 final
[P3-1] Table 7 took 227.672 sec, wrote 4292844854 right entries
[P3-2] Table 7 took 41.4023 sec, wrote 4292844854 left entries, 4292844854 final
Phase 3 took 694.77 sec, wrote 21872424069 entries to final plot
[P4] Starting to write C1 and C3 tables
[P4] Finished writing C1 and C3 tables
[P4] Writing C2 table
[P4] Finished writing C2 table
Phase 4 took 79.5793 sec, final plot size is 108806902317 bytes
Total plot creation time was 2715.4 sec (45.2567 min)

Found the solution… I had set the wrong number of buckets… I thought that with 32 threads, 32 buckets would be the optimal IO setup, but geez, was I wrong…

I am now down to 30 minutes. That was what I wanted… 100% RAM plotting and 48 plots per day…

./build/chia_plot -n -1 -r 32 -u 256 -v 128 -t /mnt/ram/ -d /mnt/firecuda/

[P1] Table 1 took 13.7872 sec
[P1] Table 2 took 129.036 sec, found 4294961434 matches
[P1] Table 3 took 146.099 sec, found 4295006684 matches
[P1] Table 4 took 164.555 sec, found 4295039781 matches
[P1] Table 5 took 162.917 sec, found 4295037669 matches
[P1] Table 6 took 163.611 sec, found 4294973450 matches
[P1] Table 7 took 131.805 sec, found 4295050630 matches
Phase 1 took 911.835 sec
[P2] max_table_size = 4295050630
[P2] Table 7 scan took 8.99246 sec
[P2] Table 7 rewrite took 35.8792 sec, dropped 0 entries (0 %)
[P2] Table 6 scan took 27.481 sec
[P2] Table 6 rewrite took 39.1512 sec, dropped 581230604 entries (13.5328 %)
[P2] Table 5 scan took 26.1229 sec
[P2] Table 5 rewrite took 37.7466 sec, dropped 761996992 entries (17.7413 %)
[P2] Table 4 scan took 25.9356 sec
[P2] Table 4 rewrite took 41.1199 sec, dropped 828874221 entries (19.2984 %)
[P2] Table 3 scan took 25.7625 sec
[P2] Table 3 rewrite took 36.2703 sec, dropped 855048857 entries (19.908 %)
[P2] Table 2 scan took 24.777 sec
[P2] Table 2 rewrite took 34.7697 sec, dropped 865571497 entries (20.1532 %)
Phase 2 took 388.932 sec
Wrote plot header with 252 bytes
[P3-1] Table 2 took 39.164 sec, wrote 3429389937 right entries
[P3-2] Table 2 took 28.4022 sec, wrote 3429389937 left entries, 3429389937 final
[P3-1] Table 3 took 40.9216 sec, wrote 3439957827 right entries
[P3-2] Table 3 took 28.1522 sec, wrote 3439957827 left entries, 3439957827 final
[P3-1] Table 4 took 41.6737 sec, wrote 3466165560 right entries
[P3-2] Table 4 took 30.2104 sec, wrote 3466165560 left entries, 3466165560 final
[P3-1] Table 5 took 42.1499 sec, wrote 3533040677 right entries
[P3-2] Table 5 took 28.9706 sec, wrote 3533040677 left entries, 3533040677 final
[P3-1] Table 6 took 43.8801 sec, wrote 3713742846 right entries
[P3-2] Table 6 took 30.3413 sec, wrote 3713742846 left entries, 3713742846 final
[P3-1] Table 7 took 37.9107 sec, wrote 4295050630 right entries
[P3-2] Table 7 took 35.7428 sec, wrote 4294967296 left entries, 4294967296 final
Phase 3 took 435.388 sec, wrote 21877264143 entries to final plot
[P4] Starting to write C1 and C3 tables
[P4] Finished writing C1 and C3 tables
[P4] Writing C2 table
[P4] Finished writing C2 table
Phase 4 took 72.4827 sec, final plot size is 108835965950 bytes
Total plot creation time was 1808.8 sec (30.1467 min)
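For reference, a commented version of that invocation; the flag meanings are from the madmax chia-plotter README, and the note about the default is my understanding (it may differ between versions):

```shell
# -n -1  : keep plotting until interrupted
# -r 32  : number of worker threads
# -u 256 : number of buckets for phases 1+2 (the documented default)
# -v 128 : number of buckets for phases 3+4
# -t     : temp directory, here the tmpfs ramdisk
# -d     : destination for the finished plot
./build/chia_plot -n -1 -r 32 -u 256 -v 128 -t /mnt/ram/ -d /mnt/firecuda/
```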

Just out of curiosity, how big are you making your ramdisk when you mount it? I haven’t been able to figure out how people are running madmax on machines with 256GB of RAM. I have the amount of RAM mentioned, and the OOM killer kills my madmax process every time!

I’ve never had my plotter killed, whether a 110G ramdisk on a 128G system, or two 110G/one 220G on a 256G server. I’m running Ubuntu 20.04.2 LTS server and nothing else of note on the machines.

How are you plotting RAM-only on a 220G ramdisk? That isn’t enough RAM, is it?

Madmax requires a 110GB ramdisk. It does not plot 100% in RAM; that’s Bladebit, and it requires more like 416GB.

See https://github.com/madMAx43v3r/chia-plotter and https://github.com/Chia-Network/bladebit for the respective requirements.

You could create 330+ GB of RAMdisk on a 384GB system and use madmax entirely in RAM, but most people who use madmax do not do that from what I’ve seen.

I think you might be a little confused about what we’re talking about. I know what Bladebit is; we were talking about plotting with madmax completely in RAM in this thread. I’ve been having trouble with the OOM killer terminating madmax when plotting completely in RAM. Based on my testing, it requires a minimum 248GB ramdisk to plot to a single temp ramdisk with madmax. My machine will only support 256GB of RAM at 1600 MHz, which leaves only about 4GiB for system processes, and madmax keeps getting OOM-killed when free RAM gets that low. That’s what I was talking about above and what I’m trying to figure out.
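One hedged workaround when the OOM killer keeps targeting the plotter (alongside adding swap) is lowering the process’s OOM score so the kernel picks another victim first. A sketch, assuming the plotter binary is named chia_plot and is already running:

```shell
# Find the running plotter and exempt it from the OOM killer.
# A score of -1000 disables OOM selection for that PID (requires root).
pid=$(pgrep -x chia_plot || true)
if [ -n "$pid" ]; then
  echo -1000 | sudo tee "/proc/$pid/oom_score_adj"
fi
```

Note this only redirects the OOM killer to other processes; if the ramdisk itself fills all of RAM, something else will be killed instead, so it is no substitute for headroom.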

Fair enough, it wasn’t clear to me from the original post.

Yeah, I probably should have been a little clearer with what I was trying to ask. Sorry. By the way, I never have issues running a 110GiB temp with an NVMe as temp1. Works beautifully; I’m just trying to figure out a way to save my NVMe drives. I’m at 256 TiB, on my way to 1 PiB. Got a lot of plotting left to do lol.

I set it to 250G. 2GB for Ubuntu plus a little swap seems to work.


Why don’t you add some swap space? That solved it in my case.