SWAR manager does not work with the MADMAX plotter

Trying to fine-tune my system as follows:
AMD RYZEN 9 3900x 12C/24T
2 X 32GB RAM 2666
500GB SSD for Win10Pro
2TB Crucial P5 PCIe Gen3 NVMe - D:
2TB Crucial P5 PCIe Gen3 NVMe - L:
OS: Windows 10 Pro

When I try to use SWAR with the madmax plotter I get the following error in the log file:

Multi-threaded pipelined Chia k32 plotter - 055a5db
Build 0.1.5 for Windows. Check for latest updates: Multi-threaded pipelined Chia k32 plotter | Build for Windows

Pool Public Key (for solo farming) or Pool Contract Address (for pool farming) needs to be specified via -p or -c, see chia_plot --help.

Here are my settings in SWAR:

- name: worker1
max_plots: 999
farmer_public_key: ***
pool_contract_address: ***
temporary_directory: D:
destination_directory: F:\Hard25TB\k32testswar
size: 32
bitfield: true
threads: 12
buckets: 256
memory_buffer: 20000
max_concurrent: 1
max_concurrent_with_start_early: 2
initial_delay_minutes: 0
stagger_minutes: 0
max_for_phase_1: 1
concurrency_start_early_phase: 5
concurrency_start_early_phase_delay: 0
temporary2_destination_sync: false
exclude_final_directory: true
skip_full_destinations: true
unix_process_priority: 10
windows_process_priority: 32
enable_cpu_affinity: false
cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]

  - name: worker2
    max_plots: 999
    farmer_public_key: ***
    pool_contract_address: ***
    temporary_directory: L:
    destination_directory: F:\Hard25TB\k32testswar
    size: 32
    bitfield: true
    threads: 12
    buckets: 256
    memory_buffer: 20000
    max_concurrent: 1
    max_concurrent_with_start_early: 2
    initial_delay_minutes: 0
    stagger_minutes: 0
    max_for_phase_1: 1
    concurrency_start_early_phase: 5
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: true
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]

Global settings:
chia_location: C:\Users\LuboChia\Downloads\PlotManager\chia_ploter\chia_plot.exe

backend: madmax

phase1_line_end: 20
phase2_line_end: 34
phase3_line_end: 48
phase4_line_end: 53
phase1_weight: 33.4
phase2_weight: 20.43
phase3_weight: 42.29
phase4_weight: 3.88

max_concurrent: 1000
max_for_phase_1: 2
minimum_minutes_between_jobs: 0

I suspect the SWAR manager does not pass the pool_contract_address on to MADMAX.
Can someone confirm this?
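If SWAR's madmax backend really is dropping the flag, one workaround is to launch the plotter directly so the pool contract is definitely passed. A minimal sketch for a POSIX shell; every value below is a placeholder, not a real key or path:

```shell
# Build the madmax command by hand instead of via SWAR.
# All values are placeholders; substitute your real binary path, keys and directories.
plotter="chia_plot"                    # or the full path to chia_plot.exe
farmer_key="<farmer_public_key>"
contract="<pool_contract_address>"
cmd="$plotter -n 1 -r 12 -u 256 -t <tmp_dir> -d <dest_dir> -f $farmer_key -c $contract"
echo "$cmd"   # review the command, then run it outside of SWAR
```

If the plot header then shows a Pool Contract Address instead of the "-p or -c" error, the flag is reaching madmax.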



There is really no reason to run a plot manager and Madmax at the same time. The entire purpose of Madmax is to get away from parallel plotting.

Yeah, for this system it's true there's really no need for a plot manager. Only people with really high-end systems like Threadrippers or newer dual-Xeon systems seem to benefit from running two madmax plots at the same time.

There is a version of Swar someone made that works with madmax; the original Swar, AFAIK, was never updated to include madmax.
No idea if it is legit or reliable, but you can google "madmax swar" — I'm sure you'll find it that way.

The original developer of swar seems to have effectively abandoned it, see: swar is MIA for 2+weeks. · Issue #1153 · swar/Swar-Chia-Plot-Manager · GitHub

There are other advantages of a plot manager besides scheduling parallel plots, though: a consistent interface to start/stop plotting, plot status reporting, and, most importantly, moving plots from plotters to farmers in an intelligent way.

I've used plotman (not affiliated in any way) since the beginning; it supports NFT pooling plots with the madmax plotter. Others may too, but the original swar does not.

Thanks, but Plotman does not support Windows.

I do not agree with you.
Using madmax I make a plot in 40 min, but I use only 256GB of SSD space. I bought 4TB with the purpose of using SWAR and running many plots at the same time.
In addition, the 64GB of RAM is only at 15% utilization during plot creation.

Please take a look at this one:

I just want to use my resources. Otherwise I'll have to sell them.


Yes, that's the tutorial I was talking about.

You might be able to squeeze a bit more out of it that way but like WolfGT said, madmax is designed to make full use of the CPU running just one plot.

One guy in that thread says this:

to give you an idea of my results:
- chia basic: 15 plots/day
- swar alone (with lots of optimisation): 20
- madmax alone (with lots of optimisation): 35
- madmax on swar (with basic optimisation): 45

That looks impressive, but I have a 3900x myself and get about 47 plots/day just running a single madmax plot on 2× NVMe, 32GB 3200MHz.

It's fun to try to get better results; just be ready that it's not going to make a ton of difference. In any case, I highly doubt you will be able to effectively utilize 4TB of temp space.

But if you prove me wrong, please post it here :innocent:

Anyway, I will wait a little to see if someone comes up with madmax running in many instances. If I find anything I will post it here.
If that doesn't happen, I may sell both SSDs and buy a 1TB 980 Pro (or a faster SSD) plus an additional 2×32GB of RAM, so as to have a ramdrive as temp2 in madmax.
Do you think such an upgrade makes sense, and will I get under 30 min per plot with madmax?

With a ramdisk you can maybe get under 30 minutes, but only in Linux, not in Windows.
Again, I have to say, the Windows ramdisk sucks…

My testing result: Xeon E5-2670 v2 ×2 with a 256GB full memory disk in Linux is around 28 minutes;
the same setup in Windows is over 46-50 minutes…

I have to disagree; my Ryzen 3900X can do a plot in 28 minutes (51 a day) using SSDs, on Windows. I'm sure a RAM drive would be faster, as I tried CrystalDiskMark on a 10GB RAM drive and it scored over 12,000MB/s, but I only have 32GB.

I have a Dell 5810 with an 18-core CPU and 128GB RAM; using a RAM drive on Windows it will do a plot in 32 minutes, so 45 a day.

Linux would be faster, but I’m not familiar with Linux, so I’ve not spent the time learning.

But what are your Madmax settings?
Mine are:
chia_plot -n -1 -r 24 -u 256 -v 128 -K 2 -t D:\ -d F:\ -f *** -c ***
Can I tune something to achieve a better time?

Actually, I tested on my 5900X as well, with NVMe and a 128GB DDR4 ramdisk in Windows; everything ended up around the same time, 29-32 minutes… not as much improvement as we expected from 128GB of RAM as temp2 in Windows…
But in Linux the ramdisk helps the same system reach 24-26 minutes… the improvement is not as big as on my old server, but still some benefit.

My point is that a Windows ramdisk did not improve times compared with NVMe; it's more helpful for reducing the TBW on the NVMe…
If you intend to use a ramdisk, DDR3 in an old server is greatly cost-efficient and really fast in Linux,
since my $1200 R720XD server is now plotting just as fast as my $2500 5900X PC setup.

chia_plot -n 36 -r 20 -t C:\ChiaTmp1\ -2 G:\ -d D:\ -p -f

Threads: 	20
Tmp 1:		C:\ChiaTmp1\                  (1TB Sabrent Rocket 4 running at PCIe 4)
Tmp 2:		G:\ (raid 0 SN750) 	          (2 x SN750 1TB NVME in Windows stripped array - Raid 0)
Destination	L:                            (4TB HDD)

Time taken 1677s or 28 minutes = 51 plots a day

I'd like to try madmax in Linux, but I'm a complete Linux noob, and working out how to do anything takes hours of Googling. Is there a thorough how-to guide somewhere, including setting up a RAM disk?

First create Ubuntu USB installation media (you can google how to do this; I don't remember, but you need some kind of tool to do it under Windows).

Install like you would Windows: boot from the USB and the installer will launch; keyboard setup, region, etc.
(The only thing that had me worried the first time installing is that it first asks if you want to “erase disk and install”, and only on the next page does it ask which drive you want to erase :sweat_smile:)
I advise selecting the option “install third party software”, otherwise you have to install some extra stuff to get madmax working.

When the installer finishes you will be at the desktop; bottom left you get a list of apps you can run.
Now you have to create the ramdisk and install madmax.
Run the terminal app and first create a directory with this command:
sudo mkdir /mnt/ram
then mount the ramdisk at that location:
sudo mount -t tmpfs -o size=110G tmpfs /mnt/ram/ (you have to do this every time you reboot the PC)

Just follow the install instructions on the madmax GitHub page: copy-paste the command lines one by one and that's it.
copy = Ctrl+Insert
paste = Shift+Insert

Now you're basically set, but you will have to get used to the file system a bit. Linux doesn't use drive letters like Windows, but folders.

so the full path to the ramdisk you just created will be /mnt/ram/

Other disks will have to be mounted (activated) first; you can do this in the Disks app by clicking the > symbol.
Typically the path will then be /media/yourusername/partitionlabel/

cd is change directory, just like Windows, but you can also go into the file explorer, right-click the directory, and select “open in terminal”.


“For fun, I thought I’d try two parallel MM instances last night and did two x 14 plots or 28 plots made, with an average plot time of 22.44 minutes each.”

Setup: TR Pro 3955WX (16c/32t), 64GB. Each MM instance had an 870 Evo Plus 1TB as -t; one had a 980 Pro 1TB as -2, the other a Silicon Power US70 1TB as -2, with only 8 threads per instance. No ramdisk, no RAID, nothing special. Memory use was minimal, <25GB. Maybe three instances with 5 threads each would be even better?

.\chia_plot.exe -n 'x' -r 8 -u 256 -v 256 -t s:\ -2 u:\ -K 2 -d x:\ -f 'xxx' -c 'xxx'

24 hr × 60 min = 1440 min; 1440 / 22.44 min/plot ≈ 64 plots/day
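That arithmetic as a reusable one-liner (the 22.44 figure comes from the quote above; awk is just doing the division):

```shell
# Plots/day implied by an average plot time in minutes
minutes_per_plot=22.44
plots_per_day=$(awk -v m="$minutes_per_plot" 'BEGIN { printf "%d", 1440 / m }')
echo "$plots_per_day plots/day"   # prints: 64 plots/day
```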

Thanks, I’ll give it a try sometime, I have a spare SSD to install on.

Installed Ubuntu, got RDP working, set up a RAM disk, and mounted my 3 × S3710 200GB SSD RAID.

Number of Plots: 1
Crafting plot 1 out of 1
Process ID: 14015
Number of Threads: 18
Number of Buckets P1: 2^8 (256)
Number of Buckets P3+P4: 2^7 (128)
Pool Public Key:
Farmer Public Key:
Working Directory: /media/chiamining/IntelRaid0/
Working Directory 2: /mnt/ramdisk/
Plot Name: plot-k32-2021-08-07-23-50-0cf995468c15e4502075760c8c771cf4e9786047933bd222c9e9d1896fcd7000
[P1] Table 1 took 10.3658 sec
[P1] Table 2 took 107.454 sec, found 4294972186 matches
[P1] Table 3 took 116.241 sec, found 4294939769 matches
[P1] Table 4 took 149.154 sec, found 4294857166 matches
[P1] Table 5 took 144.882 sec, found 4294837600 matches
[P1] Table 6 took 139.92 sec, found 4294693068 matches
[P1] Table 7 took 107.253 sec, found 4294323997 matches
Phase 1 took 775.284 sec
[P2] max_table_size = 4294972186
[P2] Table 7 scan took 10.1827 sec
[P2] Table 7 rewrite took 29.5417 sec, dropped 0 entries (0 %)
[P2] Table 6 scan took 24.6915 sec
[P2] Table 6 rewrite took 42.7078 sec, dropped 581312049 entries (13.5356 %)
[P2] Table 5 scan took 28.8993 sec
[P2] Table 5 rewrite took 53.7792 sec, dropped 761997452 entries (17.7422 %)
[P2] Table 4 scan took 31.5449 sec
[P2] Table 4 rewrite took 40.0991 sec, dropped 828829447 entries (19.2982 %)
[P2] Table 3 scan took 31.3667 sec
[P2] Table 3 rewrite took 39.7457 sec, dropped 855087655 entries (19.9092 %)
[P2] Table 2 scan took 33.8771 sec
[P2] Table 2 rewrite took 39.254 sec, dropped 865582831 entries (20.1534 %)
Phase 2 took 421.838 sec
Wrote plot header with 268 bytes
[P3-1] Table 2 took 23.9975 sec, wrote 3429389355 right entries
[P3-2] Table 2 took 26.8031 sec, wrote 3429389355 left entries, 3429389355 final
[P3-1] Table 3 took 41.4292 sec, wrote 3439852114 right entries
[P3-2] Table 3 took 27.4749 sec, wrote 3439852114 left entries, 3439852114 final
[P3-1] Table 4 took 43.1801 sec, wrote 3466027719 right entries
[P3-2] Table 4 took 27.0932 sec, wrote 3466027719 left entries, 3466027719 final
[P3-1] Table 5 took 43.9725 sec, wrote 3532840148 right entries
[P3-2] Table 5 took 28.2409 sec, wrote 3532840148 left entries, 3532840148 final
[P3-1] Table 6 took 45.5923 sec, wrote 3713381019 right entries
[P3-2] Table 6 took 29.0528 sec, wrote 3713381019 left entries, 3713381019 final
[P3-1] Table 7 took 29.2331 sec, wrote 4294323997 right entries
[P3-2] Table 7 took 34.4663 sec, wrote 4294323997 left entries, 4294323997 final
Phase 3 took 405.008 sec, wrote 21875814352 entries to final plot
[P4] Starting to write C1 and C3 tables
[P4] Finished writing C1 and C3 tables
[P4] Writing C2 table
[P4] Finished writing C2 table
Phase 4 took 70.1642 sec, final plot size is 108827243588 bytes
Total plot creation time was 1672.37 sec (27.8728 min)

That's getting on for 52 plots a day, and about 4 minutes faster than Windows, which was 32 minutes, or 45 plots a day.
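For anyone comparing runs, the plots/day figure can be computed straight from madmax's summary line; a small sketch that assumes the exact log format shown above:

```shell
# Derive plots/day from the plotter's final summary line
line="Total plot creation time was 1672.37 sec (27.8728 min)"
rate=$(echo "$line" | awk '{ printf "%.1f", 86400 / $6 }')   # $6 is the seconds field
echo "$rate plots/day"   # prints: 51.7 plots/day
```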

There is information on this page about how to mount the RAM disk automatically at each reboot.
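For reference, the usual way to make a tmpfs mount survive reboots is an /etc/fstab entry; a sketch assuming the 110G size and /mnt/ram mountpoint used earlier in the thread (check it with sudo mount -a before trusting it across reboots):

```
# hypothetical /etc/fstab line for the ramdisk
tmpfs  /mnt/ram  tmpfs  rw,size=110G  0  0
```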

Will I be sticking with Ubuntu? Probably not, unless I need to plot a lot of plots; it's just not worth the effort for me to learn how to do everything, while with Windows I know how things work. I've just spent 15 minutes trying to access the file share on my server to move the plot off the workstation; it will be quicker to boot into Windows, as I formatted the drive NTFS.