Chia plotting performance under KVM, LXC, and native Linux

Hey all.

I’m setting up a chia farm and wanted to log my progress, as well as hopefully get some feedback in the process. I plan to set up the farm under various conditions, i.e. a Windows KVM guest w/ virtio, a Windows KVM guest with VFIO, LXC on ZFS, and native Linux.

So far I’m up to the first condition: Windows KVM guest w/ virtio.

Server specs are:
Dual Xeon X5690 (6C/12T each @ 3.47 GHz)
96 GB DDR3-1333
1 x 0.5 TB HDD for the host and guest OS
2 x Intel NVMe for temp storage
8 x 16 TB WD drives

VM specs are:
2 sockets, 24 cores, CPU type ‘host’
48 GB RAM
All disks passed through at the block-device level with virtio and writeback cache
All disks formatted NTFS with default settings and assigned a drive letter

Method: I will start a single plot with -b 6000 and, under each of the conditions above, measure the time it takes to complete phase 1 table 2, which per the wiki represents about 6% of a plot.
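
In other words, the extrapolation is just the measured table 2 time divided by 0.06. A quick back-of-the-envelope check (numbers illustrative):

# Rough plot-time extrapolation, assuming phase 1 table 2 is ~6% of a plot.
# table2_seconds is whatever the plotter log reports for that point.
table2_seconds=9000
echo $(( table2_seconds * 100 / 6 ))   # -> 150000 s, i.e. roughly 41.7 hours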

Results and Notes:

At first, I forgot to enable cache=writeback for the VM. This was before I’d resolved to test various layouts, and I would have left it alone had performance been decent. It was not decent. If I recall correctly, it took about 9000 seconds to get through phase 1 table 2, which works out to a roughly 41-hour plot time.
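
For anyone reproducing this, the knob in question looks roughly like the following. This is a hedged sketch assuming plain QEMU; under libvirt the same thing is expressed as cache='writeback' on the disk’s <driver> element, and the device path is whatever your NVMe actually is:

# Relevant QEMU -drive option for a passed-through block device
# (device path illustrative; add one per disk):
-drive file=/dev/nvme0n1,if=virtio,format=raw,cache=writeback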

Aside from enabling the cache, I also tried the following (rough syntax for these is sketched after the list):
- increasing from 3 threads per plot to 4
- having the Windows partition block size match the media
- passing cpu=host to the guest so that bitfield would work
- running with and without bitfield
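
Here is roughly how those map onto flags and settings (hedged; exact syntax depends on whether you drive QEMU directly or through libvirt):

# chia side: -r sets the thread count per plot, -e disables bitfield.
./chia plots create -k 32 -b 6000 -r 4 -e -t z:\ -d f:

# hypervisor side: expose the host CPU model to the guest (this is what got
# bitfield working here). With plain QEMU:
#   -cpu host
# and with libvirt, the equivalent in the domain XML is:
#   <cpu mode='host-passthrough'/>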

The biggest difference came from enabling cache. Now I can get through table 2 in about 3000 seconds. Sadly, that still works out to about 13 hours per plot.

Surprisingly, the second biggest difference came from disabling bitfield with cpu=host passed to the VM. That reduces the time to get through table 2 to about 2500 seconds (table 1 takes longer, but table 2 is faster). That works out to under 12 hours a plot, which is OK but still slower than I’d like, and it remains to be seen whether this will scale to 6 or 7 parallel plots (initially it did, but it was so slow without the cache).

Matching the VM’s block size to the media made no obvious difference, but I did not run a full test before reverting to the default block size.
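
“Matching the block size” here just means the NTFS allocation unit size; inside the guest that is set at format time, e.g. (drive letter and size illustrative):

format Z: /FS:NTFS /A:4096 /Q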

I’ve never seen latencies like this either. The response time for the NVMe array in Task Manager inside the Windows guest has spiked above 50,000 ms. I’m not positive, but cpu=host seems to have helped with that; response times are still spiking as high as 5,000 ms. I’ve never run a workload like this before, but that seems like it can’t be good. I’m interested to see how that changes with VFIO.
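
For reference, the exact command I’m running is below: -k 32 is the plot size, -b 6000 the sort buffer in MiB, -r 4 the thread count, -n 145 the number of plots to queue up, -e disables bitfield, and -t / -d are the temp and destination drives.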

./chia plots create -k 32 -b 6000 -r 4 -n 145 -e -t z:\ -d f:

Starting phase 1/4: Forward Propagation into tmp files... Wed Jun  9 17:58:12 2021
Computing table 1
F1 complete, time: 437.415 seconds. CPU (95.58%) Wed Jun  9 18:05:30 2021
Computing table 2
        Bucket 0 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 1 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 2 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 3 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 4 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 5 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 6 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 7 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 8 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 9 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 10 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 11 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 12 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 13 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 14 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 15 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 16 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 17 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 18 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 19 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 20 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 21 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 22 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 23 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 24 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 25 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 26 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 27 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 28 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 29 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 30 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 31 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 32 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 33 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 34 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 35 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 36 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 37 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 38 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 39 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 40 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 41 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 42 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 43 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 44 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 45 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 46 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 47 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 48 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 49 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 50 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 51 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 52 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 53 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 54 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 55 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 56 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 57 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 58 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 59 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 60 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 61 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 62 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 63 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 64 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 65 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 66 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 67 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 68 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 69 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 70 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 71 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 72 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 73 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 74 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 75 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 76 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 77 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 78 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 79 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 80 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 81 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 82 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 83 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 84 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 85 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 86 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 87 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 88 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 89 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 90 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 91 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 92 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 93 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 94 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 95 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 96 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 97 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 98 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 99 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 100 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 101 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 102 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 103 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 104 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 105 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 106 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 107 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 108 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 109 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 110 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 111 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 112 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 113 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 114 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 115 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 116 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 117 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 118 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 119 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 120 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 121 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 122 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 123 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 124 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Bucket 125 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 126 uniform sort. Ram: 5.793GiB, u_sort min: 0.563GiB, qs min: 0.281GiB.
        Bucket 127 uniform sort. Ram: 5.793GiB, u_sort min: 1.125GiB, qs min: 0.281GiB.
        Total matches: 4294850723
Forward propagation table time: 2007.665 seconds. CPU (185.760%) Wed Jun  9 18:38:58 2021

Lastly, I reduced the thread count to 3 and noticed about a 7% slowdown, with phase 1 table 1 going surprisingly quickly (around 316 s) and table 2 taking longer (about 2400 s).

Next, I’ll try LXC with ZFS.

Thanks for reading.


The results with LXC were pretty good. I was able to cut the time to generate a plot to under 8 hours, based on the time it took to get past phase 1 table 2.

Unfortunately, it is scaling pretty poorly. I started with four processes and added two more after the completion of phase 1. I was only able to manage about 4 plots in 24 hours. I suspect the bottleneck is the Intel NVMe drives, because my CPU usage is low while my iowait is high.
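
A quick way to sanity-check that (a hedged sketch; run it on the host or inside the container while plots are going, and read off your own device names):

# Extended per-device stats every 5 s: long await / high %util on the temp
# NVMe drives while the CPUs sit mostly idle points at the drives.
iostat -xm 5

# Overall CPU vs iowait breakdown:
vmstat 5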

Originally, I planned on also testing natively (meaning without LXC), but I don’t really see much of a point now, since LXC shouldn’t affect disk I/O.

Instead, I think I’ll now test more methodically, starting with one plotting process and progressively adding more.
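
That will be the usual staggered-start approach; a minimal sketch (paths, counts, and the 30-minute stagger are all illustrative, not what I’ve settled on):

# Start four plotters half an hour apart so they don't all hit the same
# phase (and hammer the temp drives) at the same time.
for i in 1 2 3 4; do
  nohup ./chia plots create -k 32 -b 6000 -r 4 \
      -t /mnt/plot-tmp/$i -d /mnt/farm > plot_$i.log 2>&1 &
  sleep 1800
done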

I’m trying to strike a balance here between research and ROI.

For the next test I let it finish a single plot. Estimating efficiency based on the time to get past table 2 was never expected to be all that accurate, and it wasn’t: I estimated in a previous post that it would take nearly 8 hours, and it took almost 10 h 20 m.

Rather than going straight to XFS or ext4 for the plotter, as I’d originally intended, I’m going to increase the number of plotting processes by one each pass to get an idea of the point of diminishing returns, and then switch to XFS or ext4.
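
For reference, the ZFS dataset settings people usually suggest for plot scratch space look like this (pool/dataset names illustrative; sync=disabled is only reasonable for throwaway temp data):

# Dedicated dataset for plot temp files; these are the commonly suggested
# knobs, not something I've benchmarked yet.
zfs create -o atime=off -o sync=disabled -o compression=off tank/plot-tmp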

I’m also going to spend the weekend adding another two nodes for full-time plotting/farming, and another two nodes on top of that for full- and part-time plotting. Whether they end up as full-time or part-time plotters just depends on how many SSDs/NVMe drives I can get into them, with the fastest CPU/RAM nodes getting priority.

I think that in this case the NVMe is the bottleneck. I have another nearly identical node (an old-school X8DTH-6 dual-socket LGA1366 board with X5690s @ 3.47 GHz, 12C/24T total, and plenty of RAM), which I am setting up with Samsung 980s instead.

My concern is that the real bottlenecks are the buses. The RAM on the next node will already be maxed out in terms of speed at 1333 MHz, while the current node only runs at 1066 MHz. Also, the PCIe lanes don’t go directly to the CPU; they go to the northbridge instead, as do most of the buses, except the ones on the southbridge, and the southbridge connects back through the northbridge anyway.

If that’s the bottleneck, then even adding more SSD plotting storage on the SATA bus won’t help much with plotting speed.


(Check out the VGA hanging off the southbridge. These boards have an integrated Matrox VGA chipset from the ’90s that was ancient even when this board came out. Running a shell command that returns a lot of text (e.g. dmesg) leads to “io wait” from the display!)

I can test for bus bottlenecks fairly easily and accurately, I think, just by setting up other nodes with more modern AMD AM4 and first-gen Threadripper chipsets, where the PCIe lanes go directly to the CPU.
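
In the meantime, the negotiated PCIe link width and speed on the current node are easy to read; a sketch (the bus address is whatever lspci reports for your NVMe controller):

# Find the NVMe controllers, then check the negotiated PCIe link state.
lspci | grep -i nvme
sudo lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'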

tl;dr:

SuperMicro X8DTH-6
Dual Intel Xeon X5690 (2 sockets, 6C/12T each, 24 threads total @ 3.47 GHz)
Up to 80 GB of RAM for plotting (DDR3 ECC @ 1066 MHz, upgradeable to 1333 MHz)
7 x PCIe 2.0 x16 slots (running at x8 speed)
2 x Intel NVMe on the PCIe bus (x8 cards running at x4 for some reason) w/ ZFS
LXC-based Linux environment

Results for a single plot (not in parallel):

Total time = 37754.463 seconds. CPU (90.780%) Sat Jun 12 14:54:10 2021

Still seems too slow. I’m checking how this scales and will tweak settings and try again. I also have some additional nodes to set up to help with the plotting, some of which have faster/better buses for plotting.