Green plotting with HDDs - results and settings

Hello Chia community,

Hopefully every one of us has thought about green plotting when replotting to portable plots.
If you don't want to burn through your SSDs, the only two options for a temp drive are HDDs or RAM.
Plotting on a RAM disk is great, but it isn't feasible on every platform because of limits on RAM per stick and the number of available RAM slots.
So here is the other possible solution, which I want to compare as inspiration for others…

My test system is a 1U Supermicro server with:

This server runs a bunch of VMs: MS Exchange, a database, a file server, and other stuff.
I've been using this mostly idle server to replot to portable plots on a software RAID0 (stripe) of the eight HGST 10K 600GB SAS HDDs, sitting in the DS2246 and attached through the LSI HBA, and I want to share the results.
At first (a month ago) I wanted to do one plot per drive, but that was a pain, and I was really impressed by the results of MadMax on RAM disks and SSDs. So let's try MadMax on a RAID0 of fast SAS drives.
For comparison, I've also attached a log of plotting on a consumer SSD, a Transcend 2TB NVMe PCIe Gen3 x4 MTE220S.

Results:

  • 8x 2.5" 10K 600GB SAS HDDs RAID0 (for -t and -2): → 2627.04 sec (43.7841 min)
  • Transcend 2TB NVMe PCIe Gen3 x4 MTE220S (for -t and -2): → 2672.63 sec (44.5438 min)

Amazing that the HDD RAID0 is even a bit faster than the consumer SSD!
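To put those per-plot times into perspective, a quick back-of-the-envelope conversion to sustained throughput (assuming back-to-back plotting with no copy stalls):

```python
# Convert the measured per-plot times into sustained plots/day.
SECONDS_PER_DAY = 86400

def plots_per_day(seconds_per_plot: float) -> float:
    """Plots per day for one sequential plotter instance."""
    return SECONDS_PER_DAY / seconds_per_plot

raid0 = plots_per_day(2627.04)   # 8x SAS HDD RAID0
nvme = plots_per_day(2672.63)    # Transcend 2TB NVMe

print(f"RAID0: {raid0:.1f} plots/day")  # ~32.9
print(f"NVMe:  {nvme:.1f} plots/day")   # ~32.3
```

So the 45-second gap per plot works out to roughly half a plot per day in favor of the HDD array.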

Plot logs:

8x 2.5" 10K 600GB SAS HDDs RAID0
Multi-threaded pipelined Chia k32 plotter - 2144ce1
(Sponsored by Flexpool.io - Check them out if you're looking for a secure and scalable Chia pool)

Final Directory: /mnt/chia/dst/usb01/
Number of Plots: 43
Crafting plot 1 out of 43
Process ID: 5734
Number of Threads: 24
Number of Buckets P1:    2^7 (128)
Number of Buckets P3+P4: 2^7 (128)
Pool Puzzle Hash:  d05c157c1b96179b7d97b50b042810c3e021f3b5cff23b9eed298ed94c9b0e59
Farmer Public Key: b3a4e2e2339323ed87fe55101f8387baa802b14287cd0613c24b97fd522b2d8f1cbf327ba980287442a541b6477a2100
Working Directory:   /mnt/chia/tmp/01/
Working Directory 2: /mnt/chia/tmp/01/
Plot Name: plot-k32-2021-07-16-00-03-1fafe54467cc87eefd5434af9a5cf2475c77d2690c0eea93a58b59124e443905
[P1] Table 1 took 23.5131 sec
[P1] Table 2 took 134.031 sec, found 4295014425 matches
[P1] Table 3 took 251.454 sec, found 4295096413 matches
[P1] Table 4 took 307.869 sec, found 4295143655 matches
[P1] Table 5 took 253.461 sec, found 4295089441 matches
[P1] Table 6 took 214.506 sec, found 4295066190 matches
[P1] Table 7 took 120.901 sec, found 4295023921 matches
Phase 1 took 1305.79 sec
[P2] max_table_size = 4295143655
[P2] Table 7 scan took 19.3418 sec
[P2] Table 7 rewrite took 63.3175 sec, dropped 0 entries (0 %)
[P2] Table 6 scan took 30.6122 sec
[P2] Table 6 rewrite took 42.1758 sec, dropped 581329920 entries (13.5348 %)
[P2] Table 5 scan took 31.7722 sec
[P2] Table 5 rewrite took 40.6016 sec, dropped 762034484 entries (17.742 %)
[P2] Table 4 scan took 32.2998 sec
[P2] Table 4 rewrite took 39.0165 sec, dropped 829020046 entries (19.3013 %)
[P2] Table 3 scan took 38.4502 sec
[P2] Table 3 rewrite took 39.4294 sec, dropped 855224811 entries (19.9117 %)
[P2] Table 2 scan took 28.7938 sec
[P2] Table 2 rewrite took 38.6185 sec, dropped 865607443 entries (20.1538 %)
Phase 2 took 468.602 sec
Wrote plot header with 252 bytes
[P3-1] Table 2 took 77.3432 sec, wrote 3429406982 right entries
[P3-2] Table 2 took 33.2168 sec, wrote 3429406982 left entries, 3429406982 final
[P3-1] Table 3 took 74.3853 sec, wrote 3439871602 right entries
[P3-2] Table 3 took 41.7651 sec, wrote 3439871602 left entries, 3439871602 final
[P3-1] Table 4 took 81.9276 sec, wrote 3466123609 right entries
[P3-2] Table 4 took 36.1859 sec, wrote 3466123609 left entries, 3466123609 final
[P3-1] Table 5 took 81.6977 sec, wrote 3533054957 right entries
[P3-2] Table 5 took 37.1073 sec, wrote 3533054957 left entries, 3533054957 final
[P3-1] Table 6 took 88.7058 sec, wrote 3713736270 right entries
[P3-2] Table 6 took 41.7644 sec, wrote 3713736270 left entries, 3713736270 final
[P3-1] Table 7 took 110.597 sec, wrote 4295023921 right entries
[P3-2] Table 7 took 68.4815 sec, wrote 4294967296 left entries, 4294967296 final
Phase 3 took 779.275 sec, wrote 21877160716 entries to final plot
[P4] Starting to write C1 and C3 tables
[P4] Finished writing C1 and C3 tables
[P4] Writing C2 table
[P4] Finished writing C2 table
Phase 4 took 73.1166 sec, final plot size is 108835562858 bytes
Total plot creation time was 2627.04 sec (43.7841 min)
Transcend 2TB NVMe PCIe Gen3 x4 MTE220S
Multi-threaded pipelined Chia k32 plotter - 2144ce1
(Sponsored by Flexpool.io - Check them out if you're looking for a secure and scalable Chia pool)

Final Directory: /mnt/chia/dst/usb01/
Number of Plots: 58
Crafting plot 1 out of 58
Process ID: 240245
Number of Threads: 24
Number of Buckets P1:    2^7 (128)
Number of Buckets P3+P4: 2^7 (128)
Pool Puzzle Hash:  d05c157c1b96179b7d97b50b042810c3e021f3b5cff23b9eed298ed94c9b0e59
Farmer Public Key: b3a4e2e2339323ed87fe55101f8387baa802b14287cd0613c24b97fd522b2d8f1cbf327ba980287442a541b6477a2100
Working Directory:   /mnt/chia/tmp/00/
Working Directory 2: /mnt/chia/tmp/00/
Plot Name: plot-k32-2021-07-15-10-13-a6c3056a621f2eb908153c9e1323cfcbd9971c0d84e98ed9d28de56ff8205d5c
[P1] Table 1 took 14.0496 sec
[P1] Table 2 took 139.211 sec, found 4294942687 matches
[P1] Table 3 took 210.217 sec, found 4294851804 matches
[P1] Table 4 took 266.267 sec, found 4294753378 matches
[P1] Table 5 took 243.721 sec, found 4294587307 matches
[P1] Table 6 took 227.242 sec, found 4294171001 matches
[P1] Table 7 took 160.012 sec, found 4293364494 matches
Phase 1 took 1260.76 sec
[P2] max_table_size = 4294967296
[P2] Table 7 scan took 30.7946 sec
[P2] Table 7 rewrite took 70.6706 sec, dropped 0 entries (0 %)
[P2] Table 6 scan took 24.8989 sec
[P2] Table 6 rewrite took 53.1549 sec, dropped 581389708 entries (13.539 %)
[P2] Table 5 scan took 33.8021 sec
[P2] Table 5 rewrite took 47.0172 sec, dropped 762106685 entries (17.7457 %)
[P2] Table 4 scan took 30.6085 sec
[P2] Table 4 rewrite took 50.2495 sec, dropped 828937145 entries (19.3012 %)
[P2] Table 3 scan took 32.5297 sec
[P2] Table 3 rewrite took 47.9677 sec, dropped 855137814 entries (19.9108 %)
[P2] Table 2 scan took 25.7271 sec
[P2] Table 2 rewrite took 47.2221 sec, dropped 865631538 entries (20.1547 %)
Phase 2 took 515.284 sec
Wrote plot header with 252 bytes
[P3-1] Table 2 took 72.7903 sec, wrote 3429311149 right entries
[P3-2] Table 2 took 45.6565 sec, wrote 3429311149 left entries, 3429311149 final
[P3-1] Table 3 took 79.6803 sec, wrote 3439713990 right entries
[P3-2] Table 3 took 44.2588 sec, wrote 3439713990 left entries, 3439713990 final
[P3-1] Table 4 took 78.7445 sec, wrote 3465816233 right entries
[P3-2] Table 4 took 40.5394 sec, wrote 3465816233 left entries, 3465816233 final
[P3-1] Table 5 took 80.9991 sec, wrote 3532480622 right entries
[P3-2] Table 5 took 41.5865 sec, wrote 3532480622 left entries, 3532480622 final
[P3-1] Table 6 took 84.2511 sec, wrote 3712781293 right entries
[P3-2] Table 6 took 43.6553 sec, wrote 3712781293 left entries, 3712781293 final
[P3-1] Table 7 took 100.558 sec, wrote 4293364494 right entries
[P3-2] Table 7 took 98.8718 sec, wrote 4293364494 left entries, 4293364494 final
Phase 3 took 816.346 sec, wrote 21873467781 entries to final plot
[P4] Starting to write C1 and C3 tables
[P4] Finished writing C1 and C3 tables
[P4] Writing C2 table
[P4] Finished writing C2 table
Phase 4 took 80.0751 sec, final plot size is 108813448993 bytes
Total plot creation time was 2672.63 sec (44.5438 min)

Command used:

nice -n 19 chia_plot -f xxx -c xxx -t /mnt/tmp/01/ -r 24 -u 128
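For reference, here is the same command with the flags annotated. The destination directory is taken from the Final Directory line in the logs above (by default, MadMax writes finished plots to the current directory when `-d` is not given); adding it explicitly is my assumption:

```shell
# nice -n 19        run at lowest CPU priority so the VMs stay responsive
# -f <key>          farmer public key (redacted as xxx in the post)
# -c <address>      pool contract address, producing portable (NFT) plots
# -t /mnt/tmp/01/   temp directory -- point this at the RAID0 mount
# -2                second temp dir; defaults to -t when omitted (hence the
#                   identical Working Directory lines in the logs)
# -r 24             number of threads
# -u 128            number of buckets for phase 1
# -d <dir>          final directory; the logs show /mnt/chia/dst/usb01/
nice -n 19 chia_plot -f xxx -c xxx -t /mnt/tmp/01/ -d /mnt/chia/dst/usb01/ -r 24 -u 128
```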

So what are your results for green plotting on HDDs?


I was able to achieve 28-minute plots with a RAID array of 16× 10K drives; my 2× NVMe in RAID0 only managed 32 minutes, and tmpfs 24 minutes. Incidentally, 6× 900GB drives with 128MB cache perform almost as well as 10× 600GB drives with 64MB cache, so it seems to me drive cache plays a very large role in performance. Some 16MB and 32MB cache drives I tested were absolutely terrible…


I am using 16× 15K 300GB SAS drives with the MadMax plotter on a 5950X with 64GB RAM. Running them in parallel with a 23-minute stagger, I am getting about 55 plots a day on average after the initial day's run.
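As a sanity check on that number, a short calculation of what the 16 parallel instances imply per drive (the per-plot time below is inferred from the stated 55 plots/day, not measured):

```python
# With N independent MadMax instances each taking T minutes per plot,
# steady-state throughput is N * 1440 / T plots per day.
MINUTES_PER_DAY = 1440

def aggregate_plots_per_day(instances: int, minutes_per_plot: float) -> float:
    return instances * MINUTES_PER_DAY / minutes_per_plot

# Working backwards from the reported 55 plots/day with 16 instances:
implied_minutes_per_plot = 16 * MINUTES_PER_DAY / 55
print(f"~{implied_minutes_per_plot:.0f} min per plot per drive")  # ~419 min
```

So each individual 15K drive would be producing a plot roughly every seven hours, which is why striping drives together is attractive despite the added complexity.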


So you're doing 16 plots in parallel?
Have you also tried any RAID0 combinations?

I use 2 HBAs for the 16 drives, running on a headless Ubuntu server. Yes, 16 plots in parallel with default MadMax settings. I am still tweaking the settings, hoping to reach 59 plots per day.

I haven't tried RAID0 after reading on the forum that RAID0 yields mixed performance. TBH I haven't done any RAID before; I might put pairs of SAS drives into RAID0 when k33 becomes available in MadMax, since about 600GB is needed for plotting.

In my testing, individual drives on MadMax would likely yield slightly better results than the same number of drives in a RAID array; however, that did not take the file copy time into account, which may even out the final results to be about equivalent. It is certainly simpler to run a single large RAID array, and it may not reduce plots/day by a significant amount (if at all).
