Here’s a bunch of plot data from 1.0.4 from different CPUs and mostly the same hard drives, all “fast” consumer SSD drives from WD and Samsung, so you can get a sense of how much CPU matters.
spoiler… CPU speed matters a LOT for plotting. Way more than I thought it would!
Let’s share our 1.0.4+ plot times; be sure to list:
CPU TYPE
DISK TYPE
PLOT PARAMS
ONE LINE PER SIMULTANEOUS PLOT
One line per simultaneous plot; the number of total-time lines listed indicates how many simultaneous plots I am doing on that machine. If you see 3 lines, it means 3 simultaneous plots; if you see 2 lines, 2 simultaneous plots, and so on.
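If it helps anyone collect these, here’s a minimal Python sketch for pulling the “Total time” lines out of plotter logs (the regex is an assumption based on the log format quoted below):

```python
import re

# Matches lines like "Total time = 20808.106 seconds. CPU (121.950%) ..."
LINE_RE = re.compile(r"Total time = ([\d.]+) seconds")

def plot_times(lines):
    """Pull plot durations (in seconds) out of chia plotter log lines."""
    return [float(m.group(1)) for m in map(LINE_RE.search, lines) if m]

# Against the kind of lines quoted below:
sample = [
    "Total time = 20808.106 seconds. CPU (121.950%) Tue Apr 13 21:26:05 2021",
    "chia1.log:Total time = 22195.975 seconds. CPU (130.900%) Tue Apr 13 05:16:04 2021",
]
print(plot_times(sample))  # [20808.106, 22195.975]
```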
Total time = 20808.106 seconds. CPU (121.950%) Tue Apr 13 21:26:05 2021
Total time = 21658.726 seconds. CPU (122.760%) Tue Apr 13 22:20:39 2021
Total time = 23117.507 seconds. CPU (121.740%) Tue Apr 13 23:24:46 2021
Total time = 23868.739 seconds. CPU (122.050%) Wed Apr 14 00:17:25 2021
Total time = 25380.291 seconds. CPU (122.590%) Wed Apr 14 02:42:44 2021
Total time = 25678.828 seconds. CPU (122.750%) Wed Apr 14 03:27:49 2021
Intel NUC i7-10710U 6c/12t - nvme 980 pro / SATA 860 pro (plots 4t/16gb)
Total time = 25605.834 seconds. CPU (146.080%) Wed Apr 14 03:16:56 2021
Total time = 25814.082 seconds. CPU (146.820%) Wed Apr 14 03:30:32 2021
Total time = 27823.670 seconds. CPU (136.220%) Wed Apr 14 04:14:21 2021
Total time = 29516.922 seconds. CPU (137.100%) Wed Apr 14 08:23:29 2021
Total time = 29697.803 seconds. CPU (135.990%) Wed Apr 14 08:36:39 2021
(note on this one – the last 3 plots are all on the slower SATA SSD drive and that definitely affects the time; I suspect if it were another NVMe drive we’d see something closer to the 25k/26k seconds of the first 2 plots, which are on the NVMe drive)
chia1.log:Total time = 22195.975 seconds. CPU (130.900%) Tue Apr 13 05:16:04 2021
chia1.log:Total time = 30997.411 seconds. CPU (114.750%) Tue Apr 13 14:00:40 2021
chia1.log:Total time = 34696.695 seconds. CPU (113.760%) Tue Apr 13 23:46:45 2021
chia1.log:Total time = 34511.367 seconds. CPU (113.480%) Wed Apr 14 09:29:48 2021
chia2.log:Total time = 25203.357 seconds. CPU (122.790%) Tue Apr 13 08:36:31 2021
chia2.log:Total time = 33542.550 seconds. CPU (113.870%) Tue Apr 13 18:03:31 2021
chia2.log:Total time = 34434.983 seconds. CPU (114.160%) Wed Apr 14 03:45:14 2021
chia3.log:Total time = 29961.245 seconds. CPU (115.510%) Tue Apr 13 12:35:40 2021
chia3.log:Total time = 34528.778 seconds. CPU (113.980%) Tue Apr 13 22:18:56 2021
chia3.log:Total time = 34534.393 seconds. CPU (113.450%) Wed Apr 14 08:02:24 2021
chia4.log:Total time = 32188.694 seconds. CPU (114.060%) Tue Apr 13 15:32:38 2021
chia4.log:Total time = 34561.618 seconds. CPU (113.810%) Wed Apr 14 01:16:25 2021
chia4.log:Total time = 34375.208 seconds. CPU (113.390%) Wed Apr 14 10:57:17 2021
chia5.log:Total time = 33853.631 seconds. CPU (113.700%) Tue Apr 13 18:30:22 2021
chia5.log:Total time = 34416.111 seconds. CPU (114.330%) Wed Apr 14 04:11:47 2021
chia6.log:Total time = 34698.293 seconds. CPU (113.190%) Tue Apr 13 21:14:27 2021
chia6.log:Total time = 34499.803 seconds. CPU (113.420%) Wed Apr 14 06:57:18 2021
Huge difference in plot times between the first plots (when jobs were still ramping up) and the later ones once all 6 jobs are cranking. The math says that if I decrease parallel jobs from 6 to 5, individual plot times must drop from ~34500 to ~28900 seconds to maintain overall throughput. I suppose I can try it by killing off a job… Hmm…
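The break-even math there is just plots per day = jobs × 86400 / seconds per plot. A quick sketch (numbers taken from the logs above; the 28750 s result is my arithmetic, slightly below the ~28900 quoted):

```python
def plots_per_day(jobs, secs_per_plot):
    """Steady-state throughput with `jobs` parallel plots."""
    return jobs * 86400 / secs_per_plot

current = plots_per_day(6, 34500)     # ~15.0 plots/day at 6 jobs
# To match that with only 5 jobs, each plot must finish in at most:
break_even = 5 * 86400 / current      # 28750 s
print(round(current, 1), round(break_even))  # 15.0 28750
```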
i7-3770k (Ivy Bridge), 4c/8t, 16GB RAM, 2TB Inland NVMe, 4 parallel jobs, 3 threads/job (-r 3), 150 min stagger time
chia1.log:Total time = 20769.398 seconds. CPU (121.720%) Tue Apr 13 05:06:16 2021
chia1.log:Total time = 28778.858 seconds. CPU (115.270%) Tue Apr 13 13:15:36 2021
chia1.log:Total time = 29835.072 seconds. CPU (114.370%) Tue Apr 13 21:42:22 2021
chia1.log:Total time = 29962.063 seconds. CPU (114.740%) Wed Apr 14 06:11:21 2021
chia2.log:Total time = 25651.270 seconds. CPU (113.760%) Tue Apr 13 08:57:38 2021
chia2.log:Total time = 30274.674 seconds. CPU (113.190%) Tue Apr 13 17:31:33 2021
chia2.log:Total time = 30302.744 seconds. CPU (113.060%) Wed Apr 14 02:06:08 2021
chia2.log:Total time = 29911.308 seconds. CPU (113.520%) Wed Apr 14 10:34:17 2021
chia3.log:Total time = 28619.745 seconds. CPU (114.830%) Tue Apr 13 12:17:06 2021
chia3.log:Total time = 29451.964 seconds. CPU (114.470%) Tue Apr 13 20:37:23 2021
chia3.log:Total time = 30256.464 seconds. CPU (114.120%) Wed Apr 14 05:11:10 2021
chia4.log:Total time = 29967.836 seconds. CPU (114.420%) Tue Apr 13 15:09:35 2021
chia4.log:Total time = 29936.327 seconds. CPU (114.190%) Tue Apr 13 23:37:59 2021
chia4.log:Total time = 30099.944 seconds. CPU (113.880%) Wed Apr 14 08:09:23 2021
HPE DL360 G10, 2 x Xeon Gold 6132 (14 cores each), 128GB RAM, 2 x 12TB NVMe (PCIe 3) RAID0
I do all my plotting via the command line in Windows 10. I run unregistered Windows 10 Pro for now on all my plotting boxes, and I’m gonna push it and see how long I can continue to do that.
I have one main machine that farms, my home theater PC, it does a tiny bit of plotting but I will cut that out over time to prevent too much drive wear.
Everything hums along as far as plotting goes; I need to figure out better automated harvesting, though! Longer term I am moving that recommended old supermicro 45 drive JBOD into a local datacenter in Berkeley, and I’m figuring that out as I go… I don’t want a ton of drives at my house forever. Stuff for another topic, when you move to hosting stuff in racks at a datacenter!!
I tried changing my parallelism a bit. I started plotting directly onto 2 HDDs, and reduced parallelism on one of the 980 Pros, and one of the SN750s. To my surprise, my plots per day were unchanged. I am still getting ~31.5 plots per day; even things that were unchanged (e.g. the remaining 980 Pro at 3 parallel) were slower.
Surprisingly, 32 plots per day is also what I get with my Ryzen 5800X system, with 2x2TB Gigabyte Aorus Gen4 NVMes + a Samsung 970 Pro 1TB, and various degrees of HDD plotting (from 0 to 3). This is despite my 5800X having four fewer cores than the 3950X.
(I have more or less tried EVERY combination and permutation of NVMes, SSDs, HDDs, parallelism, etc… and I get 31-32 plots per day on both AM4 systems, as long as I have enough NVMes to run enough threads.)
This makes me wonder: is the AM4 platform bottlenecking at 32 plots per day? Perhaps the DRAM channel bandwidth? Perhaps the X570 chipset can’t make use of multiple NVMes and is I/O bound?
I would love to get more feedback / experiences on whether you can break the 32 plots per day mark on AM4, because all of our time comparisons are irrelevant if there is a ‘hidden’ bottleneck.
Jeff, it’s always good to see the “little guys” getting into Chia. Not sure if you’ve hopped into the official Keybase yet, but we’ve been keeping a system performance comparison dataset since the late betas. You can find it here: chia plotting performance.xlsx - Google Sheets
Boba, are you running optimally tuned DRAM on your Ryzen rigs? I do know, at least on the gaming side, that you can get substantial increases on 1st/2nd gen Ryzen from manually tuning DRAM.
You could look into Ryzen DRAM Calculator and guides relating to it if you want to see if it’ll speed things up for you. I’m just getting started plotting and my system is fairly well tuned so I’ll try and find out what my results are for comparison. R5 3600.
plot 1/9
Total time = 26235.440 seconds. CPU (133.800%) Mon Apr 19 00:15:55 2021
Total time = 25311.904 seconds. CPU (134.070%) Mon Apr 19 07:29:56 2021
Total time = 27298.254 seconds. CPU (133.380%) Mon Apr 19 15:18:03 2021
Total time = 26961.730 seconds. CPU (133.060%) Sun Apr 18 16:47:25 2021
plot 2/9
Total time = 27156.522 seconds. CPU (132.680%) Sun Apr 18 17:17:28 2021
Total time = 25777.321 seconds. CPU (134.060%) Mon Apr 19 00:38:44 2021
Total time = 25110.607 seconds. CPU (134.370%) Mon Apr 19 07:45:11 2021
Total time = 27209.414 seconds. CPU (133.240%) Mon Apr 19 15:26:49 2021
plot 3/9
Total time = 27316.069 seconds. CPU (132.560%) Sun Apr 18 17:44:33 2021
Total time = 25832.624 seconds. CPU (134.910%) Mon Apr 19 01:06:47 2021
Total time = 25309.082 seconds. CPU (134.570%) Mon Apr 19 08:21:00 2021
Total time = 27020.246 seconds. CPU (133.240%) Mon Apr 19 16:04:43 2021
plot 4/9
Total time = 28002.689 seconds. CPU (129.530%) Sun Apr 18 17:53:54 2021
Total time = 25737.511 seconds. CPU (134.330%) Mon Apr 19 01:15:50 2021
Total time = 25580.768 seconds. CPU (134.050%) Mon Apr 19 08:30:40 2021
Total time = 27368.038 seconds. CPU (132.010%) Mon Apr 19 16:15:45 2021
plot 5/9
Total time = 27147.972 seconds. CPU (129.890%) Sun Apr 18 14:11:40 2021
Total time = 26867.865 seconds. CPU (131.800%) Sun Apr 18 21:50:01 2021
Total time = 25822.351 seconds. CPU (131.280%) Mon Apr 19 05:12:20 2021
Total time = 27687.093 seconds. CPU (128.110%) Mon Apr 19 13:06:38 2021
plot 6/9
Total time = 25964.157 seconds. CPU (128.560%) Sun Apr 18 21:23:45 2021
Total time = 25622.896 seconds. CPU (131.940%) Mon Apr 19 04:38:35 2021
Total time = 27269.767 seconds. CPU (127.520%) Mon Apr 19 12:21:09 2021
plot 7/9
Total time = 27729.357 seconds. CPU (130.480%) Sun Apr 18 18:27:53 2021
Total time = 25364.418 seconds. CPU (133.250%) Mon Apr 19 01:42:34 2021
Total time = 25869.598 seconds. CPU (133.490%) Mon Apr 19 09:06:23 2021
Total time = 27604.056 seconds. CPU (132.370%) Mon Apr 19 17:00:17 2021
plot 8/9
Total time = 27787.066 seconds. CPU (130.760%) Sun Apr 18 18:20:39 2021
Total time = 25767.090 seconds. CPU (133.790%) Mon Apr 19 01:42:41 2021
Total time = 25605.343 seconds. CPU (133.970%) Mon Apr 19 08:59:51 2021
Total time = 27546.598 seconds. CPU (132.560%) Mon Apr 19 16:47:40 2021
plot 9/9
Total time = 26269.964 seconds. CPU (132.280%) Sun Apr 18 22:05:44 2021
Total time = 23608.046 seconds. CPU (130.870%) Mon Apr 19 04:47:04 2021
Total time = 25770.912 seconds. CPU (126.490%) Mon Apr 19 12:04:49 2021
So pretty much 7.5 hours per plot or 27k seconds. That maths out to 28.8 plots per day I guess?
I figure there are 16 cores / 32 threads, but remember a hyperthread isn’t a “real” core, more like half a core, so the effective count is somewhere between 16 and 32. Taking the optimistic 32 threads at 4 threads per plot, 32 / 4 = 8, so 9 simultaneous plots seems about right to me.
(edit: I am changing this to 12 plots, since I think I am under-utilizing the CPU here. I made the change tonight, I’ll post the results tomorrow to see what the change is, going from 9 simultaneous to 12.)
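Whether 12 beats 9 comes down to the same throughput arithmetic; a hedged sketch (assuming, optimistically, that per-plot time doesn’t blow up with more parallel jobs):

```python
def plots_per_day(jobs, secs_per_plot):
    """Steady-state throughput with `jobs` parallel plots."""
    return jobs * 86400 / secs_per_plot

print(plots_per_day(9, 27000))   # 28.8, matching the estimate above
# 12 jobs is a win only if per-plot time stays under 36000 s (10 h),
# since 12 * 86400 / 36000 is the same 28.8 plots/day break-even:
print(plots_per_day(12, 36000))  # 28.8
```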
Weird tangential memory issue:
My memory on my Ryzen is also on the X570 set to “auto” and it’s reporting only 2400MHz (?). I bought this system from CyberpowerPC and specified 3200MHz memory…
MEMORY: 64GB (16GBx4) DDR4/3200MHz Dual Channel Memory [+480] (Performance Memory by Major Brands)
I tried setting it to 3200MHz in the BIOS (it is supposedly rated for that) and… bluescreen. Then I tried 3000MHz. Bluescreen. 2800MHz… runs for a bit, then I did a quick prime95 test… bluescreen. 2600MHz… ditto. I left it on auto because whoo boy, I don’t need that kinda stress in my life!
20650.627 seconds. CPU (121.690%) Tue Apr 20 01:26:40 2021
20917.100 seconds. CPU (125.180%) Tue Apr 20 03:23:38 2021
Notes: It would appear my worst time is still 15 minutes faster than any Ryzen CPU listed in this thread. I’m using an ADATA Swordfish for the SSD so nothing special there.
I suspect the difference is most likely explained either by the effort others have put into tuning Ryzen’s memory system, or by the I/O drives being the limiting factor. I already know that my particular Ryzen CPU is a low-quality sample unsuited to overclocking the CPU cores, so I don’t have an advantage there.
Additionally, the PC is not only being used for Chia: it is also running an Ethereum miner and a Storj node concurrently with the Chia plotting. Those may further limit my plotting potential, although that seems somewhat unlikely, as the miner uses virtually no CPU and the Storj node’s disk access is infrequent and small.
You need to enable XMP profiles on the motherboard. There should be an easy, one-click setting for this. On ASUS motherboards, they call it “DOCP”.
All RAM, even sticks rated for higher speeds, runs at stock DDR4 speed and timings unless you enable XMP.
XMP does more than change the speed: it also raises the voltage from 1.2V to 1.35V, and may adjust some memory sub-timings. This is why your manual attempts to change just the memory speed did not work. XMP settings are safe to run.
You will likely see 10-20% faster speeds with XMP enabled, as Ryzen’s architecture is very sensitive to memory latency. Its chiplets communicate over Infinity Fabric, which runs at the speed of the DDR4 memory; so going from 2400MHz to 3200MHz gives a 33% boost to Ryzen’s inter-core communication bus.
Yeah, I tried that too – sadly, bluescreen. I emailed cyberpower tech support with all the details and cpu-z screenshots of the memory settings. I’ll take that to a different topic since it’s kinda off topic here. I also don’t think memory speed matters very much for plotting, compared to single-thread CPU speed (matters a LOT!) and disk speed (also matters a LOT, but single-thread CPU speed seems to matter more).
Yep, 2 is all I can do simultaneously given the size of my SSD, but I did include that as the alternate explanation in case memory isn’t the reason; unfortunately I can’t test it myself.
Dude!! So we’ve determined that plotting might be a better memtest than memtest! I discovered the same issue with my rig. I didn’t realize the XMP setting on my ASUS motherboard was over-clocking my RAM, and the issue was so subtle it took me a week to figure out. In my case 1 out of every 10 plots would just stall and never complete. No reboots, no crazy instability, just an occasional plot that would never complete. Dropped the RAM settings to something more conservative, “under-clocked”, and bam: memtest good, every plot every time!!
Also side note, my 3900x purrs at 11 parallel plots so you should definitely be able to bump it up a few. Gunna need a bit more RAM though. With 128GB I’ve seen usage peak around ~74GB
Today I ran a few sessions of 5 parallel plots on my Mac Mini M1 (16GB RAM). I’m plotting onto a Sabrent 2TB Rocket via an ORICO Mini Thunderbolt 3 NVMe SSD external enclosure adapter.
I actually got paranoid about internal SSD swap wear and tear on the M1 (this may actually be a problem even when not plotting; my wear numbers on a brand new machine were shooting up), so I installed the OS onto an external 1TB NVMe in a USB-C enclosure. That enclosure is much slower than the Thunderbolt one, but I don’t believe OS disk access is much of a limiting factor in plotting (though if the swap is being hit a lot… maybe?)
I’m using all default options and I’m not staggering (which I’m sure would improve things).
Total time = 23000 seconds, +/- about 600 seconds. Here is an example of the timings:
Phase 1 Time = 12841.697
Phase 2 Time = 3061.241
Phase 3 Time = 6372.12
Phase 4 Time = 414.771
Total Time = 22689.829
I’ve since landed on 4 at once, staggered with plotman so that no more than two plots are in phase 1 at any time. I’m getting 20 plots per day this way. I was running the default RAM setting (3389 MiB) and it was mostly fine, but I sometimes got the OS warning about lack of application memory. To troubleshoot that I reduced it to 3072, without an overall reduction in throughput. I’ll keep adjusting it down to find the point at which it does reduce speed, then leave it a bit above that.
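For reference, the “no more than two plots in phase 1” behavior comes from plotman’s stagger settings. A sketch of the relevant scheduling fragment (values are illustrative guesses, not the actual config used here):

```yaml
# Hypothetical plotman.yaml fragment: start a new plot only while fewer
# than 2 jobs are still before phase 2:1, i.e. cap phase-1 concurrency at 2.
scheduling:
  tmpdir_stagger_phase_major: 2
  tmpdir_stagger_phase_minor: 1
  tmpdir_stagger_phase_limit: 2
  tmpdir_max_jobs: 4
  global_max_jobs: 4
  global_stagger_m: 30
  polling_time_s: 20
```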
I just learned my wife’s unused Surface Laptop 3 (she still uses her older MacBook for teaching) is pretty badass: i7, 16GB RAM. I’m running 1 plot overnight on my external 1TB NVMe drive (unfortunately only a 10Gbps enclosure); we’ll see what the times are like, but after that I think I’ll be able to run 2-3 plots at once on this!