Switching from the default plotter to Madmax brought my plotting down from 9 hours each to 27 minutes and I am super happy with that. I have an i9 7940X and a Plextor M9Pe 1TB.
I’m wondering how much I’m leaving on the table, though. What kind of plotting rates are y’all getting with ≥16 real cores and RAM disks? Spending even just $1500-$2000 on a dedicated plotting rig looks on paper like I could do much better than 27 minutes, but I’m curious to hear from others about real-world experiences. My ROI timeframe is >5 years, so let’s just call it a hobby.
Best value for money is still older dual Xeon systems. Those server-type systems tend to have a lot of RAM on board, or at least a lot of room for RAM.
I have a Dual Xeon E5-2680v2 so 20 cores, 40 threads total. 256GB DDR3.
Using a 110GB ramdisk as temp 2 and NVMe as temp 1, I get ~25-minute plots. (Linux)
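For anyone wanting to try that layout, a minimal sketch on Linux looks like this. The paths, thread count, and keys are placeholders, not the poster's exact commands:

```shell
# Create a 110G tmpfs ramdisk for MadMax's tmp2 (needs ~110G of free RAM)
sudo mkdir -p /mnt/ram
sudo mount -t tmpfs -o size=110G tmpfs /mnt/ram

# NVMe as tmp1 (-t), ramdisk as tmp2 (-2), final dir on the farm drive (-d);
# -r sets the thread count
./chia_plot -r 40 -t /mnt/nvme/ -2 /mnt/ram/ -d /mnt/farm/ \
    -f <farmer_public_key> -p <pool_public_key>
```

Note tmpfs contents are lost on reboot, so any plot interrupted mid-phase has to be restarted.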
My 5900X was doing about 29-minute plots using 2x WD Black SN750 1TB NVMe as temp drives.
I think in order to get a significant improvement beyond 27 minutes per plot you need to spend a ton of money on a system. Economically it makes more sense to just get a second plotter that can do ±30 minutes if you want to plot faster.
P.S. If you search around on the forum you will find a lot of topics on plot times and systems for Madmax.
27 minutes is pretty darn good! I wish I had those time frames. You are really not going to gain much more time without spending stupid money. JonMichael put out some videos on this. I would not change a thing; enjoy the ride versus most of us, who were taking hours. LOL Good luck.
Spending money to build a super-fast plotter is akin to paying for overnight shipping. It’s an investment with little, if any, long-term value, and one that will be hard to recoup, especially since plotting is no longer a race (and hasn’t been since last summer). Unless you have a few PBs of space to fill, repurposing cheap, old servers is the way to go.
Chia is an investment, and plotting is a cost center that impacts the return, so I measure it as cost-per-plot.
Case in point… I plotted 2200 plots last year using high-end hardware when Chia was insanely overvalued. When I finished plotting, those machines were repurposed for use in a compute cluster. I decided to add more plots this winter, but my old plotters are doing other things now. Since it’s no longer a race, I decided to use an old SuperMicro from 2012 I had lying around. It has dual Opteron 6274 16-core CPUs. For someone looking to purchase one, they can be found in the $200-400 range. The server already had 64GB of memory, so I only had to buy a few more sticks; those are plentiful on eBay, and I found them for $9 each with free shipping. I also purchased a pair of 4x PCIe NVMe carriers and a pair of 1TB NVMe drives, plus heatsinks to put on them.
2 NVMe PCIe cards: $36
2 NVMe heatsinks: $24
2x 1TB WD Black NVMe: $230
Total cost: $380
It runs MadMax with RAM plotting (tmp1) and the NVMe’s in a RAID0 stripe (tmp2). No parallel plotting is used. MadMax is configured to use the same location for the final plot dir, which eliminates the overhead of transferring plots to a separate drive and lets the next plot start immediately (I use a script to transfer the files afterwards).
Plot times range from 48-52 minutes, or about 30 plots per day. That’s half the speed of my old setup and certainly won’t win any plotting races. However, with an investment of less than $400 and a plan to add at least 1000 plots with this rig, my cost per plot is around $0.38 if I stop at 1000, and it only goes down if I decide to go beyond that.
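The arithmetic behind those figures can be sketched as follows (all numbers taken from the post above; the 48-minute plot time is the fast end of the quoted range):

```python
# Cost-per-plot for the repurposed server build described above
hardware_cost = 380.0    # NVMe cards + heatsinks + drives (USD)
planned_plots = 1000     # plots planned on this rig

cost_per_plot = hardware_cost / planned_plots
print(f"${cost_per_plot:.2f} per plot")    # $0.38 per plot

# Throughput check: back-to-back plots at the fast end of 48-52 min
minutes_per_plot = 48
plots_per_day = 24 * 60 // minutes_per_plot
print(plots_per_day, "plots/day")          # 30 plots/day
```

Plotting more than 1000 plots amortizes the same fixed cost further, which is why the cost per plot only goes down.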
I bought old i7 4770 PCs for around $80 each, added a 500GB NVMe for $50-60, and an adapter card for <$10 from Amazon.
So one of these plotters costs ~$150. It can do a plot in under 110 minutes. I add the -w option in MadMax; including copy time it is 2 hours per plot, so one day’s output is 12 plots.
I can have four of these running at the same time: 48 plots a day. That’s $600 of plotting hardware effectively producing a plot every 30 minutes.
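The effective rate from running several cheap plotters in parallel works out like this (a sketch using the numbers above):

```python
# Four ~$150 plotters, each producing one plot every 2 hours (incl. copy)
machines = 4
hours_per_plot = 2

plots_per_day = machines * (24 // hours_per_plot)
minutes_per_plot_effective = 24 * 60 / plots_per_day
print(plots_per_day, minutes_per_plot_effective)    # 48 30.0
```

Scaling out with slow, cheap machines matches one fast plotter's throughput at a fraction of the cost, as long as you have the space and power for them.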
bladebit: 14 mins per plot on average, and it might go lower with Debian, which @RobbieL811 tested. My MadMax machine does about 33 mins per 2 plots with a faster CPU, so bladebit is definitely faster and takes full advantage of the hardware, better than MadMax can.
I was desperate for an upgrade this year, as I had been on an i7 3930K + ASUS Rampage IV Extreme since 2012. It was a very good workhorse, but over the past year I found myself modding the BIOS just to get basic stuff working, like booting from an NVMe add-on card. I wanted to stick with Intel but have been disappointed with them over the past few years, so I went with AMD this time around; even though they disappointed me in the Bulldozer era, I could not ignore the reviews.
Just around the time Chia went mainnet, I was able to get a 5950X + ASUS X570 ROG Crosshair VIII Dark Hero at my “local” Microcenter and was blown away by the performance. I started plotting on a Samsung 970 Evo Plus but was concerned it would not last long, so I got some PNY CS3030 drives (before they silently changed the TBW) and used them in bifurcation via an add-on card, since this motherboard supports it. This was OK, but as soon as MadMax came out I maxed out the RAM and saw it spit out plots at ~26 min in Win 10. Then I found out that Linux performs much better with MadMax, so I installed Pop!_OS in dual boot; on Linux I’m still doing ~21 min per k32 plot. Since then, MadMax has added support for k33 & k34, and it has much better performance with these as well compared to the original plotter; you can see my stats for these in the post below.

All in all, I’m very happy with my upgrade, and the fact that I was able to reuse my older GPU saved me money and kept this upgrade under $2k. I considered Threadripper but did not want a space heater warming up my room and did not need all of the PCIe lanes; plus, it was cost-prohibitive for my needs. From what I’ve seen in other posts, TR does not do much better with MadMax than my current CPU, so I think I made a good decision.
I have a few Crucial P5s and you are right: they have no cache RAM, which is one of the main reasons they’re slow. The WD Blacks have a decent cache; I believe I read somewhere that it’s 256MB of cache for every 512GB of storage.
Cache doesn’t matter at all for plotting NVMe! You will fill it within a few minutes, and then it’s useless.
The metric you want to be looking at is sustained write speed.
You won’t find it on spec sheets, but you can find it in reviews; Tom’s Hardware usually includes it in their SSD reviews.
Better yet, just search this forum for confirmed good plotting SSDs.
I can confirm the WD Black SN750 myself to be good.
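One way to measure sustained write speed yourself is with fio. This is a generic sketch, not a tuned benchmark; the mount point is a placeholder, and 200G is an assumption sized to exhaust the SLC cache on a typical 1TB drive so the steady-state speed shows through:

```shell
# Sequential write, 1MiB blocks, 200GiB total, bypassing the page cache.
# WARNING: this writes a large file -- point --filename at a scratch mount.
fio --name=sustained-write --rw=write --bs=1M --size=200G \
    --ioengine=libaio --iodepth=32 --direct=1 \
    --filename=/mnt/nvme/fio-testfile
rm /mnt/nvme/fio-testfile
```

Watch the bandwidth line in fio's live output: it typically starts at the drive's headline speed and then drops to the sustained TLC/QLC rate once the cache fills, which is the number that matters for plotting.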
I apologize, but that is bad advice. Caching schemes are exactly how NVMe devices manage high throughput. Let’s look more closely at the 1TB version of the SN850. While this is at heart a TLC device, it uses a hybrid storage/caching model: it has a 12GB static SLC cache for handling immediate writes, as well as an additional ~300GB of dynamic SLC cache, which has faster write speeds than TLC. This is exactly why the SN850 has really good write performance when plotting.