Ordered new Plotter Build - $5168 (UPDATED)

Nice looking build, but I do want to warn you: a lot of threads don’t frame the importance of the SSD/NVMe correctly. They focus on manufacturer-stated TBW numbers and tend to point towards high-TBW drives.

While that isn’t inherently wrong, it also doesn’t paint the full picture in my opinion. TBW on its own is a good number to know, but without knowing what kind of workload it applies to, it’s basically meaningless… I know that’s a bold statement, especially in this community!
It comes down to the Write Amplification Factor (the amount of data a drive actually writes to NAND cells to complete a host write) that was assumed when calculating that number, and manufacturers generally don’t state what was used. So in practice, under a “chia” workload, a 600TBW drive could actually survive longer than a 1700TBW drive; without writing 200TB or so to each drive under that exact workload, we won’t actually know.
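To make the WAF point concrete, here is a tiny sketch with made-up numbers (the NAND write budget and WAF values are illustrative assumptions, not any vendor’s real figures): the same physical NAND translates into very different host-visible TBW ratings depending on the WAF assumed.

```shell
# Illustrative numbers only: a drive's NAND can absorb a roughly fixed
# amount of writes; the host-visible TBW rating depends on the Write
# Amplification Factor (WAF) the vendor assumed.
nand_budget_tb=1800   # hypothetical raw NAND write budget, in TB
for waf in 1 2 3; do
  echo "assumed WAF=$waf -> effective host TBW: $((nand_budget_tb / waf))TBW"
done
```

So a “600TBW” rating could simply be an “1800TB of NAND writes at WAF 3” assumption, and a write-heavy sequential workload like plotting may see a very different WAF than whatever was assumed.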

With that said, it’s still not a bad number to look at, but for your build another property of the NVMe drive will be more important, and that’s its sustained write capability. As can be seen in this graph from Tom’s Hardware,

the Seagate FireCuda 520 1TB has an SLC write cache (all TLC drives do) but then drops off to quite a low level of around 500MB/s.

What this means in practice is that once you load the drive with more than, say, 2 or maybe 3 plots in parallel, performance will tank, since the drive no longer has time to recover. It will try to service your read and write requests while also managing its background processes, basically grinding the whole setup to a halt. This is where plot times of “12 hours” and such come from.

Put differently, a 970 Evo Plus 1TB will be able to handle 3x as many plots, or execute the same number of plots (if there were no CPU limit) 3 times faster. With a good (non-limited) setup, completing plots in under 4 hours is certainly possible, and with a few in parallel, 4 to 5 hours is reasonable. The selected NVMe drives will not be able to sustain that, and I’m afraid this will limit/cripple your setup more than you’d expect, given the plentiful CPU and memory available.

Now I’m not saying I don’t like the drives; I actually like the Seagate brand for most things, and they do advertise their SSDs on durability and such, but performance-wise I believe you will be disappointed. If part of the goal is to plot as fast as the hardware can handle, consider adding another pair of 1TB NVMe SSDs and running them in RAID0 with MDADM, with XFS on top (in my own testing, XFS beat EXT4 and heavily tuned ZFS). RAID0 helps with burst behavior, and you basically get double the cache. There are little $15 M.2-to-PCIe riser boards with heatsinks you can buy, so you can easily plug in 2 extra NVMe drives. With that, I believe you will be able to max out your processor and plot as fast as it allows. Without it, you’ll be limited to at or below half of what this processor is capable of if you want decent plotting speeds.
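For reference, a minimal sketch of that MDADM RAID0 + XFS setup. The device names (`/dev/nvme1n1`, `/dev/nvme2n1`) and the mount point are placeholders and will differ on your system; run as root, and note this destroys anything on those drives.

```shell
# Stripe two NVMe drives into a single RAID0 array (device names are
# examples; check lsblk for yours).
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1

# Put XFS on the array and mount it with discard enabled so freed
# blocks get TRIMmed continuously.
mkfs.xfs /dev/md0
mkdir -p /mnt/plotting
mount -o discard /dev/md0 /mnt/plotting
```

The `discard` mount option matters for this workload because plotting constantly creates and deletes huge temp files; without TRIM the drive’s garbage collection falls behind.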

In the end, plots per 24 hours is the only number that counts. On a 5900X, for instance, you should be able to achieve ~50 plots per day if you tune everything right and then run into the CPU bottleneck, but that requires very high sustained NVMe/SSD performance.
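As a back-of-envelope check on that ~50/day figure (the parallel count and per-plot time here are illustrative assumptions, not measurements from any particular rig):

```shell
# plots per day = parallel plots * 24h / hours per plot
# Example numbers only; a well-tuned 5900X setup might run something
# in this ballpark.
parallel=12
hours_per_plot=6
echo "plots/day ~ $((parallel * 24 / hours_per_plot))"
```

The point being that you only hit numbers like this if the drives can actually sustain that many parallel plots without falling off the SLC-cache cliff.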

14 Likes

HEY MAN! welcome aboard! :ship:

Chia’s become a sensation, like a legit boy-band one. Your build looks well researched and thoughtful; my only caution is that power supply. I just built 8 “mega” plotters from scratch for a client this weekend, and the only power supply brand I’m trusting going forward is EVGA. I’d sub what you ordered for something of this caliber:
https://www.newegg.com/evga-supernova-850-gt-220-gt-0850-y1-850w/p/N82E16817438199

What OS are you considering running? Note that anything other than Ubuntu 20 Server will have performance implications! Windows will give you about a 20% drop in plotting performance.

Agreed, a well-tuned (enable discard in the mount options!) MDADM RAID0 with XFS on it also performs best in my setups vs. 2x single NVMe, because bursts can be much higher and you double the DRAM and SLC cache.

1 Like

Thank you SO MUCH for such a detailed response. I had no idea the falloff on the Seagate was so steep. I have a Samsung 980 Pro in my primary desktop that I love quite a bit, so I was already a fan, but this makes me even more of one.

I see your point about blindly following TBW; certainly caching/write amplification will play a big role that I was ignoring here.

UPDATE - just ordered 2x 980 Pros and requested cancellation on the FireCudas. Really appreciate the timely and detailed heads-up to get the build on point, man… seriously means a lot.

If part of the build is wanting to plot as fast as the hardware can handle, consider adding another pair of 1TB NVMe SSDs and running these in RAID0 with MDADM with XFS on top of them (personally tested XFS wins from EXT4 and heavily tuned ZFS).

Is the favored way to do this the ASUS NVMe PCIe RAID card I see mentioned a bunch (and which is currently sold out), or are there other ways people like to set this up from a hardware perspective?

The mobo has 2 slots which I’ll use for now.

Thanks for the tip on XFS!

2 Likes

vandy, brother I know your posts WELL - your cataloging of the builds was one of the biggest enablers/inspirations for me.

I’m already an EVGA fan (good PSU experience in my current desktop), so seeing you back this one made it a no-brainer: I literally shot over to Amazon, requested a return on the unopened 650, and next-day ordered the 220-GT-0850-Y1 as specified.

I’ve seen your builds - if you are saying “X” is the right PSU, then it’s the right PSU :slight_smile:

Really appreciate you weighing in here with the assist - means a lot dude.

1 Like

Sure, glad I could help. I think the 980 Pro 1TB will do A LOT better than the FireCuda for this purpose, keep that CPU and memory fed, and get you fast plots. :slight_smile: On the PCIe cards: since you aren’t using any PCIe slots, you have plenty of bandwidth and room available there, so get 2 of these:
image
Although, as said, I believe those 2x 980 Pro 1TB will serve you just fine already.

REVISED/UPDATE
And on the TBW: this is a Corsair Force MP510 “B” version, performing well but rated for only 600TBW. Yet look at the numbers:
image
So 86TB written is 4%? Then linearly that would work out to 2150TBW for this kind of workload. But that’s speculating a bit, since I’m not 100% sure of those values yet, and I also don’t know whether “percentage used” is linear.
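The extrapolation above is just this (and carries the same caveat: it assumes the SMART “percentage used” counter really is linear):

```shell
# Linear projection from SMART data: 86TB written consumed 4% of rated
# life, so projected endurance = 86 / 0.04 = 2150TBW.
written_tb=86
pct_used=4
echo "projected endurance: $((written_tb * 100 / pct_used))TBW"
```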

In the end, let’s see how long this stuff lasts. I feel that if you are building a plotter you are going to run it until it breaks anyway. :wink:

p.s. Last thing: get some heatsinks for those drives and a bit of airflow running over them! They can handle sustained writes, but only if properly cooled (which isn’t a normal desktop load).

5 Likes

Damn… that screenshot illustrates your “TBW isn’t everything” point pretty vividly. If the wear keeps trending linearly, that will be VERY interesting.

Good tip on heatsinks and cooling. I’ll need to take that more seriously.

2 Likes

Nice, I was looking for a graph like that, but somehow I couldn’t find much on sustained write speeds.
This is very helpful.
Do you happen to know any other sources that compare SSDs like this?

In my first build I was too focused on the headline numbers of PCIe Gen 4, but it really doesn’t seem to matter much whether you use Gen 3 or 4; the sustained speed is internal to the drive, regardless of the interface.

Tom’s Hardware generally includes these graphs, and TechPowerUp does as well; here is the 980 Pro 1TB again:

5 Likes

This actually made me feel better, because I’m running them with a CPU that doesn’t support Gen 4 speeds… so it looks like I’ll mostly be hovering around 75% of Gen 3 speeds.

Thx for the heads up on Tom’s providing this typically.

1 Like

If you are not running Gen 4, the 970 Pro might be a better option. It’s lower speed overall than the 980 Pro, but it’s pure MLC, so no cliff. In fact, even on a Gen 4 motherboard, if I were buying consumer drives purely to plot, I’d get the 970 Pro.

5 Likes

HDDs sold in the USA are cheaper than those in the country of manufacture!
They’re shipped across the world, and it’s funny that they’re still more than 20% cheaper, with a lot more options to choose from.

Taxes, province-to-province tariffs, local channel supply pressure, competitors, customer acquisition costs, customer retention costs, warranties, strength of consumer protection laws, and also, plain human greed. Every little piece contributes to the overall price in your geographic region.

1 Like

I built another plotting rig using external SSDs. Based on my reading, it should be able to sustain similar speeds over USB 3.2 Gen 2x2.
The good thing is there’s no TBW clause in the warranty terms! And it’s a 5-year warranty.

2 Likes

Interesting points. Do you have some runtime on a few 970 Pros to see if the wear is OK? (to @Quindor 's point above)

10 cores is a tough sell for me when you can get 16 out of a 5950X… This is also 10th gen, not the fancier 11th gen. I think AMD basically owns this part of the market, since cores are everything in Chia plotting.

1 Like

I saw your post from a day or two ago to this effect, and it actually settled it in my mind: “Jeff’s right, imma hook up the AMD build”… and then I TOTALLY lost steam when I saw $1100 for the CPU vs $350 for the 10850K.

I even checked the older Zen 2 9-series for some big-core-count builds, and same thing: $$$$.

Now on the bright side, if Chia farming isn’t a bust and this ends up working out - next build will be a threadripper because we’ll all be rich :smiley:

2 Likes

Oh, that’s pretty easy actually. TL;DR: the 970 Pro is an MLC drive, so in theory, at the same capacity, it should always last longer than an equivalent TLC-equipped drive.

Now there are improvements in the process, the flash chips, the intelligence of the controller, etc., but even with all that, the per-cell durability of NAND has been dropping over the years instead of increasing, because we’ve gone from SLC → MLC → TLC (Triple Level Cell). It’s only because we now have 2TB drives instead of 256GB that the overall figures for the drives are still OK.

Durability ÷2 but ×2 NAND cells = unchanged drive durability, although actual durability per cell is lower. :stuck_out_tongue: So big drives are still OK, but smaller drives have really suffered in this regard. Still, for desktop usage this is all perfectly fine; no sane user writes 600TB in 5 or even 10 years of desktop use.
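That trade-off in made-up relative units (these are not real drive specs, just the shape of the argument):

```shell
# Per-cell write cycles halve between generations, but cell count
# doubles, so drive-level endurance comes out roughly even.
old_cycles=6000; old_cells=1   # older, smaller drive (relative units)
new_cycles=3000; new_cells=2   # newer: half the cycles, twice the NAND
echo "old gen: $((old_cycles * old_cells)), new gen: $((new_cycles * new_cells))"
```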

Anyway, so yeah, a 1TB MLC SSD should in theory always have better endurance than a 1TB TLC SSD. And agreed, there’s also less of a write hole from an SLC cache filling up. The 980 Pro 1TB still does quite well when its cache is full, though, about the same as the 970 Pro does normally. The 980 Pro can just achieve much higher bursts (which is helpful with Chia).

In my country the 970 Pro 1TB is significantly more expensive than the 980 Pro 1TB, hence I went with 2x 980 Pro 1TB in R0.

3 Likes

So far, I’ve opted to have my fstrim cron job run hourly (rather than weekly, as you would for most workloads) instead of enabling continuous discards.

My gut tells me this ought to result in better overall performance, since I’m not going to touch every block within an hour, and conventional wisdom says continuous TRIM has a significant performance impact. I haven’t actually compared the two approaches head-to-head, though.
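For reference, the hourly schedule described above might look like this as a root cron entry (the mount point is a placeholder for your actual plot filesystem):

```shell
# In root's crontab (crontab -e): run fstrim at the top of every hour
# against the plotting mount instead of mounting with -o discard.
0 * * * * /usr/sbin/fstrim /mnt/plotting
```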

Would you expect continuous TRIM to be faster than hourly TRIM?

Yeah, pricing and supply are a bit of a problem… also with GPUs, but we don’t care about GPUs for this purpose!

1 Like