Enterprise-Grade PCIe NVMe SSDs

Am I the only one using enterprise-grade PCIe SSDs for plotting? My Samsung 970 EVO Plus lost 34% of its life in a month, so I switched to some enterprise gear I got a good deal on through eBay.

I am using Samsung 480 GB 983 ZETs right now for plotting. They are rated for 7.5 PB TBW each.
I have a used PM1725 3.2 TB on the way, and it is rated for 29 PB TBW.

So far I have seen 0% wear on the 983 ZETs. Since they are designed for a server, I have a dedicated 120 mm fan on them.

I plan on selling my 970s into the used market to recoup some of the cost. In total, I paid under $800 for the enterprise-rated NVMes.

Comments appreciated !

Sounds good. They will show little to no appreciable wear even after plotting a lot of plots, so you could buy them second-hand and resell them afterwards: a zero-cost rental. Great.

However, a lot of us are plotting in RAM, as there is literally zero wear and it's faster. Also great.

Those who have been using SSDs have run some down to 0% remaining life and they still work just fine, but they have little to no honest resale value.

Where are you getting the SSDs?


Right now on eBay. Many are listed with 100,000 power-on hours and 98% of their TBW life left, and most have a 2,000,000-hour rated life as well. So I'm doing exactly what you said: renting them, and I will resell later if Chia is gone. If Chia is here to stay, I have 25 years of writing capability.

I scored a 3.2 TB drive for under $500, which is cheaper than a new motherboard, processor, and more RAM.

My two big rigs are i9-9900Ks, and I'm stuck at a 128 GB RAM ceiling. So I'm using RAM plotting on one but still need an SSD. And with K33, that problem will compound.

So, as they say, “there’s no better time to plant a shade tree than today.”


Yes, RAM will get less practical with K33, agreed. I am expecting a lot of warning before K33 is required, so I can replot at a steady pace.


In May I bought 12 × DC-S3700 drives used for plotting, at about 70 USD each, each with around 1/7 of its PBW consumed. I hope they will last forever.

By my estimation, six of them should be sufficient for up to 8 PB of plots at K32.
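For anyone who wants to sanity-check that kind of estimate, here is a rough back-of-envelope sketch. All drive and write figures below are my own assumptions (the post does not state drive capacity or temp-write volume), so treat the output as a ballpark only:

```python
# Back-of-envelope endurance check. All figures are assumptions,
# not facts from the post above.
PLOT_SIZE_TB = 101.4 * 2**30 / 1e12     # a K32 plot is ~101.4 GiB final
TEMP_WRITES_PER_PLOT_TB = 1.6           # assumed temp writes per K32 plot

# Assuming 800 GB DC-S3700s, rated 10 DWPD over a 5-year warranty:
drive_tbw = 0.8 * 10 * 365 * 5          # TB written per drive (~14.6 PB)
fleet_tbw = 6 * drive_tbw               # six drives in the pool

plots_needed = 8000 / PLOT_SIZE_TB      # plots in 8 PB of farm space
temp_writes = plots_needed * TEMP_WRITES_PER_PLOT_TB

print(f"plots needed:      {plots_needed:,.0f}")
print(f"total temp writes: {temp_writes / 1000:,.1f} PB")
print(f"fleet endurance:   {fleet_tbw / 1000:,.1f} PB")
```

Depending on the per-plot write figure you assume, the total lands in the same ballpark as the six-drive endurance budget, and used enterprise drives routinely survive well past their rated TBW anyway.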


I am using Intel P4600/P4500 1.8 TB SSDs; IIRC they have ~10 PBW and ~1.2 PBW lifespans respectively. After plotting about ~140 TB (ballpark estimates here), the P4600 dropped from ~75% when I received it to ~65%, and the P4500 dropped from ~99% to ~90%.

I will be done plotting for NFTs in about a week's time, so I expect these drives to have plenty of life to handle that even without using a RAM disk/cache.

Though it should be noted that, at least on Windows, it's really easy to throw spare RAM (32 GB or better; 40 GB+ is good) at PrimoCache to make a RAM cache that reduces mad max temp2 SSD wear.

If I were to do it again I would of course just plot in RAM, but I managed to get these drives from eBay a week before the big Chia boom doubled prices ($300/$220), so it was not worth spending more money on a RAM upgrade, especially since I already had 64 GB in the system for hosting VMs.

Last I saw, you could still get 32 GB 2666 MHz DDR4 desktop-grade budget RAM sticks for ~$140 each, so a full 128 GB comes out to almost as much as I paid for my enterprise SSDs.

On the other hand, with mad max's much lower temp1/temp2 space requirements when plotting sequentially, you also do not need dual 1.8 TB+ SSDs; smaller ~1 TB or less drives with a decent PBW lifespan can do fine, which also brings the cost down. So it still depends on how large a farm you plan: spend e.g. ~$600 on a 128 GB DDR4 RAM set, ~$300-400 on a set of ~1 TB enterprise SSDs with good PBW, or dig up some older server hardware that might plot plenty fast with mad max for even cheaper.

Oh, also, I just realized you were specifying PCIe NVMes; mine are all the U.2 variety with U.2 → M.2 adapters. Tbh I probably would not go for PCIe add-in-card drives, because I am using my PCIe slots for GPUs to mine ETH, for a SATA expansion card, etc.

Those are awesome. I saw them early on, but I'm on Windows and already SATA-saturated. Great TBW of 7.3 PB!

Nope :crazy_face:

4 × Kioxia CM6-V U.3, 3.2 TB, 3 DWPD. The server was meant for something different, so basically I just had them.

I am always astounded to see people plotting on consumer-grade NVMes. For fun I did test runs on my OS NVMes (980 Pro), and it's plain sick how fast they wear down: after ~20 TB plotted for testing, I am now at 136 TBW (out of 600), whereas normal operation would have put me at more like 5 TBW by now.

Whereas all the Kioxia drives (RAID 0) are at 1% wear now… quite something, out of 17.52 PB TBW.

Let's assume they were rated at 17.52 PB TBW :wink:

I am still undecided on the madmax plotter. Aside from an error my system hits, it seems my plotting was already quite optimized. My assumption is still that madmax plus classical plotting may, or probably will, increase plot speed. Basically it is about timing: filling madmax's low-CPU phases with classical plots. If madmax and other plotters aren't far more efficient computationally, the gains will be minuscule.

Bottom line: while plotting, my bottleneck is the CPU at 90%+ load, partly thanks to the enterprise-grade drives. Nothing else.


I’ve got 3 × Intel S3710s in RAID 0 with 128 GB RAM. My original 2 × 2 TB FireCuda 520s are down to just over 80%, so they still have quite a good resale value; I must get them advertised.

I should have gotten the FireCudas. My 970 EVO bit the dust at 65% life!


At least it should still be covered by its warranty. The FireCudas have fantastic TBW, but once the cache is full they slow down a lot; I only found this out quite a while after I bought them.

One of the many reasons why enterprise NVMes make sense. And at normal pricing (way back), a great enterprise NVMe was even cheaper than that consumer bs ;).

It's certainly been a learning curve, and an expensive one, but at least I have 6.25 XCH to show for it. Still a long way to go for ROI, but I'm holding. My brothers just picked up some Hitachi SAS SSDs to RAID; they have massive TBW and good performance. Prices of everything required for a good Chia setup have certainly risen a lot, and unfortunately availability in the UK is not as good as in the US, so UK prices are higher.


FYI, I set up a small 60 mm fan right on top of the NVMe drive. After a reformat and the addition of the fan, the NVMe no longer crashes and is plotting and transferring just fine. Apparently it was getting too hot!


If you are on Linux, it makes sense to monitor the SSDs the first few times they run at full load:

smartctl -a /dev/nvme0 | grep -E "Data Units|Temperature:"

This lets you check wear and temperature.
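To turn that raw counter into actual bytes: NVMe SMART reports "Data Units Written" in units of 512,000 bytes (1,000 logical blocks of 512 bytes). A minimal sketch, using a made-up sample reading:

```python
# "Data Units Written" from smartctl is in units of 512,000 bytes.
data_units_written = 250_000_000        # hypothetical sample reading
bytes_written = data_units_written * 512_000
tb_written = bytes_written / 1e12

print(f"{tb_written:.1f} TB written")   # 128.0 TB written
```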

I monitored them once installed and saw them approaching 75 °C :crazy_face: . My solution: I mounted the NVMes on 140 mm fans at the case-top intake, thereby even improving the overall air circulation inside the case.

Now, even when things get rough (RAID 0, 15 GB/s+ writes), temperatures almost can't pass 50 °C under full load…


I have an SK Hynix; I paid $150, it has a 750 TBW rating, and it is currently at 2.2 PiB written with 18% life left.


I'm on Windows, but CrystalDiskInfo showed the disk running at around 47 °C under no load. I added that fan, and under full load it was running around 52 °C. I'm letting it run all weekend and will check tomorrow night to see if it crashed out again. Thanks for the reply!


Enterprise SSDs are OK up to approximately 70 °C; past that point, the drive enters a throttling mode, reducing throughput (i.e. performance) to bring the temperature down. This can be a stepped process where the drive waits until the temperature is below something like 65 °C before restoring performance. Also, if the drive goes above something like 75-80 °C, it will shut down until the temperature drops.

Obviously lower temperatures are better :slight_smile:, but they can operate happily in the 60s.

As for the endurance of enterprise drives, they are rated by their DWPD, generally over 5 years. A 1 DWPD drive is generally regarded as suited to read-intensive workloads, 3 DWPD as mixed read/write-intensive, and anything above 3 DWPD as write-intensive.
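The standard conversion from a DWPD rating to total endurance is capacity × DWPD × 365 × warranty years. As a sketch (the example figures match the 3.2 TB, 3 DWPD CM6-V drives mentioned earlier in the thread):

```python
def dwpd_to_tbw(capacity_tb: float, dwpd: float, years: float = 5) -> float:
    """Total TB written implied by a DWPD rating over the warranty period."""
    return capacity_tb * dwpd * 365 * years

# A 3.2 TB drive at 3 DWPD over the usual 5-year warranty:
print(f"{dwpd_to_tbw(3.2, 3):.0f} TB")  # 17520 TB, i.e. ~17.5 PB TBW
```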

Enterprise drives also have a lot more over-provisioning than regular SSDs (i.e. spare NAND), which allows the drive to sustain higher performance and endurance.


I 3D-printed an SSD holder that takes up to six SSDs and has two 40 mm fans; the whole thing fits where the CD drive would normally go. Keeps them cool.