12,000 TBW drive from TeamData

Really just an FYI for folks watching hardware for their plotter setups - I’ve never owned a TeamData drive and have no firsthand experience with them.

1 Like

Hmmm, I wonder if this represents a technology improvement or simply an adjustment of the price to lengthen the warranty.

1 Like

Great question… it would seem odd that TeamData somehow figured out a new hardware/chip design that eluded Samsung/Intel up until this moment?

Maybe it’s a controller logic change + a hell of a lot of over-provisioning of chip capacity while still advertising it as a smaller drive? e.g. it’s 2TB of hardware but capped at 1TB usable space?

I’m not that familiar with TeamData - I just heard the name for the first time as I was building my newest machine, so it’s my miss if they’re a tech powerhouse and I just didn’t know.

1 Like

Good point, extreme over-provisioning could do it.

That is actually a pretty solid point for cheap plotting as well. Why not buy a cheap 2TB drive, over-provision it, and, say, do that in a RAID, and you’ve got a very solid TBW? (Not sure how over-provisioning affects TBW, so that’s the question: is it a linear 100% increase in endurance for 50% of capacity given over to over-provisioning, etc.?)
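Rough numbers on that question, purely as a sketch (the P/E cycle count and WAF values below are made-up placeholders, and real WAF gains from spare area are workload-dependent rather than linear):

```python
# Back-of-envelope only - P/E cycles and WAF values are placeholders.
# Two effects of heavy over-provisioning: (1) the raw write budget
# stays the same while the advertised capacity shrinks, so TBW per
# advertised TB rises, and (2) the extra spare area usually lowers
# the WAF, though not linearly.

def host_tbw(raw_tb: float, pe_cycles: int, waf: float) -> float:
    """Host terabytes writable before the NAND's P/E budget is exhausted."""
    return raw_tb * pe_cycles / waf

# Hypothetical 2TB of TLC NAND rated for 1000 P/E cycles:
print(host_tbw(2.0, 1000, waf=3.0))  # ~667 TBW, sold as a 2TB drive
print(host_tbw(2.0, 1000, waf=1.5))  # ~1333 TBW, sold as a 1TB drive,
                                     # IF spare area halves the WAF (assumed)
```

In this toy model the raw write budget doesn’t change when you cap the capacity; the endurance win comes from the lower WAF, plus the fact that the same budget now backs half as much advertised space.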

Actually, it’s likely down to what I’ve been trying to say for a while: TBW figures from manufacturers are calculated with “a” Write Amplification Factor. Basically, for every TB written by the host, the drive might need to manipulate 5TB of NAND cells on the SSD - that would be a WAF of 5.

(Very short and simplified: if a NAND cell is 16KB and you manipulate 4KB in that cell, it needs to be completely rewritten, giving you a WAF of 4; a 16KB write would only have a WAF of 1.)
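A toy model of that read-modify-write effect (assuming aligned writes and whole-cell rewrites, and ignoring controller tricks like write coalescing and SLC caching):

```python
import math

CELL_KB = 16  # assumed NAND cell size for this toy model

def waf(write_kb: int) -> float:
    """NAND bytes rewritten per host byte written, for an aligned write."""
    cells_touched = math.ceil(write_kb / CELL_KB)
    return (cells_touched * CELL_KB) / write_kb

print(waf(4))   # 4.0 -- a 4KB host write forces a full 16KB cell rewrite
print(waf(16))  # 1.0 -- whole-cell write, no amplification
print(waf(64))  # 1.0 -- 64KB spans four cells cleanly
```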

This already makes comparing drives from different manufacturers very hard, since they most often do not state what their TBW figure is calculated with. That means a 600TBW drive could outlast a 1600TBW drive if one is calculated with a WAF of 5 and the other with a WAF of 2. In theory, drives with the same type of memory and the same number of cells have the same write endurance; it just depends on how the manufacturer lists it.

As far as I’ve been able to tell, Chia manipulates data in 64KB blocks, which means it should be relatively WAF-friendly - much more so than if it were doing 4KB manipulations. This means our current drives should already last many times longer than most TBW figures suggest (since those assume 4KB desktop workloads).
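One way to compare ratings across vendors, then, is to back out the raw NAND write budget implied by the rated TBW and the (usually unstated) rating WAF, and re-derive host endurance under your own workload’s WAF. All the WAF values below are assumptions, since vendors rarely publish them:

```python
def raw_nand_budget_tb(rated_tbw: float, rating_waf: float) -> float:
    """Total TB of NAND writes implied by the vendor's rating."""
    return rated_tbw * rating_waf

def host_tbw(rated_tbw: float, rating_waf: float, workload_waf: float) -> float:
    """Host-writable TB under your own workload, from that same budget."""
    return raw_nand_budget_tb(rated_tbw, rating_waf) / workload_waf

# The 600 vs 1600 TBW example: assumed rating WAFs of 5 and 2 imply
# similar raw budgets (3000 vs 3200 TB of NAND writes). Under a
# plotting-friendly workload WAF of ~1.2 (assumed), both drives would
# deliver far more host writes than the lower rating suggests:
print(host_tbw(600, rating_waf=5, workload_waf=1.2))   # 2500.0
print(host_tbw(1600, rating_waf=2, workload_waf=1.2))  # ~2666.7
```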

I believe that, with them introducing a special “Chia” drive, this will certainly be part of how they get to that figure. Running a different, 4KB-style workload on it would likely cut that down a lot. This is speculation, but yeah. :slight_smile:

I believe a lot of people are focusing way too much on TBW, while steady-state drive performance is much, much more important for actually being able to reach those figures anyway. A lot of drives will burn through their SLC cache and then drop down to 500MB/sec max with bad IOPS, because internal processes are fighting for time to keep the drive alive and working. A drive with a steady-state performance of 2000MB/sec will thus be much, MUCH faster running a few plots in parallel.
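To put rough numbers on why steady state dominates (assuming ~1.8TB of temporary writes per k=32 plot, a commonly cited figure, and a purely write-bound drive - both simplifications):

```python
TEMP_WRITES_TB_PER_PLOT = 1.8  # rough figure for a k=32 plot (assumed)
MB_PER_TB = 1_000_000

def hours_per_plot(steady_state_mb_s: float, parallel_plots: int) -> float:
    """Wall-clock hours per plot when SSD write bandwidth is the bottleneck."""
    per_plot_mb_s = steady_state_mb_s / parallel_plots
    return (TEMP_WRITES_TB_PER_PLOT * MB_PER_TB / per_plot_mb_s) / 3600

# Four parallel plots: a drive that sags to 500MB/sec after its SLC
# cache is exhausted vs. one that holds 2000MB/sec steady state.
print(hours_per_plot(500, 4))   # 4.0 hours of write time per plot
print(hours_per_plot(2000, 4))  # 1.0 hour of write time per plot
```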

4 Likes

For anyone interested in the point @Quindor is making, he provided some awesome data in another thread I created here:

re: TBW isn’t everything - Ordered new Plotter Build - $5168 (UPDATED) - #12 by Quindor

re: Write behavior IS NOT linear - Ordered new Plotter Build - $5168 (UPDATED) - #7 by Quindor

2 Likes

See also

Real world testing has been quite illuminating here.

3 Likes

Appreciate you sharing this @Quindor

It would be interesting to know what the write amplification factor was in the third-party testing. The good news for Chia plotters is that the workload should be very friendly toward the drives, so we should expect noticeably longer life than the manufacturer advertises. In a few months we should start seeing some real-world numbers reported by the Chia community.

Think I’ll go buy some Samsung SSDs.

1 Like

You can NEVER go wrong with Samsung. Never. Never. Never.

(when it comes to storage!)

The fact that Intel/Samsung aren’t selling 12,000 TBW drives may be just because there hasn’t been a market for drives that live that long yet, not because they can’t make them.

Assuming you’re protected against data loss from disk failure and replacement doesn’t cause too much work or downtime, buying a disk that lasts 12 years is a worse deal than buying a disk that lasts 6 years but costs half the price, then buying a new disk 6 years later that’s cheaper, bigger and/or faster.

That’s not a guaranteed assumption - in the enterprise world, you’re not gonna allow for data loss from disk failure, but replacement costs do add up - but on such a long timescale it’s pretty likely. If you’re buying high-end hardware, you’ll keep upgrading it too.
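That trade-off in toy numbers (all prices and the future-price discount are made-up placeholders, and replacement labor/downtime isn’t modeled):

```python
# Toy total-cost comparison over a 12-year horizon: one long-lived
# drive vs. a cheaper 6-year drive replaced once, with the replacement
# assumed to be 30% cheaper (and likely bigger/faster) by year 6.
def two_drive_cost(price_now: float, future_discount: float) -> float:
    """Cost of a 6-year drive now plus its replacement at year 6."""
    return price_now + price_now * (1 - future_discount)

long_lived_drive = 400.0           # hypothetical 12-year, 12,000 TBW drive
print(two_drive_cost(200.0, 0.3))  # 340.0 - cheaper, plus a mid-life upgrade
```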

I used to think that, until I got burnt by a Samsung SSD 840/850 firmware async TRIM corruption bug on a database server.

Luckily ZFS was able to keep up with fixing the corruption, so no downtime and no data lost.

The drives are still blacklisted in the Linux kernel for async TRIM:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/ata/libata-core.c?h=v5.12#n3949

SSD firmware is so complex that I’m surprised this doesn’t happen more often.

My lesson learned: never trust any storage; use a filesystem that can guarantee end-to-end integrity (and tell you when something is wrong!).

2 Likes

Yeah, early SSD drives (circa 2011) were notoriously unreliable. You can read my old blog entry about it:

Thing is, times change, and those old blocklists may not be useful 10 years in the future…

1 Like