Is this enough for supporting many disks?

I’m trying to build a server for minimal cost. Since performance isn’t the most important factor, how low can I go on component cost and still keep the RAID controllers working?

You can find some server chassis for fairly cheap that can hold between 24 and 60 HDDs. (I think the smaller ones are easier to work with and only cost a bit more.) You can save even more by 3D printing one. So storing them is taken care of. I might add Noctua fans since I have to live with the noise.

I plan on running them off of this: an ROG Crosshair VIII Hero with an Athlon 3000G. Could I get at least 40 disks with two of these?

I’m guessing ideally I’d want a board capable of running a couple of SAS HBAs, getting 32 SATA ports per card through expanders or breakout cables. That system would probably need ECC too, to keep it reasonable for rebuilding RAID arrays. I’d like to experiment with ZFS and see if it helps more plots survive until they can be transferred to SSDs in a few years.
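Just to sanity-check the port math on that idea (the card count and 32-ports-per-card figure are my assumptions, not a specific product):

```python
# Rough sketch: do two HBAs at 32 SATA ports each cover the target disk count?
# Both numbers are assumptions; real port counts depend on the HBA and
# expander/breakout setup.
hba_count = 2
sata_ports_per_hba = 32

total_ports = hba_count * sata_ports_per_hba
target_disks = 40

print(total_ports)                  # 64
print(total_ports >= target_disks)  # True, with 24 ports to spare
```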


That all sounds fine, but durability should not be a concern – you want to max your capacity, and anything that eats away at capacity (parity bits) works against that :wink:


The only reason I’m thinking of using ZFS is that, according to data storage folks, a lot of disks supposedly fail at about the same time. So I’m expecting to lose a few HDDs within a week of each other. Though if resilvering takes a week or two, I’m risking way more data than if I just re-made the lost plots, which I could do in less time. I just wish I knew which filesystem gives the best data integrity for this use case. Ideally, as much of the farm as possible would survive until large-capacity SSDs reach price parity with HDDs.
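For weighing redundancy against the capacity argument made above, here’s a quick sketch of how much raw capacity different raidz layouts give up to parity (the vdev widths are arbitrary examples of mine):

```python
# Usable fraction of raw capacity for a raidz vdev of a given width.
# parity=1 -> raidz1, parity=2 -> raidz2. Rough figure only; ignores
# ZFS metadata overhead and slop space.
def usable_fraction(width: int, parity: int) -> float:
    return (width - parity) / width

for width in (6, 10, 12):
    print(width, round(usable_fraction(width, 2), 3))
# 6  -> 0.667
# 10 -> 0.8
# 12 -> 0.833
```

Wider vdevs waste less space on parity but take longer to resilver, which is exactly the trade-off in question.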

My understanding is that single-disk ZFS might put data at greater risk than just using NTFS. With NTFS you can supposedly just delete the bad data and move on, while ZFS can detect corruption via checksums but can’t repair it without redundancy on another disk. But ZFS can at least make use of ECC RAM.

Man, plot storage is the easy part. I wouldn’t get too crazy on the details there. Personally, I’d just focus on storing your plots on filesystems that are easy to mount on any OS in case you need to move physical disks around. If you’re frequently losing new (or newish) HDDs that you’re only using for plot storage, there’s a different problem.


Filling 60 disks (of, say, 8 TB each?) is no mean feat. I would allocate a good chunk of your budget to NVMe drives for temp storage on your plotters.
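To put a number on “no mean feat”, here’s a back-of-the-envelope estimate (the ~101.4 GiB K32 plot size is the commonly cited figure; the plots-per-day rate is purely hypothetical):

```python
# Rough time to fill 60 x 8 TB drives with K32 plots.
PLOT_GIB = 101.4                         # approx. size of one K32 plot, in GiB
disks = 60
disk_tb = 8

gib_per_disk = disk_tb * 1e12 / 2**30    # 8 TB (decimal) -> GiB (binary)
plots_per_disk = int(gib_per_disk // PLOT_GIB)
total_plots = plots_per_disk * disks

plots_per_day = 20                       # hypothetical plotter throughput
days = total_plots / plots_per_day
print(plots_per_disk, total_plots, round(days))  # 73 4380 219
```

At 20 plots/day that’s the better part of a year, which is why temp-storage NVMe on the plotters matters more than the destination disks.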

I wouldn’t worry about disk failure too much, and I’d skip the RAID. The harvesters don’t read that much off the disks anyway, and the disks get written only once.

If one of the 60 disks does fail, meh… you have 2% fewer plots. You can make those again.

ECC seems like an unnecessary cost too?


So this doesn’t write back at all after accessing some data from the farm? At this point I’m thinking: just use a filesystem that can take advantage of ECC, if there’s any reason to worry about data being corrupted while in RAM on the harvester. Could I also mount the filesystem as read-only on the harvester for this?

I don’t think this takes much in terms of CPU. I just don’t want the CPU to become a bottleneck keeping everything up. How strong a CPU should I go with to support, say, 60 to 100 16 TB drives?
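For a sense of scale, this sketch counts how many K32 plots a harvester would be tracking at that drive count (plot size is the same ~101.4 GiB assumption as above; harvesting is mostly light random reads, so the plot count, not CPU-heavy work, is the relevant number):

```python
# Approximate K32 plot counts across 60-100 x 16 TB drives.
PLOT_GIB = 101.4                          # approx. size of one K32 plot, in GiB
gib_per_disk = 16 * 1e12 / 2**30          # 16 TB (decimal) -> GiB (binary)
plots_per_disk = int(gib_per_disk // PLOT_GIB)

for disks in (60, 100):
    print(disks, disks * plots_per_disk)  # 60 -> 8760, 100 -> 14600
```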