Scalable DIY SATA JBOD - Will store over 250 Disks!

Hey all, here’s an update to my post back in May about my DIY farming rig. Here’s an updated pic!

It has grown from 22 drives back then to over 60 drives today with about 20 more to install and space for over 250! I’ve created custom 3d printed rails and sliders for the hard drives so they are easy to slide in and out. Currently I’ve got 60 drives loaded in the bottom 3 shelves. Here’s a closeup so you can see the drives have 2 rows per shelf (front and back):

I’ve also upgraded to Wal-Mart’s finest 5-shelf wire rack, although I had to buy 2 of them to get the number of shelves I needed.

The entire bottom shelf is filled with 16 140mm case fans pushing cool air from the ground up into the drives. I’m still working on wiring them all up, but for now I have a box fan (not pictured) blowing air into them from the side to keep them cool.

All of the drives are powered from the 1200 watt server power supply you see on the shelf next to the monitor. I’m pretty sure that 60 drives is the limit, as I’ve watched its main voltage output drop from 12.2v to 12v. I’ve got 2 more of these on the way and I’ll probably just power 50 drives on each to be safe.

The big metal box you see behind the monitor is a Dell PowerEdge R520 with dual 8-core Xeon chips and 32gb RAM. I got 2 of these for a great deal at a public auction and they are perfect for this. It has 4 x16 PCI-e slots. I currently have two 6-port SATA cards plugged in. Each of these ports is connected to a 1-to-5 port multiplier that hangs behind the disks on each shelf, for a total of 60 SATA drives. I’ve got 2 more PCI-e slots, so I can get 60 more drives in this server. But I also just received a new 12-port card and expect to test it soon. That would get me to a theoretical limit of 240 drives controlled by a single server!
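
Just to make the math easy to follow, here’s a quick back-of-the-envelope sketch in Python (nothing measured, just the counts described above):

```python
# Rough drive-count math for this setup (numbers from the description above).
PCIE_SLOTS = 4        # x16 PCI-e slots in the R520
PORTS_PER_CARD = 6    # the 6-port SATA add-in cards used today
MULTIPLIER = 5        # 1-to-5 SATA port multiplier hanging off each port

current = 2 * PORTS_PER_CARD * MULTIPLIER              # 2 cards installed
max_6_port = PCIE_SLOTS * PORTS_PER_CARD * MULTIPLIER  # all 4 slots filled
max_12_port = PCIE_SLOTS * 12 * MULTIPLIER             # if the 12-port card pans out

print(current, max_6_port, max_12_port)  # 60, 120, 240
```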

I have a few more mounts I need to design and 3d print, but once I’m finished with that I plan on making the 3d printed files available for sale as well as the full design docs and parts lists as a digital download. Would you be interested in such a product and if so, how much would you expect to pay for it? Any other feedback, ideas, suggestions, criticisms, safety warnings?

20 Likes

I’d be interested in design files, because this looks nice and scalable, more so than my approach with disks bolted to 19" rack shelves… looking to see if I can get similar wire shelf units in the UK now.

However, the files would have to be cheap enough that I didn’t think it was better to just reverse engineer what you’ve done from those pics, plus the dimensions of whatever unit I can find might not be the same, etc.

1 Like

Yep agreed! The main value would be in the 3d printed files, and the secondary value would be a nice concise list of exactly which splitters/extenders/power supplies/12v to 5v converters you need along with pics and wiring instructions. It’s taken me a few months to get to this point and I’ve learned a LOT along the way! :slight_smile:

2 Likes

This AmazonBasics unit looks similar (although the “wires” are oriented differently):

A great looking rig! Congratulations.

I agree: I’d be interested to see and maybe purchase the 3D files. The vibration dampening looks interesting. I’m not sure how much I would pay for the 3D files… USD$10? $20? I don’t know how much these things go for.

Thank you for sharing!

1 Like

The splitters/extenders/power supplies/12v to 5v converters - it would be great to know how he did all that.

Cool setup. However, IMO the biggest issue is managing power cables / power supplies. I flat out don’t trust molex connectors.

2 Likes

Why not? Molex connectors have big pins, they are well spaced, the cables they are on are typically thick, they were designed in a time when peripheral power requirements were higher.

SATA connectors are much more worrying IMO - small pins, close together, much easier to pull them out partially.

I’d trust a molex splitter over a SATA splitter any day.

I do 2 different things for my power connections FYI. First up, I have multiple NZXT C-series modular power supplies. Some of them are dedicated to disks and the others are in plotters, but since I don’t need any of the molex connectors in the plotters, I have all of the molex cables on the disk PSUs (4 cables per PSU, 3 molex connectors on each cable). I then split each of those exactly once with a 3-way splitter, so I get 36 molex power connections for each PSU. These then connect to cables like this:

However, some of my disks are on scavenged Supermicro backplanes (since they take molex power input), but when I connect to those I don’t have a splitter in the path, since that kind of backplane is already basically a molex to SATA splitter.

If you really don’t like molex, that same principle would work with modular PSU SATA cables: just buy multiple modular PSUs of the same model or in the same range, and you can put all of the SATA power cables onto the same PSU. I could get 16x SATA connectors on one of these C-series PSUs without any splitters.
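
For reference, the connector math there works out like this (just a sketch of the counts mentioned above; the SATA cable layout is an assumption about these PSUs):

```python
# Molex connector count per disk PSU (counts from the description above).
CABLES_PER_PSU = 4          # modular molex cables on one NZXT C-series PSU
CONNECTORS_PER_CABLE = 3    # molex connectors on each cable
SPLITTER_WAYS = 3           # each connector split exactly once with a 3-way splitter

molex_per_psu = CABLES_PER_PSU * CONNECTORS_PER_CABLE * SPLITTER_WAYS
print(molex_per_psu)  # 36

# The no-splitter SATA alternative - assuming 4 SATA cables with 4 connectors each:
print(4 * 4)  # 16
```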

I’m wondering if one day you’ll hear an almighty crash, and it would be the wire shelves buckling under the weight of all those hard drives.

Sometimes I forget just how heavy a computer gets when it’s full of HDDs. Not sure if I would trust wire shelving for this. Just thinking about it makes my anxiety go up lol.

UPDATE: Nevermind, just saw some specs and they can support a surprising amount of weight.

My first thought was weight as well. I didn’t read all the comments to see if that was brought up, but I would focus on vibration and grounding. The drives themselves will be grounded through the power cables, but hundreds of drives vibrating in insulated carriages could turn that rack into a serious static electricity generator. I’d ground the rack and do anything I could to dampen the drives from each other’s vibrations. Maybe cheap weather stripping? Seems like a lot of unused fans on the bottom shelf - I don’t know if cabling is limiting using the whole shelf or not. Nice number of drives; I’m sure their spin-up power draw is significant if they all spin up at the same time.

1 Like

I may have found a Dell backplane that just takes 12v power and converts it to 5v on board. Still need to make up an adapter to test it, but it could theoretically be powered by a server PSU with a GPU mining breakout board.

Why do I see no cooling? LOL.

If I’m not mistaken you have experience building GPU mining rigs :smiley:

Thank you! Good eye on the vibration dampening - I spent quite a bit of time in CAD trying to figure out plastic “springs.” Not only for vibration dampening, but because the wire shelves have about 2-5mm deflection between the edges and the middle. They aren’t perfectly flat - the spacing between shelves varies by a tiny amount, so without the springs, the middle drives would be loose and the drives near the edge would be tight. With the springs, all the drives have a nice tension. I have no way to test if they actually provide enough vibration dampening, but my unscientific “touch test” feels pretty good. If I just hold my finger against the wire shelf, I can feel the hum of the hard drives and I can just barely feel the clicking of the drives. Each wire shelf “feels” different - in other words if I touch two shelves at once with both hands, I feel different tiny vibrations. I take this to mean that the shelves are doing a good job of at least isolating vibration from each other. But again, I have no way of testing this to know for sure. I figured some dampening was better than none! :slight_smile:

At a high level: only 12 volts comes out of the server power supplies - they don’t have 5v rails. Your SATA power connector needs both 12v and 5v. So 12v comes out of the PSU and goes into the SATA connector AND into a 12v-to-5v converter. The output of the 5v converter also goes into the end of the SATA power cable. I use the long “strip style” SATA power connector for this that has 4 connectors. This strip goes into another strip of 4 connectors for a total of 8 connectors. Then I have Y splitters on a couple of those connectors for a total of 10 end-points. Each 12v line from the power supply eventually powers 10 drives. I currently have 6 12v lines coming out of the power supply, but like I said I will probably drop that to 5 to be safe. This keeps me under the max amperage ratings for the wires and the 12v-to-5v converters.
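
Here’s a rough per-line load sketch in Python - the ~10w average per drive matches my measurements, but the spin-up current is just a typical assumption, so check your drive’s datasheet:

```python
# Rough 12v load per power line - a sketch, not measured on this exact rig.
DRIVES_PER_LINE = 10
AVG_WATTS_PER_DRIVE = 10    # ~10w average per drive (matches my measurements)
SPINUP_AMPS_12V = 2.0       # assumed typical 3.5" spin-up surge - check your datasheet

steady_amps = DRIVES_PER_LINE * AVG_WATTS_PER_DRIVE / 12
spinup_amps = DRIVES_PER_LINE * SPINUP_AMPS_12V

print(f"steady state: ~{steady_amps:.1f} A per 12v line")    # ~8.3 A
print(f"worst-case spin-up: ~{spinup_amps:.0f} A per line")  # ~20 A
```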

I know there is a lot of debate over this. I just jumped in and did my own testing lol. All the connectors seem secure, none are even warm to the touch, the wires are all cool, and my math says I’m under the limits. I’ve been running all these splitters for a few months now so I’m pretty comfortable with them.

Yes! Surprised me too. It’s amazing what a small amount of steel can hold! These things are rock-solid though. Hopefully this is obvious, but you’d definitely want to build from the bottom up. I feel like building from the top-down would definitely result in a mighty crash lol.

See my other comments about vibration dampening, but the static electricity is a great point. I haven’t been shocked yet but grounding the whole rack can’t hurt! Great idea, thanks!

The bottom shelf holds sixteen 140mm case fans (visible if you zoom in) but they aren’t wired up yet. I’m working on getting 12v down there. For now there is a box fan that isn’t in the picture blowing air into the side of the drives. They all stay under 40°C, no problems there!

A bit, yes! In my original linked post from back in May, you’ll see 8 GPUs at the top of the wire rack. Those have since been moved to their own rack - I don’t have enough power on my circuit to have them both in the same room! :rofl:

2 Likes

Here’s some more info on the server setup:

  • Dual 8-core E5-2470 Xeon, 32gb RAM
  • 240gb SSD for OS, three 480gb SSDs in RAID array for virtual machines
  • Two 6-port SATA PCI-e add-in cards
  • Windows 2019 Datacenter running Hyper-V
  • Seven virtual machines, all running Win 2019 Datacenter
  • Chia plus 5 forks (Chaingreen, Spare, Flax, Seno, Goji) as well as Storj
  • All virtual machines are farming the drives simultaneously via a network share on the host - keep in mind that it is all through Hyper-V so it isn’t really a “network” - access is very fast!

3 Likes

Totally missed that. Though I fear the fresh air will not reach the second or third shelf. I would install fans in the front or back… there are 200mm fan options on the market.

However, those are of course intended for silent systems. I use the 200mm in my external radiator. Overall fan costs could still be cheaper if you go for the 200mm instead of the 140mm. But yeah, it also depends on the dimensions of the area where you want to place them - no use if you can only cover x% of it with 200mm fans.

Which HDD models do you use? They look like salvaged OEM drives. Wouldn’t it be cheaper, at least in the mid term, to replace them all with 18 TB drives to save space and the cost of cables etc.? Also, the electricity bill is almost the same for 8 TB and 18 TB drives. Where are you located and what are the electricity costs per kWh there?

2 Likes

You can probably find a grounding wire with a 1 megaohm resistor built in so that it’s dissipative rather than allowing a fast discharge to ground. Gamers Nexus just did a great video on ESD. I used to work in electronics manufacturing and anything that moved had some form of grounding.
ESD VIDEO

1 Like

I’ve heard this argument a lot, but when I actually do the math, it doesn’t hold water (unless I’m doing the math wrong).

These are mostly 4tb enterprise drives that I got for $10/tb. I buy them 20 at a time for $800 USD - and no, I won’t give away that source lol. The 18tb drives are still going for about $450, or about $25/tb - over ten times the per-drive cost for a little over four times the space.

My electricity costs are around 7 cents per kwh. From my measurements, a drive takes about 10w on average during use. That means a single drive costs me about 7 cents every four days, or a little over $6 per year per drive. Even if I used one 18tb drive to replace five 4tb drives, I’d have to run that 18tb drive for years to see a return over the 4tb drives because of the much higher cost per terabyte. Make sense? Check my math - I might be wrong…
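
If you want to check it yourself, here’s the same math as a little Python sketch (figures from above, rounded; the “one 18tb replaces five 4tb” scenario is just an illustration):

```python
# Checking the $/tb and electricity math above (rounded figures from the post).
KWH_PRICE = 0.07        # $/kWh
DRIVE_WATTS = 10        # average draw per drive
HOURS_PER_YEAR = 24 * 365

per_drive_year = DRIVE_WATTS / 1000 * HOURS_PER_YEAR * KWH_PRICE
print(f"electricity: ${per_drive_year:.2f} per drive per year")   # ~$6.13

# Replace five 4tb drives ($40 each at $10/tb) with one 18tb drive (~$450):
extra_upfront = 450 - 5 * 40          # $250 more spent up front
power_saved = 4 * per_drive_year      # four fewer drives spinning
print(f"payback: ~{extra_upfront / power_saved:.0f} years")       # ~10 years
```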

This math is exactly why I decided to scale up the number of smaller drives instead of using a smaller number of larger drives. I also have the really nice advantage of slowly upgrading the smaller drives with larger ones as the cost per terabyte falls. The hard part is scaling up a system to run so many drives - but upgrading drives to bigger sizes will be quite easy!

I’ve got the “CPD” (“cost per drive” to connect a drive to the JBOD) down to about $10, which includes the connectors plus a share of the add-in cards, 1-to-5 port multipliers, and dedicated power supply. Compare this to the SAS “cost per drive” to connect and you’ll see this is far cheaper as well! Used SAS cables alone can be $20+, but you could get 30 brand new SATA cables for that price! :slight_smile:

4 Likes

Hi, sorry if it is a silly newbie question, but what is that about farming the same plots from 7 VMs simultaneously? Can you explain a little bit or point me in the right direction to research? Thanks.

Sure! The base server runs Windows Server 2019 Datacenter edition, which is unique in that it can host an unlimited number of guest VMs running Windows Server 2019 Datacenter, all under the same license key. You can get an official license key on eBay from authorized sellers for relatively cheap - I think I got mine for $50 USD or less.

Then you just start up Hyper-V and start installing the VMs! In my case, I have 16 cores/32 threads available, so I give each VM four virtual cores. I set the memory to “Dynamic” so the host can allocate RAM to the VMs as they need it. This is how I’m able to run all those VMs in 32gb of RAM (although I have more on the way).

Obviously you want to host your VHDX (virtual hard disk) files on your fastest drives - NVME/SSD preferred.

Then, on the host, I share the Chia plot drives as a read-only network share. All the VMs are on the same network so they see the share, but when they communicate with it, they are just talking to the host OS, so it isn’t really going over the network and access is very fast.

This keeps all the forks completely separate in their own containers, and they can’t hurt the plots because the share is read-only.
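
As a concrete (made-up) example of what a VM sees - the share names below are hypothetical, but this is the idea:

```python
# Sketch: listing plots over the host's read-only share from inside a guest VM.
# The UNC share names below are hypothetical - substitute your own.
from pathlib import Path

PLOT_SHARES = [r"\\HOST\plots-shelf1", r"\\HOST\plots-shelf2"]

for share in PLOT_SHARES:
    root = Path(share)
    count = len(list(root.glob("*.plot"))) if root.exists() else 0
    print(f"{share}: {count} plots visible")
```

Each fork’s VM just adds those share paths to its plot directories; since the share is read-only, nothing running in a VM can touch the plots themselves.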

2 Likes