Purchasing a JBOD

I’m looking at purchasing a JBOD, maybe something like this HP StorageWorks D2700 SFF Disk Enclosure

Seems pretty reasonable.

What else would I need to buy to get this working?

Is it just a case of loading it with drives, plugging it in, and registering it on the network?

Not sure if it’s the same where you are, but where I am SFF disks (2.5" rather than 3.5") are a lot more expensive / TB.

With any disk enclosure (in the server industry they are called “disk shelves” or “enclosures”), you are likely to need a server or PC plus an external SAS interface card. You will need at least:

HP 633539-001: the interface card to connect the storage enclosure to the PC or server
HP 408765-001: a Mini SAS cable to connect the PC/server controller card to the enclosure

You are aware that this storage enclosure only accepts SAS drives?

Goes to show how little I know about this sort of thing!

No offense, but do your research and then get a standard server instead like a Dell R720. They are cheap and all over the place at the moment. That will do what you want.

Total n00b here. Can you get enclosures of this grade that will accept SATA drives?

No. You are better off spending the same amount of cash and just getting a server. Like I mentioned, something like a Dell R720 will take SATA drives.

SATA disks do work on a SAS backplane, but you’d still need a controller card (or a server motherboard with onboard SAS) - that is not a NAS. You’d also need to buy small-form-factor disks for those bays; if it weren’t small form factor it would be a good deal, assuming it comes with drive caddies, etc. (and assuming you have controllers for the host computer).

That is a storage enclosure, not a NAS. There is no way to just link it to your network; the port you see on the back that looks like an Ethernet port is a diagnostic port. To run an enclosure like that you need a board (controller card) in your system to link to it, and then the computer manages the enclosure.

What you are looking for is a NAS.

Thanks, you’ve answered my question. I’ll look at getting something like that. I’m planning to sell my plotting hardware to recover some of my costs but want to keep farming. Looks like this will be the best way to go.

I’m planning a large set of 2.5" HDDs attached to USB-powered hubs. I’ve got 72TB of SAS drives to fill first, then on to the USB drives.

Few brain-dump ideas for you:

  1. A JBOD (“just a bunch of disks”) is a box that effectively holds (and powers) hard drives en masse. No RAID, nothing fancy; it just lets all the drives hook into power and a SAS/SATA interface, and passes that through to the outbound ports on the back.
  2. There are 2 kinds of mass-storage spindle drive interfaces: SATA and SAS. SATA is what you have in all your desktops and is the most common; SAS is used more on the enterprise side. There are cables/adapters to adapt one to the other pretty seamlessly.
  3. The connectivity coming out of the JBOD and into the host PC tends to use a special connector/cable combo that I had not encountered before and I believe is common in the data center space: SFF-8088. They don’t run much longer than 3’ and are really dense/stiff cables with some heft to them :slight_smile:
  4. To connect those JBODs to your host PC, you invariably have to stick a special PCIe card in the host PC that accepts those connections… LSI is one of the most popular brands - you can find these cards all over eBay. I did a writeup of how to get one working on Ubuntu here.
  5. JBODs don’t provide any RAID capability - they just pass through the disks straight to the host system AS IF they were all connected to the same motherboard.
  6. You can think of a true JBOD like an external hard drive you buy at Best Buy… it’s as useless as that unless you plug it into a host machine that can access it and do something useful with it. In the external hard drive case, it’s always USB… in the JBOD case it’s via those SFF 8088 connections…
  7. Those SFF-8088 connections can host up to N drives per connection (I think it’s like 4 or something) - so in enclosures that hold 12, 24, etc. drives you may see 2, 3, 4 or more SFF-8088 connections on the back of a JBOD… in those cases, IF your JBOD is full, you actually need to run cables from ALL those connections into a card or cards (plural) in the host machine so all the drives can be seen and accessed. It’s not just 1 connection.
  8. Only consider a JBOD if you are trying to get 8+ drives online… if it’s fewer than that, just buy a big PC case, stuff all the drives in it, and hook them up to your motherboard OR a PCIe SATA expansion card or something like that.

I think those high level things should help you out when shopping for one.
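To put rough numbers on point 7: a quick back-of-the-envelope helper, assuming the ~4-drives-per-connection figure above (enclosures with internal SAS expanders can put many more drives behind a single cable, so treat this as a worst-case cabling estimate; the function name is mine):

```python
import math

def cables_needed(drive_bays: int, drives_per_connection: int = 4) -> int:
    """Worst-case number of SFF-8088 cables to light up every bay,
    assuming only ~4 drives are reachable per connection. Shelves
    with SAS expanders can expose far more drives per cable."""
    return math.ceil(drive_bays / drives_per_connection)

print(cables_needed(12))  # 3 cables for a 12-bay shelf
print(cables_needed(24))  # 6 cables for a 24-bay shelf
```

This lines up with the 2-4 connectors you typically see on the back of 12- and 24-bay units.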

This is not true. You can have up to 128 devices connected over a single 4-lane MiniSAS cable.

4×6 Gbit/s SAS means 3 GByte/s shared across all the drives. You can hook up another MiniSAS cable to get more bandwidth or a backup connection.
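For what it’s worth, the arithmetic behind that figure, plus the 8b/10b encoding overhead that 6 Gbit/s SAS carries on the wire (so the usable payload rate is a bit lower than the raw 3 GByte/s):

```python
# Back-of-the-envelope check of the shared-bandwidth figure above.
lanes = 4                    # a MiniSAS (SFF-8088) cable carries 4 lanes
gbit_per_lane = 6            # SAS-2 line rate per lane
raw_gbit = lanes * gbit_per_lane        # 24 Gbit/s total
raw_gbyte = raw_gbit / 8                # 3.0 GByte/s, the figure quoted
# 6 Gbit/s SAS uses 8b/10b encoding, so payload is ~80% of the line rate:
usable_gbyte = raw_gbyte * 8 / 10       # ~2.4 GByte/s usable
print(raw_gbyte, usable_gbyte)  # 3.0 2.4
```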

Oh!

Could I have just made 1 connection from my 12-bay to the host machine?

I sort of figured since it had 3 connections I should hook them all up.

I am new to this all as well (obviously)

Properly set up, you can at least hook up 2 connections and get a “double-wide” 8-lane connection, which will be utilized. I’m not sure about 3. There’s also the case that enclosures with internal SAS expanders (as this one likely has, if there are 3 connections) sometimes have what are called subtractive ports, so you may have to pick the “correct” two (out of three) ports. I’m not sure if all 3 are active; I’ve never seen that recommended, and it may or may not actually utilize all 3. You’d probably have to use some utilities to see how many lanes are actually being utilized.
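On Linux, one way to see how many lanes actually trained is to read `/sys/class/sas_phy/*/negotiated_linkrate`. A small sketch that counts active lanes from those readings; the sample values below are hypothetical, just the kind of strings that attribute returns:

```python
from collections import Counter

def count_active_lanes(linkrates):
    """Given the contents of /sys/class/sas_phy/*/negotiated_linkrate
    (one string per phy), count how many lanes actually trained.
    Phys with no link typically report something other than a rate,
    e.g. 'unknown' or 'Phy disabled'."""
    active = [r for r in linkrates if "Gbit" in r]
    return len(active), Counter(active)

# Hypothetical readings from a host with one 4-lane cable attached:
sample = ["6.0 Gbit", "6.0 Gbit", "6.0 Gbit", "6.0 Gbit",
          "unknown", "unknown", "unknown", "unknown"]
print(count_active_lanes(sample))  # (4, Counter({'6.0 Gbit': 4}))
```

On a real box you’d feed it the output of `cat /sys/class/sas_phy/*/negotiated_linkrate`.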

I do see that you’ve decided to go in another direction, but just in case you circle back around to the D2700 or similar… At the price they have listed, and from what I’ve seen comparing prices on other refurbished units of the same brand/model, that chassis may not come with the drive trays (that’s normal), power supplies, fan modules, or I/O modules - basically just an empty shell. The page doesn’t say what’s included, other than that disks aren’t, and the photos look very stock. Best to confirm what’s really included.

The LFF units hold 12 ea. 3.5" drives, which are lower cost where I’m located, when they’re in stock. Also (and this one I’d have to check the docs on again) more LFF enclosures can be daisy-chained to each other than SFF enclosures. I think that’s mainly due to the number of disks, so it basically evens out space-wise, but cost-wise it may well be different.

As @matthewjbauer kindly provided, cables and Array Controller would also be needed to go into a host (server / workstation) that could access / share the storage. Oh, and those cable types do vary depending on the controller and chassis.

One last thought; with as much effort as it takes to plot, I’ve gone down the path of setting up a RAID array. The last thing I want to happen after I get all these completed is to lose a drive and have to start a large number of plots over again. Each to their own risk tolerance level of course. :slight_smile:
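If it helps weigh that trade-off, here is a quick sketch of usable capacity for a few common RAID levels (the function is mine, not from any particular tool, and it ignores filesystem overhead):

```python
def usable_tb(drives: int, size_tb: float, level: str) -> float:
    """Usable capacity for a few common RAID levels (mdadm-style).
    raid5 sacrifices one drive to parity, raid6 sacrifices two."""
    if level == "raid0":
        return drives * size_tb          # no redundancy at all
    if level == "raid5":
        return (drives - 1) * size_tb    # survives 1 drive failure
    if level == "raid6":
        return (drives - 2) * size_tb    # survives 2 drive failures
    raise ValueError(f"unsupported level: {level}")

# e.g. twelve 6 TB drives:
print(usable_tb(12, 6.0, "raid5"))  # 66.0
print(usable_tb(12, 6.0, "raid6"))  # 60.0
```

So for a 12-drive shelf, raid6 costs one extra drive of capacity over raid5 but keeps the plots safe through a second failure during a rebuild.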

Very good point, thanks, lol.
I was thinking way too complicated. Just found myself a nice Chieftec big tower with 10x 3.5" bays and 6x 5.25" bays. Price: “need to clear space, baby on the way”.
It also comes with a Corsair V450W PSU (6x SATA + 6x Molex), motherboard, CPU and RAM.

For anyone looking at really cheap JBODs/SAS drive expansion units, think:

  1. Do I want to deal with SAS?
  2. Do I want to deal with configuring HBAs (host bus adapters)?
  3. Do I want to deal with Mini SAS, and with configuring the adapter and connection between the interface card and the JBOD/SAS expansion unit?
  4. Will I fill up the drive bays (i.e., will I use all the drives I get fast enough)? Can I get the caddies for the drives I get?
  5. If I am contemplating resale, is there resale value in what I get?

I’m just a micro-farmer, so the second option is the best for me…
