Ha! I have lots to say on this, but I will refrain. Let the experts speak; they will guide you towards convenience over expense. Take this advice or leave it. Until proven otherwise regarding power differences and the deal breaker that “might” be? I don’t pay for things that hold the things that make the money. I pay the lowest possible price per TB I can find, taking into account the cost of any cables and HBAs. Keep one thing in mind, though: it’s undetermined how long those 14 TB+ HDDs will last. I can tell you that the SAS drives won’t die. If you care about looks, design, convenience, then I’m not your guy. The storage makes the money. Nothing else does. Who can afford to waste money with these kinds of ROI? Only people who are already dead and buried in their hardware purchases.
30 disks isn’t much more than the 24 in a DS4246, which I would recommend.
Add a power-efficient PC or server, and maybe a USB enclosure for the remaining 6 drives if really needed.
An RPi cluster would need 2-4 harvesters plus a full node, plus all the power supplies for the drives.
Just my two cents,
you know my lil pi.
PS: I don’t think there’s any kind of drive that’s generally much more reliable than others if you’re buying the new stuff. Only older models that have proven to be reliable over time. This goes for everything from external WD USB drives up to enterprise SAS.
Again, just my two cents
There is no perfect farm, only a perfect farm for you.
Also it depends a lot on if you have plans to grow the farm or not.
There’s quite a bit to take into consideration and you can’t have it all:
Convenience (of maintenance)
Personally I’m in the camp of @MisterSavage and I don’t care to spend much money on stuff just to hold my disks. I use some aluminum U-rail to hold the disks; it costs about $5 from the hardware store and holds 10 disks in the width of a server rack.
It’s open air with a lot of space between the disks, so even a little lazy airflow will keep them under 40°C when the room is at 25°C.
For connecting, you could use 2 of these, or go for an HBA + expander:
Or go for the cheaper SATA option (I like cheap), but those are not always as reliable as their SAS counterparts.
A single ATX PSU with 25A on the 5V rail should be enough to power 30 drives (if you use HBAs, you can probably set them to staggered spin-up as well).
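A quick budget check of that rail claim. This is a minimal sketch: the ~0.7 A per-drive figure on the 5 V rail is an assumed typical active draw for a 3.5" HDD, not a measured value, so check your drives' datasheets.

```python
# Rough 5 V rail budget for a 30-drive build.
# AMPS_5V_PER_DRIVE is an assumed typical figure -- check your datasheet.
DRIVES = 30
AMPS_5V_PER_DRIVE = 0.7     # assumed active draw per 3.5" drive on 5 V
PSU_5V_RAIL_AMPS = 25.0     # rail rating from the PSU label

total = DRIVES * AMPS_5V_PER_DRIVE
headroom = PSU_5V_RAIL_AMPS - total
print(f"5V load: {total:.1f} A of {PSU_5V_RAIL_AMPS:.0f} A "
      f"({headroom:.1f} A headroom)")
```

With those assumptions the load lands around 21 A, leaving a few amps of headroom on a 25 A rail; staggered spin-up matters mostly for the 12 V rail, which peaks during spin-up.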
There is also this, but my worry is the noise level of that 40mm fan:
Or have a look here for options on making your own power cables with added 5V power:
I just built myself a new farmer with an i3-6100 that, according to hardwaremonitor, draws about 3-5W when farming. The whole system uses 160-165W at the wall with 22 disks now.
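A back-of-envelope check of those wall numbers. The ~20 W base-system figure (board, RAM, PSU losses) is an assumption I'm adding for illustration, not something measured in the post:

```python
# Watts per spinning disk, from the wall reading quoted above.
# BASE_SYSTEM_WATTS is an assumed overhead figure, not a measurement.
WALL_WATTS = 165
BASE_SYSTEM_WATTS = 20   # assumed: board + RAM + PSU conversion losses
DISKS = 22

per_disk = (WALL_WATTS - BASE_SYSTEM_WATTS) / DISKS
print(f"~{per_disk:.1f} W per spinning disk")
```

That works out to roughly 6-7 W per idle spinning 3.5" drive, which is in the ballpark of typical datasheet idle figures.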
I think I originally had a point to make but have long since forgotten it.
ROI vs. tidy and looking good. I’m of the belief that you crawl before you walk, and walk before you run. If you have money to throw away and ROI is secondary to looking good, then I’m not your guy. With the current ROI I’d say Chia is in the crawl phase. That is, if you care about your capital investment and ROI, of course.
Thanks for the link to IMG_2415. The ETH miners were the true pioneers, and that is a clean, fantastic design. I’m going slightly different, but a similar concept. I’m not sure how heavy duty that person’s rack is. Typical industrial-made units are heavy duty, and that one doesn’t appear to be at that level. Having confidence in the rack is paramount.
You need to be a bit more nuanced in your criticism. Everyone has their own reasons for how they construct their farm. You seem to be focused on ROI, i.e., being as ‘cheap’ as possible to further your definition of ROI, regardless of anything else.
That goal is certainly fine within bounds; nothing against saving $$. But as to ‘looking good,’ as you term it, I would guess that for most it is totally NOT about looks. Rather, it is about ease of troubleshooting, serviceability, and, perhaps for some, their professional training.
After all, many of us come from IT backgrounds, where we likely learned the hard way that order, cleanliness, and serviceability go hand in hand. And yes, even profitability, because the world’s businesses value, and demand, that some things, particularly in IT, be done in a professional way, even at some extra cost.
I appreciate the expertise around these parts. But I stand by my point that most people here, like everyone, made business decisions based on the $100 value of XCH at the time. Of course, the day I can move into a server storage solution will be a great day. That will come when I get more returns. I’m approaching this from a $30-$40 Chia. If there are IT professionals here, then they would know why SAS makes sense on multiple levels. But I don’t hear much about that.
If people take XCH value today and plan accordingly, like from scratch, I’m not sure how server equipment is justifiable. The expense, that is. I just don’t think anyone is speaking from the perspective of planning purchases based on where XCH is right now. There are very few entrants. That should speak volumes. It’s because people look at their ROI and take a hard pass. But I would say that’s because most everyone is offering advice based on what they build/bought at a time when XCH was essentially “to the moon” in comparison to July 2022. That’s my main point. A rack mount would take how long to pay off? Sure, XCH might be headed up soon. Who knows.
I trust the data that show decreasing netspace. That, and the lack of newbies showing up on the only Chia forum looking for building advice. Every indicator shows no new farmers joining Chia. And why is that? It’s because they are likely coming from GPU mining and know all about crunching the numbers and looking at ROI. I’m not here to encourage new entrants that will up the netspace. But I would say if you’re here and curious, then maybe going against conventional wisdom actually can make the numbers work.
Well, it’s sitting right next to me, so it’s not very remote.
But yes, I suppose I could figure out how to set that up. I’m not very familiar with it, though, and I don’t want to open up my farmer to the world by mistake.
Still, on your primary box you could use Virtual Desktops (if you are not using them already) and have the other box always open in such a desktop. Switching is easy (Ctrl+Win+Left/Right arrow, and F2 to jump over a few desktops).
Yes, that is one more open port on a given box. Although, when you enable RDP, you can specify whether it is only available locally or to the whole world. Also, if you don’t have UPnP enabled on your router, even if you open the port completely, the router will not let it be reachable from outside (I think). And if you put a user/password on that box, you have another layer of protection.
Just try it on some other box to get your feet wet. It is one of the best features in Windows (IMO). Also, RDP uses a virtual video card (I use video cards just to install such boxes, then remove the card).
Congratulations, that looks impressive! I just set up my farm (the part I want to keep running at current electricity prices ;-) in a small 19-inch rack as well.
Have to look into dust filters too, I guess, but I also like the blinking LEDs.
I’m thinking about setting it up in a corner of the living room in fall and winter, maybe saving a little gas for heating. I don’t know if it will amount to much, but why not if it helps even a little (and I like the blinking LEDs…).
The farm is running on a NUC (on top) with Ubuntu Linux; I manage it via RDP. The firewall has the Chia ports and 3389 (RDP) open, with the RDP port only reachable from the local network.
Nice. It will certainly save on heating. Mine is in a small room off the hall, and that kept the house much warmer. I also had a 3080 running last winter in an upstairs room, which kept upstairs warm. Our gas bill was a lot lower.