Farmer Hardware Scaling Advice?

Beginner farmer here. I am at the point where I’m running out of physical space to maintain the current setup.

Current system: a former main PC/gaming rig.

  • Core i7-8700 on Z390, 32GB RAM, M.2 boot drive, 6x SATA ports
  • 6 PCIe 3.0 expansion slots (one x16, one x8, four x1)
  • 2 PCIe 2.0 x1 expansion cards with 4 SATA ports each
  • Full ATX chassis. Standard HDD caddies hold 4x WD Enterprise 10TB, plus one WD Purple 18TB harvesting, zip-tied in place…
  • Extra fans in push-pull blowing air over the HDDs, keeping temperatures at 35-41°C
  • 3 empty 5.25" bays (4 with some pliers/filing work)
  • Corsair 850W PSU
  • Wi-Fi AC connection

UPS: 900W rated, works standalone or rack-mounted (2U). The farmer auto-boots and runs the farm startup script automatically on power loss/restore.
Current rig with 6 HDDs consumes ~70-80W.

Plotter: a high-end PC that makes 1 plot every 27 min. It can hold up to 2 HDDs via SATA. It runs a harvester and connects to the farmer as a peer.
Extra equipment: a USB 3.1 Gen 2 HDD enclosure that holds 2x 3.5" HDDs (currently unused)

Right now I have 4 extra empty WD Gold 10TB drives being filled with plots. I will keep 2 of them on the plotter; the other 2 will eventually be connected to the farmer via USB for harvesting.
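For context on how fast those drives fill, a quick back-of-envelope sketch (assuming standard k32 plots of ~101.4 GiB ≈ 108.9 GB each; the 27 min/plot figure is from above):

```python
# Back-of-envelope fill rate at the plotter's pace. Assumes standard k32
# plots (~101.4 GiB, about 108.9 GB each); 27 min/plot is the figure above.
MIN_PER_PLOT = 27
PLOT_GB = 108.9

plots_per_day = 24 * 60 / MIN_PER_PLOT          # ~53.3 plots/day
tb_per_day = plots_per_day * PLOT_GB / 1000     # ~5.8 TB/day

print(round(plots_per_day, 1), "plots/day")
print(round(tb_per_day, 1), "TB/day")
```

At that rate a 10TB drive fills in under two days, which is why the drive pipeline matters more than the chassis.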

I ordered an HDD cage on Amazon that fits 5 HDDs into the 5.25" bays and fixes the zip-tie situation… This will enclose all of the existing HDDs in the rig chassis. As a stretch, I can still add 2 more through the USB enclosure, though I'm not sure about its stability and cooling capability.

The rig is located in a residential space, on a standard 15A wall outlet, in a dedicated room where nobody lives (though if someone ever needs to sleep there, the rig is too noisy and will have to be moved).

So what next? The motherboard and processor can certainly drive more expansion cards – HBA cards, backplanes; it could potentially be rebuilt into a 4U chassis… Or a test bench on Costco/Walmart shelving?

What are going to be new physical constraints beyond that? 15A outlets? Noise? Vibration? Dust? Internet upload speed?

Also watch the power rating on the PSU: HDDs need a bit of 5V (0.5-1.5A each), and that rail will run out before the 12V rail does.
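To put rough numbers on that: a sketch of the 5V budget. The 25A combined 5V rail rating is an assumed figure for a typical 850W unit, and the per-drive draw is the worst case of the range above; check your PSU label and drive datasheets.

```python
# Rough 5V rail budget: how many HDDs fit before the 5V rail, not the
# 12V rail, becomes the limit. 25A combined 5V is an assumed rating for
# a typical 850W PSU; 1.5A is the worst-case per-drive draw from above.
RAIL_5V_AMPS = 25.0
PER_DRIVE_5V_AMPS = 1.5
HEADROOM = 0.8          # keep a 20% safety margin

max_drives = int(RAIL_5V_AMPS * HEADROOM / PER_DRIVE_5V_AMPS)
print(max_drives, "drives before the 5V rail is the cap")  # 13 with these numbers
```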

Well, I saw a guy on YT do something that made financial sense to me… He had all his external storage in Western Digital USB drives, plugged into a multi-port USB hub that was connected to a micro PC.

I’m using a Raspberry Pi, and it has two USB 3.0 ports (the blue ones), so it could handle up to two of these USB hubs…

I think the USB hub he was using had 18 powered ports, with the hub obviously having its own power supply.

I’m going to do exactly that, as I had looked at buying old storage servers, NASes, or a secondary full tower case with space for 18-20 drives internally (like LinusTechTips). All of those cost much more than the hub solution.

I’m currently at the stage where I have 5x 14TB external USB drives plugged into a hub and the hub plugged into the Raspberry Pi. If I ever max out two hubs’ worth of HDD space, I can always just get another Raspberry Pi and repeat.

The only downside to this is the spaghetti nest of wires and power bars. The wife acceptance factor is lower with this solution… but it is cheaper…


yeah you need a bit of space for this, good cable management skills, and most importantly good sweet talking skills :rofl:

One relatively simple solution, at least for a while, is to get rid of your low-TB drives and replace them with 14/16/18TB drives. Same for USB drives. Higher-TB drives take approximately the same power/space, so I don’t see the point in getting anything less than 14TB.

Also, I find USB drives easier to manage, add to, and relocate when needed, and it leaves the PC chassis free for internal HDDs later if required. I have 10 USB drives at the moment with space for additional ones, no hubs required.

You will run into a bottleneck with this solution eventually. I won’t repeat what has already been said, but read

Long story short, some people can get about 20 drives depending on the hub, and some can get closer to 30. If all drives are the same, reads and writes are divided (more or less) evenly among the active drives, so per-drive performance is slower.
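A rough sense of that sharing, assuming a single USB 3.0 (5 Gbit/s) hub uplink with ~400 MB/s usable after protocol overhead (assumed figure; harvester lookups are small random reads, so latency and contention usually bite before raw bandwidth does):

```python
# Sequential bandwidth per drive when N drives share one USB 3.0 hub uplink.
# ~400 MB/s usable throughput is an assumed figure after protocol overhead.
USABLE_MB_S = 400.0

for n_drives in (5, 10, 20, 30):
    per_drive = USABLE_MB_S / n_drives
    print(f"{n_drives} drives -> ~{per_drive:.0f} MB/s each")
```

For farming alone that split is usually tolerable; plotting or copying plots onto several hub drives at once is where it hurts.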

Is there an easy way to find out how long the proof lookups or filter checks take by looking at the log? I’m curious how my system is performing. Chiadog is reporting a lot of proofs passed, and wallet activity… so I’m guessing it’s okay so far…


This is what I always look at, the yellow highlight in the pic.
I am under the assumption you want that number as low as possible; under 0.5 seconds is generally considered good.
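One way to pull those numbers out of `debug.log` yourself. The sample line below mimics the harvester's "plots were eligible for farming … Time: X s" INFO lines as they appeared in the 1.x clients; verify the format against your own log before relying on the regex.

```python
import re

# Extract harvester lookup times from chia debug.log lines. The regex keys
# on the "Time: X s." fragment of the eligible-plots INFO lines (assumed
# 1.x-client format; check against your own log).
TIME_RE = re.compile(r"Time: ([\d.]+) s\.")

def lookup_times(lines):
    """Return the lookup time in seconds from each matching log line."""
    return [float(m.group(1)) for line in lines if (m := TIME_RE.search(line))]

sample = ("2021-06-01T10:00:00.123 harvester chia.harvester.harvester: INFO "
          "2 plots were eligible for farming abc123... Found 0 proofs. "
          "Time: 0.41235 s. Total 54 plots")

times = lookup_times([sample])
print(times)                          # [0.41235]
print(all(t < 0.5 for t in times))    # True -> under the ~0.5 s rule of thumb
```

Point it at the real file with `lookup_times(open("debug.log"))` and summarize with `max()` or a histogram to spot slow lookups.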

While being mindful of price per TB, so as not to be a willing participant in price gouging. There’s also the SANS Digital refurbished shop and the like (REFURBISHED).


If you do just the farmer mod to break out only those logs, you’ll have just the info you need to tell you how your times are.

Situation update.

I managed to grab some new hardware and rebuild the farmer:
MB and CPU stayed the same: Intel Z390 + i7-8700 (6c/12t + Intel graphics)
New chassis: Rosewill 4412U, a 4U rackmount case with 12 SATA bays
New memory: Team T-Force Vulcan Z DDR4-3200 CL16, 128GB (4x 32GB)
New SSDs: 2x Intel S4610 480GB in RAID0
New HBA card: SFF-8644, 2 ports, PCIe 3.0 x8
New 4U 24-bay JBOD (SFF-8644)
Got a used 42U rack, not using it yet, need to optimize space in the garage

The farmer chassis now holds 8x 10TB + 4x 18TB HDDs full of plots.
The JBOD holds some 18TB HDDs, some of them full, some still being plotted to.

Plotting speed: 59 min/plot with MadMax using a RAM disk and 5 threads of the farmer CPU, with the RAID0 S4610 pair as temp1.
I have a separate plotting machine doing 25-26 min/plot and harvesting in the meantime. I just stick full HDDs into the JBOD when done.

The farmer, while plotting on 5 threads, barely has any stales. In fact the best improvement in terms of stales was switching the internet connection from cable to fiber optic.

So what’s next? Just scale the JBODs through daisy-chaining and add disks? My current farmer uses almost all its PCIe lanes. The i7-8700 provides only 16 CPU lanes, and I’m using x8 for the HBA + x4 for the boot SSDs + x1 for each of the two SATA expanders = 14 lanes. Pretty much at capacity. (Though note the Z390 chipset provides its own lanes behind the DMI link, so the x1 slots and M.2 may not actually count against the CPU’s 16.)
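A quick tally of that lane math, assuming the worst case where everything is CPU-attached (device names and widths are taken from the build described above):

```python
# Worst-case tally of PCIe lanes against the i7-8700's 16 CPU lanes,
# assuming every device is CPU-attached (on Z390 boards, some of these
# likely hang off the chipset instead, freeing CPU lanes).
devices = {
    "HBA (SFF-8644, x8)": 8,
    "boot SSDs (x4, as counted above)": 4,
    "SATA expander #1 (x1)": 1,
    "SATA expander #2 (x1)": 1,
}
used = sum(devices.values())
print(f"{used} of 16 CPU lanes claimed")  # 14 of 16
```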

Any advice appreciated

Here is the dark side of chia farming mistakes: discarded hardware.

  • 4x HDD mounting kits (5.25" to 3.5")
  • A dozen unused SATA cables
  • 4x HDD enclosures (pretty good btw, paid a lot and used them just a little)
  • Old tower case (probably just recycling waste)
  • Regrets about buying 8x 10TB HDDs at an extremely high price. Short term it hurts ROI; long term it hurts the electricity/TB ratio
  • A bunch of external enclosures and cables, now e-waste
  • 2x overpriced consumer SSDs burned out in the early days plotting OG plots with the vanilla plotter. One of them repurposed; the other is almost dead, serving as a diskette

I don’t know how many drives you can attach to a single HBA. From what I have seen you can attach a boatload of HDDs to an HBA (through SAS expanders), but I don’t really know the details.

If you are running out of PCIe lanes on your farmer you might want to get yourself a secondhand dual-Xeon system; those often have a whole lot more PCIe lanes than a desktop setup.

Another option would be to just add another system with disks as a harvester.