Show me how you are moving your plots in a multi-PC setup

Nice network, are the node/plotter connected by 1gbps to the search?

Not sure what you mean when you say “to the search”. But the entire network is 1gbps.

Of the 5 machines I have on Chia, one of them is a plotter that then moves completed plots off to a harvester. Both machines have dual 10-gig cards. The ports are bonded on both machines and configured in balance-alb mode. I have a 10-gig switch between them.
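
For anyone wanting to replicate that, here's a minimal netplan sketch of a balance-alb bond over two 10GbE ports. This assumes Ubuntu with netplan; the interface names and address are placeholders, not the poster's actual config.

```yaml
# /etc/netplan/10-bond.yaml -- hypothetical example, not the poster's config
network:
  version: 2
  ethernets:
    enp1s0f0: {}   # first 10GbE port
    enp1s0f1: {}   # second 10GbE port
  bonds:
    bond0:
      interfaces: [enp1s0f0, enp1s0f1]
      parameters:
        mode: balance-alb        # adaptive load balancing
        mii-monitor-interval: 100
      addresses: [10.0.0.10/24]
```

One nice property of balance-alb is that, unlike 802.3ad/LACP, it needs no special configuration on the switch side.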

@WolfGT, what is the program you’re using to generate your network layout?

@cojarbi I don’t plot much now that I’m busy working and trying to build a pool, but I used my main rig, which is Linux. I would plot on a bunch of SSDs and move them to an external drive for the final destination. I have a few internal hard drives I connected via hot-swap to the rig. After they were full I would just move them to the farm and hook them up. #SneakerNet

Typo, I meant the switch. At 1 Gbps, aren’t you bottlenecking, or do you have a good enough distribution of plots that traffic stays within the switch “VLAN”?

Nice, that’s the more common approach, and it works when you have only one machine doing everything. I’m trying to find out the most efficient way to connect multiple PCs without bottlenecking.

Interesting, what transfer speed do you see? What NIC and switch are you using?

Unifi Switch XG 16

And I have no idea on speed, but she’s fast enough for this old man.

Most of my plots reside on my two NetApp filers, but the ones on other machines are just moved over my local network. I run a script before I go to sleep each night that copies the files and then deletes them from the harvester machines. During the day the main node/farmer is pointed at those drives and computers. Not the most efficient, but until I have time to build it out better, it is what it is. Most of my plots come from the 3 servers connected directly to the NetApp filers.
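
That nightly copy-then-delete job can be sketched like this; the directory names are placeholders, not the poster's actual script, and it verifies each copy before deleting the local file.

```shell
# move_plots SRC DST: copy each finished .plot from SRC to DST, verify the
# copy byte-for-byte, and delete the original only after the copy checks out.
move_plots() {
    src=$1
    dst=$2
    for plot in "$src"/*.plot; do
        [ -e "$plot" ] || continue                 # nothing to move tonight
        name=$(basename "$plot")
        if cp "$plot" "$dst/$name" && cmp -s "$plot" "$dst/$name"; then
            rm "$plot"                             # safe: a verified copy exists
        else
            echo "copy failed for $name, keeping local copy" >&2
        fi
    done
}

# Example (run from cron before bed):
#   move_plots /mnt/harvester/plots /mnt/netapp/plots
```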

As for NICs, I always pay up for Intel X540-T2s or Intel X710-DA2s.

Even though the XG 16 supports fiber/DAC, I’ve never had any luck with those sub-$50 HP takeoffs from eBay. They always overheat on me.

I also do not use fiber. I always go DAC or 10GBASE-T Ethernet. It doesn’t matter so much for Chia, but Lawrence Systems did an excellent video on YouTube about transfer rates, and for large batches of small packets the latency is much lower on copper. All of my runs are in the same rack, so I don’t need the distance fiber offers.

How are they connected physically?

No, the NASes have 4 ports per device. So each plotter has its own network port that it copies through, and the harvester has its own port as well, so there is no bottleneck. Just keep the traffic separated and it’s fine. No need for VLANs; just map the plotters/harvesters to a specific IP and it keeps the traffic where it needs to be.

In Google Drive I use a tool called drawio. It’s basically an online version of something like Visio, but free.

Single full node/farm/harvester. Single plotter. I migrate the plots from plotter (my workstation) with a NAS acting as an intermediary.

Plots are created on 13 x U.2 NVMe DC SSDs with a RAM cache sat in front of each SSD to ease the burstiness of writes, then placed on a single NVMe DC SSD before being moved off to the NAS via 20Gbit copper.

Then the plots are pulled from the NAS by the farmer to a local SATA DC SSD on the farmer over 10Gbit copper, which are then finally placed on the spinning rust HDDs.

A couple of PowerShell scripts running on the plotter and farmer, kicked off by Task Scheduler, keep everything ticking over: kicking off new plots automatically every 20 minutes up to the prescribed capacity of the plotter (I cap parallel plots around 50 because I still want to work or play games), migrating plots from plotter to NAS and from NAS to farmer, trimming the SSDs, checking SSD and HDD drive temps, placing plots on HDDs to maximize capacity, and so forth.
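
The poster's scripts are PowerShell, but the stagger decision ("start a new plot every interval, only while under the parallelism cap") is simple enough to sketch in shell. The cap, the process name, and `start_plot` are illustrative assumptions.

```shell
# Stagger sketch: run this every 20 minutes from cron or a timer; it starts
# one new plot only while fewer than MAX_PARALLEL plotter processes exist.
MAX_PARALLEL=50

count_plots() {
    # count live plotter processes; fall back to 0 when none match
    n=$(pgrep -cf chia_plot) || n=0
    echo "$n"
}

start_plot() {
    # placeholder for the actual plotter invocation
    echo "starting a new plot"
}

maybe_start_plot() {
    if [ "$(count_plots)" -lt "$MAX_PARALLEL" ]; then
        start_plot
    fi
}
```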

Another script queries the node, farm, wallet and netspace, ensures everything is running correctly, and displays that information on our home’s dashboard. A further script restarts any part of the full node/farmer/harvester/wallet within a minute or so if anything goes awry, or after a power event (e.g. an accidental reboot), so that everything comes back up correctly.

I might move to a multi-harvester setup in the future, but proof times on the farmer are still very much within a reasonable time window.

If you are looking for something new and different, try Power Automate Desktop. Here’s my “flow” that moves completed plots over the network to the drive on the farmer that has the most available space. This way multiple plotters aren’t all writing to the same destination disk at once.
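
Power Automate aside, the selection step of that flow (send the plot to whichever farmer drive has the most free space) can be sketched in shell; the `/farm/*` mount points are made-up examples.

```shell
# pick_dest: read `df -P` output on stdin and print the mount point with the
# most available space -- the "emptiest drive wins" rule described above.
pick_dest() {
    awk 'NR > 1 && $4 + 0 > best { best = $4; mnt = $6 } END { print mnt }'
}

# Example:
#   dest=$(df -P /farm/drive1 /farm/drive2 /farm/drive3 | pick_dest)
#   mv /plotting/finished/*.plot "$dest"/
```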

I move mine over the network to a shared folder on the drive I’m filling.

I have a NAS/server with two standard Ethernet ports: one is connected to my internet router and the other to my plotter through a switch, so the NAS/server passes internet through to the switch and thus the plotter. The NAS/server runs Windows Server 2019. I simply farm on the NAS/server and plot on the other system, with the plotting process writing finished plots to a temp storage drive on the NAS/server before I manually move them to the farming RAID drive. Ultimately this works for me because I can farm on one system and plot on the other at the same time with minimal loss in speed and time.

I use plotman, which handles the scheduling of plots and rsyncing them to their eventual destination. My 3 plotters each have a 1TB SSD as the destination dir for plots, but that’s just a staging location; plotman eventually rsyncs them over a dedicated gigabit network to their permanent location, a possibly overkill EPYC 7551 main node/harvester/farmer with 16 hot-swap bays and 5 LSI SAS cards with SFF-8088 connectors going to what I guess would be called a DAS array. That setup will scale up to 96 attached SAS disks before I need to think about adding NAS or remote harvesters.

When I plotted and farmed on the same machine, I still used an SSD for the initial destination, but had a systemd timer set up to move plots over to (local) SATA/SAS HDDs. With either approach the key is making sure that holding SSD is big enough to hold all the incoming plots before they’re rsynced away, and that depends on staggering, network speed, etc. I didn’t do the math; I just stuck a spare 1TB SSD in and it seemed to work, so I repeated that on the other plotters.
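
For reference, that systemd-timer approach looks roughly like this pair of units; the unit names, schedule, and script path are placeholders, not the poster's actual files.

```ini
# /etc/systemd/system/plot-mover.service (example names/paths)
[Unit]
Description=Move finished plots from staging SSD to HDDs

[Service]
Type=oneshot
ExecStart=/usr/local/bin/move-plots.sh

# /etc/systemd/system/plot-mover.timer
[Unit]
Description=Run the plot mover every 15 minutes

[Timer]
# every 15 minutes; Persistent catches runs missed while powered off
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now plot-mover.timer`.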

I use cases that hold 23 drives and use those machines as harvesters. As they fill, they move plots over SMB at 10G to another harvester.

If you have multiple systems you should be using Linux.

I just transfer the plots over the network to my server. My server has drive shares set up. If done properly, this is faster than USB. The best way to do it, and only if plotting speed is fast enough, is to fill an entire drive and move it to its farming location.
