Is a multi-harvester setup better than one harvester with disks mounted over CIFS/NFS?

Hi! I have several servers that share their disks with the main server over CIFS/NFS. I'm wondering whether the main server checks plots in parallel while farming.

For example, 6 plots were checked in about 5 s. Are those 6 plots checked in parallel? Would it be better to set up multiple harvesters?

6 plots were eligible for farming 28303e7e6a... Found 0 proofs. Time: 5.45674 s. Total 3118 plots

Yes – multi-harvester is gonna be better in general.

5-second harvester checks aren’t great, and the absolute limit is 30 seconds. As you add more servers / nodes, those lookup times add up and response times go up. Eventually you’ll be over 30 seconds and lose out on rewards.
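One way to keep an eye on this is to scan the harvester log for the "eligible plots" lines like the one quoted above and flag slow lookups. This is a minimal sketch, assuming the log line format matches the example in this thread; the 5 s warning and 30 s deadline are the numbers discussed here, and `slow_lookups` is just an illustrative helper name.

```python
import re

# Matches lines like:
# "6 plots were eligible for farming 28303e7e6a... Found 0 proofs. Time: 5.45674 s. Total 3118 plots"
LINE_RE = re.compile(
    r"(?P<eligible>\d+) plots were eligible for farming \S+ "
    r"Found (?P<proofs>\d+) proofs\. Time: (?P<secs>[\d.]+) s\. "
    r"Total (?P<total>\d+) plots"
)

def slow_lookups(log_lines, warn_at=5.0):
    """Return (seconds, line) pairs for lookups at or above warn_at seconds."""
    hits = []
    for line in log_lines:
        m = LINE_RE.search(line)
        if m and float(m.group("secs")) >= warn_at:
            hits.append((float(m.group("secs")), line.strip()))
    return hits

sample = [
    "6 plots were eligible for farming 28303e7e6a... "
    "Found 0 proofs. Time: 5.45674 s. Total 3118 plots",
]
print(slow_lookups(sample))
```

Point it at your debug.log lines and anything creeping toward the 30-second limit will show up immediately.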


@codinghorror I have a multi-harvester/plotter setup:
3 harvesters which are also plotting, plus 1 main node. My question is: is every signage point served to ALL harvesters on the farmer’s network?
Otherwise I would consolidate all my plots onto 1 harvester.
I am running everything in the cloud.

I don’t know. I don’t have enough experience to answer that question. It is kinda frowned upon to run multiple farmers on the same network, but it does work – and we’re only talking about harvesters here, which should be fine. I don’t know enough about harvester-only setups.

You are right, there’s no need for multiple farmers; 1 is enough per network.
My setup is multi-harvester. For some reason I can’t shake the feeling that each signage point is not being checked by ALL harvesters. If only I could verify it from the logs, but they only show a partial fingerprint of the signage point! :frowning:

As has been said, you’ll get latency blips and spikes in challenge responses with NFS/CIFS mounts … I think I’ve read somewhere that it’s not a supported, or maybe not a ‘recommended’, setup.

Given the total lack of information and confirmation that things are actually working right on the main node with harvesters connected, though, I do BOTH: my main node has everything connected via CIFS/NFS, AND the nodes those plots reside on are running harvesters. I can’t imagine that breaking anything, and it makes me a hell of a lot more comfortable until we get some solid confirmation and feedback in the UI that harvesters are connected and working properly.

It also still lets me turn on my backup main node if I need to reboot my primary one, and have all my harvesters and the 8444 forwarding from the router fail over to it when I bring the main node down (everything is pointed at an haproxy container running in my VM hypervisor cluster).
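For anyone curious what that haproxy failover might look like, here is a hypothetical sketch of a TCP frontend on port 8444 with a backup server. The hostnames and addresses are made up; only the port number and the haproxy-based failover idea come from the post above.

```
frontend chia_8444
    mode tcp
    bind *:8444
    default_backend chia_nodes

backend chia_nodes
    mode tcp
    # Traffic goes to the primary full node; the "backup" server only
    # receives connections when the primary's health check fails.
    server primary 192.168.1.10:8444 check
    server backup  192.168.1.11:8444 check backup
```

With `mode tcp`, haproxy just forwards raw connections, so the Chia peer protocol passes through untouched.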
