Windows NAS network sharing: NFS vs. SMB

I’m looking at this because I’m desperate; I’m using 5 TerraMaster F5-422 devices, all set up as RAID 0 stripes, and regularly running into the hard-coded 30 second response time limit for harvester proofs.

This article seems to indicate that NFS is consistently faster than SMB:

And Windows 10 Pro (the OS I am plotting and harvesting on) does support NFS as an addon:

SMB is the default in Windows, and I ran the command to check: I’m using the latest and greatest 3.1.1 protocol. Has anyone else in Windows 10 land used NFS to connect to NAS devices and seen perf improvements?
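For reference, the check is the stock PowerShell cmdlet for active SMB connections; run it while a share is mapped and the Dialect column shows the negotiated protocol version:

```
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect
```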

I’m tempted to give it a shot because my NAS definitely supports it, and … well, I’m running out of options here.

OK! I bit the bullet and

  • disabled SMB on the NAS
  • enabled NFS on the NAS (so it is now, by definition, the only method of connecting to this NAS)
  • installed NFS support in Windows 10 (under optional items)
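That last step can also be scripted from an elevated PowerShell prompt; these are the client feature names as I understand them on Windows 10 Pro (verify what your build calls them with `Get-WindowsOptionalFeature -Online` first):

```
Enable-WindowsOptionalFeature -Online -NoRestart -FeatureName ServicesForNFS-ClientOnly
Enable-WindowsOptionalFeature -Online -NoRestart -FeatureName ClientForNFS-Infrastructure
```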

I also had to enable access to the /public folder in Shared Folders under the NFS tab as read/write. This is probably specific to the TerraMaster NAS I’m using, but here it is for completeness:

I was then able to map drive letters to each NAS, and it has to be using NFS by definition, because SMB is disabled on the NAS. I mapped drives from the command line like so

mount -o anon \\\mnt\md0\public z:

repeated for each device, one drive letter per:
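For five devices that ends up looking something like the following; note the IP addresses here are placeholders, not my real ones, and Windows `mount.exe` also accepts `rsize`/`wsize` options if you want to experiment with larger transfer sizes:

```
mount -o anon \\192.168.1.101\mnt\md0\public v:
mount -o anon \\192.168.1.102\mnt\md0\public w:
mount -o anon \\192.168.1.103\mnt\md0\public x:
mount -o anon \\192.168.1.104\mnt\md0\public y:
mount -o anon \\192.168.1.105\mnt\md0\public z:
```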

Let’s see how this goes versus SMB… the drive shares are definitely live and read-write, I can also test it from my Windows 10 Pro workstation as well as the farmer…


OK I can immediately confirm this is slower… much slower:

2021-04-24T20:44:23.256 harvester chia.harvester.harvester: INFO     11 plots were eligible for farming 083b75ea68... Found 0 proofs. Time: 132.42211 s. Total 3958 plots
2021-04-24T20:44:23.256 harvester chia.harvester.harvester: INFO     7 plots were eligible for farming 083b75ea68... Found 0 proofs. Time: 138.46899 s. Total 3958 plots
2021-04-24T20:44:23.256 harvester chia.harvester.harvester: INFO     8 plots were eligible for farming 083b75ea68... Found 0 proofs. Time: 126.24997 s. Total 3958 plots
2021-04-24T20:44:23.256 harvester chia.harvester.harvester: INFO     4 plots were eligible for farming 083b75ea68... Found 0 proofs. Time: 118.98434 s. Total 3958 plots

That’s 100+ seconds to check proofs. Switching back to SMB… I’ll take the opportunity to reboot the devices while I’m at it just in case.
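If anyone wants to quantify this on their own setup without waiting for harvester logs, here’s a rough Python sketch — to be clear, this is NOT the actual chia harvester code, just my approximation of its seek-heavy access pattern (a handful of small random reads per eligible plot):

```python
# Rough latency probe: time a few random 16 KiB reads from one plot file,
# approximating the seek-heavy pattern of a harvester proof check.
# NOT the real harvester code; numbers are only useful for comparison
# (e.g. mapped NFS/SMB drive vs. local disk).
import os
import random
import time

def probe_plot_latency(plot_path, reads=4, chunk=16 * 1024):
    """Return seconds spent doing `reads` random reads from one file."""
    size = os.path.getsize(plot_path)
    start = time.monotonic()
    with open(plot_path, "rb") as f:
        for _ in range(reads):
            # seek to a random offset that leaves room for a full chunk
            f.seek(random.randrange(max(size - chunk, 1)))
            f.read(chunk)
    return time.monotonic() - start
```

Run it against a few plots on the mapped drive, then against the same files on local disk, and compare.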


I can almost guarantee this is the Windows NFS client driver! I run NFS shares on all my Linux boxes (currently 5 systems) and NFS is blazing fast for migrating plots between systems. The NAS may also be a contributing factor, because I think NFS is “least supported” on most of these consumer devices.

I just got to the point where I’m experimenting with external storage devices, as I don’t want to have to build a separate farming setup. But it may come to that, based on similar experiences I’ve seen in the community with regard to transfer latency and bandwidth.

I’m about to play with one of these:

will report back with results :smiley:

Warning: one of the Orico dual drive enclosures corrupted data on both drives for me and I lost plots. Just FYI. Same type, with the magnetic top closure.

Interesting. I can say I was successfully farming about 70TB of plots on my Windows 10 Pro workstation, with the plots themselves hosted on a Synology NAS over a gigabit connection, and won “several” TXCH and XCH. I have since moved to a dedicated farmer with DAS, but I never ran into the 30s issue. I assume you are no longer farming from a NAS at this point.

Yeah one NAS is no problem. When I had 3,4,5… that’s when the problems started. It was definitely exacerbated by slapping all the files in one folder.
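For anyone wanting to un-slap theirs, here’s a quick sketch of what I mean by splitting into subfolders — the bucket naming and size are arbitrary choices of mine, and remember chia needs each new subfolder added to its plot directories afterwards:

```python
# Split a flat directory of *.plot files into numbered subfolders
# (bucket-000, bucket-001, ...) of at most `bucket_size` files each,
# so no single directory holds thousands of plots.
import os
import shutil

def bucket_plots(src_dir, bucket_size=100):
    """Move *.plot files from src_dir into capped subfolders."""
    plots = sorted(f for f in os.listdir(src_dir) if f.endswith(".plot"))
    for i, name in enumerate(plots):
        bucket = os.path.join(src_dir, "bucket-%03d" % (i // bucket_size))
        os.makedirs(bucket, exist_ok=True)
        shutil.move(os.path.join(src_dir, name), os.path.join(bucket, name))
```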


Thanks for the heads up! Jeff, stop “discovering” so many ways to lose plots!!


I got a NAS, and running services on the NAS makes things much worse.

@codinghorror any idea if 8 drives in RAID 5 is better than 8 individual drives set up as basic drives?

Since moving the farm to the NAS, my >30 second farming warnings have disappeared, but I’m getting these now:


@codinghorror is going to say don’t use a NAS at all, I bet you :wink:

I would agree, if all my plots were not trapped on the NAS.

Criminal that the devs never tested this on a NAS.


I feel your pain @bertybassett . GROUP HUG MY MAN. Start exfiltrating data ASAP… I was “shotgunning” data off two USB ports at once on each NAS… then copying data via the network port simultaneously for three-way exfiltration…

  1. usb port 1 → external USB 3.5" hdd
  2. usb port 2 → external USB 3.5" hdd
  3. network port → external USB 3.5" hdd (on a diff computer)
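On Windows the network-port leg of this can be a one-liner with stock robocopy; the source and destination paths below are placeholders, and `/MOV` deletes the source after copying, so test on something expendable first (`/J` is unbuffered I/O, good for big sequential files; `/R` and `/W` are retry count and wait seconds):

```
robocopy \\nas\mnt\md0\public D:\plots *.plot /MOV /J /R:2 /W:5
```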


As of today I have exfiltrated two NASes completely :raised_hands: and only two left… turns out 90TB is a LOT of data to exfiltrate…


thanks for the hug, lol.

So you are dumping the NASes then? Yeah, I’ve got 55TB but nowhere to host it unless I build something, but I’ve got 8 HS so I would need 2 PCs.

Now that I’ve moved the farmer to the NAS (in Docker) I don’t get those errors any more; now I get different ones about missed signage points.

Yeah it meant I had to buy a LOT of hard drives, as you must double-up to copy them, but I’m glad I did anyway because… drive shortage, and I plan to fill a bunch anyway.

(I will eventually hit my limit, otherwise this turns into a particularly scary episode of hoarders…)