100 Mbps over gigabit LAN

I’m increasingly I/O bound as I ramp up plotting speed, and I noticed that my local network transfer speeds are capped at 100 Mbps. I have gigabit LAN though, so I’m confused - all PCs are connected via ethernet to an unmanaged switch.

  • All my computers have ethernet ports with a speed of 1 Gbps (Intel GbE LAN on the NUCs, and Intel(R) 82579V on my PC mobo)

  • My unmanaged switch is gigabit capable (Netgear GS308)

  • All my ethernet cables are Cat6

  • Everything is coming from the NUCs via ethernet, through the switch, and into my PC tower - no matter how many transfers are happening, it never adds up to more than 100 Mbps.

I’m stumped. Any ideas?

Ugh, 100 Mbps is really slow.

Gigabit is slow, frankly, at ~111 MB/sec… if you’re only getting ~11 MB/sec then you’re hatin’ life.
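(For anyone else tripped up by the units, the conversion is just arithmetic. A quick Python sketch - the ~0.89 efficiency factor is an assumption chosen to land on the real-world figure above; actual overhead varies by protocol and tuning:)

```python
# Gigabit ethernet: line rate vs. what you actually see in a file copy.
LINE_RATE_BPS = 1_000_000_000        # 1 Gbps

raw_mb_s = LINE_RATE_BPS / 8 / 1e6   # 125 MB/s before any overhead

# Framing + TCP + file-sharing protocol overhead eats a chunk; 0.89 is
# an assumed efficiency factor that lands on the ~111 MB/s quoted above.
print(f"{raw_mb_s * 0.89:.0f} MB/s")  # -> 111 MB/s
```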

I guess start removing / substituting things and try to isolate the culprit. Bad cable? Bad switch? Are the lights on the ports showing the correct link speed?

Oh wow. I totally thought gigabit meant 1 gigabyte per second transfers. Uhh… let me change my question then.

I have gigabit LAN, which I just learned today means ~111 MB/sec. This has become a bottleneck for me, and my completed plots are starting to stack up, waiting to get transferred to the drives attached to the farmer.

Do people have recommendations about how to get around this constraint? 10 gigabit? Or should I buckle and just set up harvesters? (I tend to prefer the simple approach of a single full node.)

Full saturation of a gigabit LAN should let you transfer around 95 plots per day (a little more, but let’s go with that). Are you creating 95 plots per day?
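(Showing my work - a rough Python sanity check on that number; the plot size and the ~5% overhead factor are assumptions, not measurements:)

```python
# Ceiling on K32 plots/day over one saturated gigabit link (rough numbers).
PLOT_GIB = 101.4              # typical K32 plot size (approximate)
LINK_MIB_S = 119.2 * 0.95     # 1 Gbps ~= 119 MiB/s, minus ~5% assumed overhead

seconds_per_plot = PLOT_GIB * 1024 / LINK_MIB_S
print(f"{86_400 / seconds_per_plot:.0f} plots/day")   # -> ~94
```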

A consumer-grade switch such as the Netgear GS308 is advertised as having 16 Gbit of switching bandwidth, but that rarely holds true on consumer gear; what you usually get is X bits of data (a gigabit, in this case) per bank of (usually four) network ports.

So four ports will technically offer 8 gigabits of (full duplex) bandwidth, but what frequently happens is that those four ports are sharing two or four gigabits of bandwidth. You often need to pull the top off the switch to figure out what the switching chipset is, and then track down its data sheet to see how much the manufacturer fudged the Atari math.

The other problem is that, sure, you have 16 gigabits of full duplex bandwidth, but the interconnect between each bank of four ports is limited to two gigabits full duplex. Or four. Or eight. It again depends on the chipset, but frequently each bank of four ports has full-bandwidth switching to any other port in that bank, while the interconnect between banks is lousy.

Which is why the numbers on the box, much like NVMe SSD burst read/write speeds vs. sustained read/write speeds, aren’t quite so straightforward. It’s all Atari math - “Well, you see,” said Jack Tramiel, “we add up the clock speed of all the chips in the computer, and that’s how fast it is.” Surely apocryphal as to its source, but that’s pretty much what you’re staring at.
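(To make the bank math concrete, here’s a toy model - every number in it is hypothetical, since the real figures come from the chipset’s data sheet:)

```python
# Toy model of bank-limited switching. All numbers are made up;
# the real values depend on the switching chipset.
PORTS_PER_BANK = 4
PORT_GBPS = 1
INTERBANK_GBPS = 2   # hypothetical interconnect between banks

# Traffic that stays within one bank can use full port speed:
same_bank = PORTS_PER_BANK * PORT_GBPS        # 4 Gbps

# Traffic crossing banks is capped by the interconnect,
# no matter how many ports are talking:
cross_bank = min(same_bank, INTERBANK_GBPS)   # 2 Gbps

print(f"intra-bank: {same_bank} Gbps, inter-bank: {cross_bank} Gbps")
```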

If your plotters feed multiple farms, you can hook the plotters up to separate switches, with a common interconnect between those switches. Or you can try moving the plugs around between plotters and farms on the switch to see if that gets you what you need. If your machines run Linux and you have a decent network card, you can shotgun together multiple gigabit connections (link aggregation / bonding) for a speedup - that’s how I originally had my 40 Gbit QSFP+ card configured, and when I switched to 20 Gbit ethernet I did the same. You could have the plotters running at gigabit and the farm running at dual gigabit; see the sketch below for the one catch with bonding.
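(The catch: typical bonding modes hash each flow onto a single member link, so any one transfer still tops out at one link’s speed - it’s the aggregate across plotters that benefits. The hash and addresses below are illustrative, not the Linux bonding driver’s actual algorithm:)

```python
# Why bonding helps many plotters but not one big transfer: each flow
# (src/dst address + port tuple) is hashed onto ONE member link, so a
# single TCP stream never exceeds a single link's speed.
# This hash is illustrative, not the bonding driver's real math.
LINKS = ["eth0", "eth1"]

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    return LINKS[hash((src_ip, dst_ip, src_port, dst_port)) % len(LINKS)]

# Two plotters pushing at once can land on different links...
print(pick_link("192.168.1.11", "192.168.1.2", 49152, 445))
print(pick_link("192.168.1.12", "192.168.1.2", 49153, 445))
# ...but any one stream always maps to the same link.
```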

I have a friend shifting five NUCs’ worth of plots per day to a single farm node over gigabit, and he’s at around 80% of network load.

For reference, my NAS is connected to dual 10 Gig ports on the switch, but prior to that setup I ran all four gigabit ports on the back of the NAS to four separate banks of four ports on a 16-port mid-tier “enterprise” (haha! not!) switch before I eventually upgraded. Those four separate gigabit connections to four separate banks of ports ensured I wasn’t saturating the internal switching capability of the switch itself.


Thanks for the detailed response! And yep, I’m coming up on the limit of ~90 a day (six NUCs plus some random PCs), so I think I need to figure something out.

I’ll probably take this chance to move all of my farming over to Linux boxes, and try my hand at setting up a harvester. So half of my plotters will move plots to my main farmer, and the other half will move theirs to my harvester. This way I can split the load a bit - instead of all my plotters firehosing plots at one PC, they’ll go to two, and hopefully that won’t saturate my network all day.

I would start by changing the cables. That happened to me, and the cables were at fault.

You could also put a dual-port or single-port gigabit NIC in the farmer, plus a separate gigabit switch. Then one NIC in the farmer goes to one switch, and the other NIC goes to the second switch. Have half of your NUCs connect to one switch and the other half connect to the other. Total cost should be under $100.
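(Back-of-the-envelope on what the split buys you, using the thread’s ~111 MB/s real-world figure; the even split across NUCs is an assumption:)

```python
# Rough math on the two-switch split (figures assumed).
NUCS = 6
NIC_MB_S = 111   # real-world gigabit per NIC, per the numbers above
NICS = 2

total = NICS * NIC_MB_S   # ~222 MB/s into the farmer
print(f"{total} MB/s total, ~{total / NUCS:.0f} MB/s per NUC if all push at once")
```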

Hey hey, that worked! My farmer had two ports, so I plugged in another connection like you suggested (I had tried it before, but into the same switch), and I’m immediately seeing speed improvements. Looks like I’m back to one farmer. Thanks a ton for the help!

I would suspect that switch - specifically, I have OWNED 3 Netgear gigabit switches that have failed after SEVERAL years’ use. They don’t always fail completely, but often FAIL TO SYNC at anything over 100mbs.

I’m not clear whether you’re quoting MB (megabytes) or mb (megabits) in your statement.

I had to replace my Netgear GS316 for precisely that reason on Wednesday - it wouldn’t sync to anything at gigabit. I threw in a cheapo TP-LINK replacement and it’s all tickety-boo.

The Netgear switches (and probably many others) tend toward failing capacitors, which degrade the power supply inside the switch.

Yeah, the Netgears are prone to dry joints, board shorts and caps. Which also sounds like an awesome day at the beach. You can ascribe all this to either purposeful built-to-fail design or simply cutting one too many corners. Lots of consumer-grade networking equipment suffers the dry-joint issue due to inadequate ventilation, cramped boxes and a lack of thermal dissipation. You have to look at Netgear as “Can I buy this for the company without having to get approval?” and if the answer is “Yes,” then you can pretty much guarantee it will fail in about 3 years, often sooner.

Next up: All those individuals with precisely one data point stating “Well I’ve owned blah blah blah brand router that I inherited from my great grandfather and it still moves 12 carrier pigeons worth of USB sticks per day…”