My Adventure with 10 Gigabit Ethernet and Linux

📅 January 13, 2021
“Will 10 gigabit Ethernet work with a desktop Linux distribution?”

Curious to find out how well 10 gigabit Ethernet would work with a desktop version of Linux, I embarked on an experiment to see what hardware is required, how easily it connects, and how well it performs with Linux Mint 20.

My experiment was a resounding success, and everything proceeded better than I expected! In fact, it was easy. However, the world of 10 gigabit Ethernet (10GbE) contains differences not found in the 1 gigabit Ethernet (GbE) world. From hardware to software, there is learning required to make it all work. 10GBase-SR. SFP+. 10GSFP+Cu. Transceivers. DAC. Jumbo frames. These are terms you will not find on the side of a box in the electronics department of your local store. We are dealing with a different puzzle, so the pieces are different.

Here is my journey into this exciting world by setting up a very simple 10GbE network using Linux utilizing 10GBase-SR fiber optics and SFP+ DAC cable to hook systems together.

Overview

  • Quickie 10GbE Information
  • The Plan
  • Hardware – Setting It Up
  • Software – Let’s Do Something Fast!
  • Conclusion

Quickie 10GbE Information

First of all, 10GbE is nothing new. In fact, the standard that defines 10GbE over fiber optic cable, known as 802.3ae and called 10GBase-SR, dates back to 2002. That is almost twenty years ago as of the writing of this article.

“How fast is 10GbE?”

10GbE is ten times faster than gigabit Ethernet (GbE) defined under 802.3ab dating back to 1999.

“That’s a long time ago. Why isn’t 10GbE more common today on consumer devices if it is so fast?”

From my experiments and experience, it is simply not needed on most desktop systems or consumer devices because GbE is plenty fast and dirt cheap right now.

By comparison, 10GbE is much more expensive, and, despite being speedy fast by comparison, the average consumer would rarely, if ever, utilize the speed boost and tend to remain around the gigabit mark anyway. Ask yourself, “Do I really need to spend ten times the money for a 10GbE device just to stream video over the Internet or to connect my doorbell to the Internet or to connect a surveillance camera to the home network?” Even with today’s (2021) proliferation of consumer network devices, 10GbE is far more than most people would ever need.

When you begin looking around at how many devices connect to the Internet and the throughput they actually use, it becomes clear why 10GbE is making slow progress in the consumer world.

This might change in a future spurred forward by 5G, the cell phone market, WiFi 6, and demands for faster wireless communication, but until then, GbE is the preferred balance between speed and cost.

“Do I need to use server-grade hardware to use 10GbE?”

No, absolutely not. You can easily purchase 10GbE NICs (network interface cards) that plug into any compatible PCIe slot in your existing computer and it will become 10GbE compatible. Certain elements, such as fiber optic cable and associated transceivers are easily within a consumer budget depending upon the brand (like anything else).

Servers and telecommunications companies rely heavily on 10GbE and up (40GbE, 100GbE, 200GbE, and more) because they must transfer copious volumes of data. The everyday user does not need those speeds, so 10GbE is plenty. The more elaborate server hardware is not required.

10GbE switches can be pricey compared to GbE switches, but low-cost 10GbE switches exist that run fine for homes and small networks.

“Does 10GbE work with Linux?”

Yes! I set up a fully-working 10GbE network that connected computers together via fiber optic and SFP+ DAC (direct attached copper) cable using a low-cost 10GbE switch that was backwards compatible with GbE networks. This article shows my results.

Short Answer: 10GbE is 100% plug and play with Linux Mint 20. Just plug the 10GbE cards in and go. All existing software works flawlessly without the need to install drivers or special software.

“What about Linux and Windows 10? Do they communicate well over 10GbE?”

Yes. I tested file transfers between Linux Mint 20 and Windows 10 systems. Windows 10 required some setup with driver files, but that was all. Linux requires no driver installation. Once Windows had the driver installed, Linux and Windows could transfer files between each other using the 10GbE link as fast as the hard drives would allow.

“As fast as the hard drives would allow? Are you saying that there are limits to 10GbE?”

Yes, and this is the most important discovery I made since no review or 10GbE article I read mentioned it. If you learn one thing about 10GbE, learn this:

Having 10GbE does not mean your files will transfer at 10 Gb/s.

This is so important, and knowing it will spare you future disappointment, that it bears repeating:

Having 10GbE does not mean your files will transfer at 10 Gb/s.

If that fact has not sunk in yet because it contradicts any rosy illusions you might have about fancy 10GbE fiber optic networks, please read it again and again until it sinks in. (10GbE results are demonstrated later to support this statement.) It matters not whether you are using Linux or Windows:

Having 10GbE does not mean your files will transfer at 10 Gb/s.

10GbE has a theoretical maximum throughput of 1250 MB/s (b means bits, B means bytes). However, I never saw these speeds even under optimal conditions. In reality, the best I could achieve was ~940 MB/s transfer rates, and, after verifying my results with various online sources, this is normal. There are three reasons for this.

Reason 1: Overhead

Just as overhead exists with GbE, it also exists with 10GbE. There is always extra data that must be transferred for error-checking, protocols, metadata, and such, so there will always be overhead when transferring an Ethernet frame from one system to another. There is nothing you can do about this because this is how networks operate. This affects the actual transfer rate we see in the end.

Reason 2: Your hardware must be fast enough

With GbE, nearly every file transfer maxes out the connection at ~116 MB/s, but with 10GbE, a single file transfer will likely not max out the connection.

10GbE was faster than I expected, but I rarely reached the expected upper 900+ MB/s transfer rates. Why? Because the CPU must be fast enough, there must be enough RAM, and, most importantly, the hard drives must be fast enough to sustain 900+ MB/s read and write speeds on both computers. If anything in the file transfer chain is too slow, then the transfer slows down to the speed of the slowest link despite 10GbE being capable of handling faster speeds.

For example, if you are trying to transfer a 4GB file over 10GbE and the file is stored on a mechanical SATA drive with a maximum read speed of 80 MB/s and the destination is an SSD with a maximum write speed of 550 MB/s, the file will be limited to 80 MB/s because the slow hard drive is the bottleneck. After all, how can you make a hard drive supply data beyond its physical limits? 10GbE will not magically make your hard drive read and write faster.

If you use 550 MB/s SSDs on both systems, then the transfer will be limited to 550 MB/s max over the 10GbE network because the SSDs cannot supply or handle data faster than that. SATA-III is limited to 6Gb/s, and some of that is overhead. There is no way an SSD can saturate a 10GbE link unless operating as part of a RAID array or bank of drives.

As another example, I tested transfers to a RAID-6 array containing multiple mechanical drives. While data transferred over the 10GbE link at ~245 MB/s, it did so in bursts. Transfer at 245 MB/s. Wait. Transfer at 245 MB/s. Wait. Repeat. While that data transferred over the 10GbE network fine, it filled up the buffer on the destination system too quickly, and the file transfer had to wait for the data to physically write to the drive and clear the buffers before allowing more data. The end result was a file transfer that took as long to complete as a single 80 MB/s hard drive. Sure, the data was transferred over the 10GbE network faster, but the destination system had to wait for the slow hard drives to finish writing their buffers, and this slowed down the overall file transfer.
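If you want to find the bottleneck before blaming the network, a quick local benchmark of each drive helps. Here is a minimal sketch using hdparm and dd; the device name and destination path are only examples and will differ on your systems:

# Rough sequential read speed of the source drive (replace /dev/sda with your device)
sudo hdparm -t /dev/sda

# Rough sequential write speed of the destination (writes a 4 GB test file you can delete afterward)
dd if=/dev/zero of=/mnt/destination/testfile bs=1M count=4096 oflag=direct status=progress

If either number comes back well below 900 MB/s, that drive, not the 10GbE link, will set the pace of the transfer.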

Reason 3: The protocol might be too slow

Oh, my. If there is one thing I did not know about 10GbE it is this: Protocols Matter!

Different protocols, such as SSH or FTP, are faster or slower than others, and, thus, software that depends upon a given protocol will be faster or slower too. We do not notice speed differences among network protocols with GbE because the link itself is the bottleneck, but the difference is very noticeable with 10GbE.

For example, SSH involves a layer of encryption. FTP has no encryption. With a GbE network, files transfer at ~116 MB/s for both SSH and FTP. But with 10GbE, SSH tops out at around 145-222 MB/s while FTP can top out around 940 MB/s under optimal conditions or hover around 700+ MB/s. Same file transferred. Different protocols.

This means that a program like rsync, which depends upon SSH for its connection, is also limited to 145-222 MB/s. There is no getting around it. rsync is slow and does not utilize 10GbE to its fullest.

On the other hand, Filezilla transfers the same files over FTP at 500+ MB/s if quality SSDs are used and both computers are fast enough to handle 10GbE.

This protocol/software speed difference was the biggest surprise for me. Nothing I read during research mentioned it. I thought that if I plugged in 10GbE NICs, everything would function as before — only ten times faster. This is not true. You need to have fast systems that can support 10GbE in order to see 944 MB/s transfer rates, and even then that is a rarity for a single file transfer.
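One way to see the protocol penalty without involving the drives at all is to stream throwaway data from memory to memory. This is only a rough sketch, assuming the remote machine is reachable at 192.168.100.50 (the same example address used for iperf3 later) and that a login user exists there; it is not how the transfers in this article were measured:

# Encrypted path: push 10 GB of zeros through SSH and discard it on the far end
dd if=/dev/zero bs=1M count=10240 | ssh user@192.168.100.50 'cat > /dev/null'

# Unencrypted path: first run  nc -l 5001 > /dev/null  on the remote machine, then:
dd if=/dev/zero bs=1M count=10240 | nc -N 192.168.100.50 5001

dd prints the achieved throughput when it finishes, so the two runs give a rough feel for how much of the gap is the protocol rather than the storage.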

“A single file transfer? What about multiple transfers occurring simultaneously?”

I found that transferring a single file rarely reaches 900+ MB/s to saturate the 10GbE link. You would need NVMe to NVMe with fast CPUs. However, you can perform multiple transfers at once if the systems are fast enough. This provides clients with more throughput. Meaning, you could use 10GbE on a media server serving 10 clients, and each client, downloading/streaming simultaneously, could have its link operating at 1 Gb/s — just like link aggregation but with a single network cable (optical, DAC, or RJ-45).

I was able to successfully transfer multiple files among different systems on a 10GbE network and watch the throughput increase to 400-600 MB/s even though no individual file transfer reached that speed itself. 10GbE means you have more available throughput.

The Plan

I had the opportunity to configure and test a number of different 10GbE devices in the computer lab. The idea was to combine various 10GbE hardware and cabling to see how well they would work together. My 10GbE project uses fiber optic and DAC (direct attach copper) cables to connect the hardware together.

Combination of 10GbE with GbE links.

Since not all devices require 10GbE and many operate at 100/1000 speeds, it is important that 10GbE systems can communicate with GbE systems on the same network.

Hardware – Setting It Up

“What is needed to hook all of this together?”

I had the opportunity to use relatively low-cost hardware with great success. It is not necessary to invest in painfully expensive corporate-level hardware to achieve 10GbE speeds and reliability for smaller-scale networks. Here is a list of the hardware I used, for reference and price checking:

Note: Nobody sponsors this and this is not an endorsement for any product. This is simply what I used. This is a project of my own design. Any links to Amazon are affiliate links provided for more information and to help cover the time spent writing and researching this article since I earn a commission on qualifying purchases at no extra cost to readers.

Any other hardware was already on hand.

TRENDnet TEG-30284 Managed Switch

This has turned out to be a feature-packed switch that does everything I want and more. However, it contains a VERY noisy fan.

TRENDnet TEG-30284 Managed Switch

There are 28 ports: 24 gigabit, and 4 are 10GbE. The GbE and 10GbE ports can “talk” to each other, so they may share the same network ID. Also, the ports are not shared. With some switches, the four 10GbE ports are shared with the last four GbE ports. Meaning, you can use one or the other, but not both. Not so here. There are 24 true GbE ports, and 4 10GbE ports for a total of 28 individual ports. The four 10GbE ports are true 10Gb/s SFP+ ports, and all RJ-45 ports support regular gigabit Ethernet.

The TEG-30284 supports a web interface. Simply log into the switch from a browser and configure to your heart’s delight! Shown here is the SNMP Group Access Table.

One of the features I wanted for this switch was SNMP (Simple Network Management Protocol). This allows an NMS (network management station) to display all sorts of information about what is happening on the network.

An NMS connected to the TEG-30284.

It is the NMS software that generates the fancy graphs, not the switch. The NMS can be installed on the Intel NUC using LibreNMS to monitor your network. I had a lot of fun playing with this switch. There are a wealth of options to explore, so if you are interested in what a managed switch can accomplish without spending thousands, this might make a good entry-level switch to try.
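To get a feel for what the switch exposes over SNMP before pointing a full NMS at it, you can poll it by hand from any Linux box. A minimal sketch, assuming the snmp tools are installed, the switch answers at 192.168.10.200, and a read-only community string of public has been configured on it (the address and community string here are placeholders, not values from this project):

sudo apt install snmp

# Walk the interface description table (IF-MIB ifDescr) to list the ports
snmpwalk -v2c -c public 192.168.10.200 1.3.6.1.2.1.2.2.1.2

# Walk the 64-bit inbound byte counters (IF-MIB ifHCInOctets) per port
snmpwalk -v2c -c public 192.168.10.200 1.3.6.1.2.1.31.1.1.1.6

An NMS such as LibreNMS essentially polls counters like these on a schedule and graphs them.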

Noisy Fan

The fan in the TEG-30284 is annoyingly loud. I tried replacing it with the Noctua NF-A4x20 FLX Premium Quiet Fan, which is very quiet by comparison, but this did not work. The fan pinout of the switch is non-standard, and it requires a 12v fan that will power on at 5v. The Noctua requires 12v, so it will not spin when connected to the TEG-30284 header.

TEG-30284 Interior. The noisy exhaust fan is located on the left in the picture.

Noctua NF-A4x20 FLX. This is a very quiet fan. A PWM fan will not work with the TEG-30284.

On the left is the default Y.S. Tech fan supplied in the TEG-30284. It feels cheap and low quality, but it spins at 5v despite being a 12v fan. Its pinout is different from the Noctua on the right. The Noctua is whisper quiet, requires fewer amps, and has a more substantial build quality. Plus, it uses the standard 3-pin fan pinout found in computers.

A drop-in replacement using the Noctua will not work. Neither will the Noctua work if the pins are rewired on the Noctua header because the TEG-30284 regulates the fan voltage, which hovers at 5.2v most of the time. The Noctua requires 12v to spin. So, the Noctua never spins except at power on for a brief moment as the full 12v is momentarily supplied to the fan. As a result, the fan remains motionless while the red fan failure light illuminates on the front panel of the TEG-30284.

The TEG-30284 Y.S. Tech fan connector is on the left, and the Noctua fan connector is shown on the right. The red and black pins on the Noctua can be swapped, but the fan will still not work due to the voltage being too low.

For now, I reinstalled the default, noisy fan and decided to save this for a future project.

10GbE NICs

We cannot use existing gigabit NICs to reach 10Gb/s speeds because they are limited to 1Gb/s. 10GbE network interface cards (NICs) are required.

X520-10G-2S Dual SFP+ PCIe v2 NIC produced by a company called 10GTek. The box also includes a half-height bracket and a CD-ROM containing the Windows driver. Linux does not need a driver. Just plug this card into Linux and go.

Two SFP+ ports allow connection to two different networks from one computer. This creates a multi-homed system. It also allows for different types of experiments and flexibility!

The X520-10G-1S is pretty much the same card but with one SFP+ port instead of two. Its packaging also includes a half-height bracket and a Windows driver CD-ROM (not shown because it is boring to look at). No driver installation is necessary with Linux. Just plug it into a Linux system and power on. It just works in Linux.

In the plan, Computer A will contain the dual SFP+ NIC, while Computers B and E contain the single SFP+ NICs. I tried other computers by swapping the NICs instead of purchasing one for each.

About the NICs

Both are 100% plug-and-play compatible with Linux. I tested them with Linux Mint 20, and they work out of the box without any need for manual driver installation. Make sure the power is turned off when installing.
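If you want to confirm that Linux actually picked the card up, the stock tools are enough; nothing extra needs to be installed. A quick sketch (the exact interface name will differ on your machine):

# Look for the 10GbE controller on the PCIe bus
lspci | grep -i ethernet

# Cards based on the Intel 82599 controller are normally handled by the in-kernel ixgbe driver
sudo dmesg | grep -i ixgbe

# List the network interfaces the kernel created
ip link show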

Both cards are PCIe v2 cards, but they will work in both PCIe v2 and v3 slots. I installed one in a PCIe v3 x16 system and another in a PCIe v2 x16 system, and both worked flawlessly at the full 10GbE speeds. Do not let the PCIe v2 designation make you think these are slow cards or that your network performance will suffer. They worked fine.

Both cards are produced by a company called 10GTek that advertises them as being comparable to the Intel X520 cards and compatible with the 82599 controller. I noticed no performance problems, and they worked great for me. This might or might not be important to you in deciding what transceivers and DAC cables to use. Why? Because of SFP+.

What is SFP and SFP+?

Did you notice something different about these NICs compared to gigabit NICs? Have another look and see if you can spot it:

The network port on the X520-10G-1S (as well as the X520-10G-2S) looks a little…different from an RJ-45 port.

SFP stands for Small Form-factor Pluggable. It is a standard network interface that describes the type of connection. There are two main types of SFP:

  • SFP is for Gigabit Ethernet.
  • SFP+ is for 10 Gigabit Ethernet.

You want SFP+ today. SFP is not designed for 10GbE, but SFP+ is. All hardware I am using in this project uses SFP+.

SFP+ ports look like deep, hollow squares. This is because transceivers plug into them. A transceiver determines what kind of physical medium will carry the network signals. This can be multi-mode fiber optic cable, single mode fiber optic cable, Cat 6/6a/7/8 cable, or DAC cable. By using SFP+, all we have to do is change the transceiver to connect the NIC to a different medium without buying and replacing a different NIC.

For example, using the card shown in the picture above, we can plug in an optical transceiver to connect the computer to a fiber optic cable. Then, we can swap that transceiver for an RJ-45 transceiver in order to connect the NIC to a Cat 6a network using Cat 6a cable terminated with an RJ-45 connector. We leave the NIC in the computer, and there is no need to buy a different one. All we did was change the transceiver.

This is what SFP is all about. Change the transceiver, and you change the medium. Other SFP variants exist, such as QSFP (Quad SFP) and OSFP (Octal SFP), but these are specialized server standards that go far beyond anything we are doing here. Most of the time, you will see SFP+.

Transceivers

SFP+ might be standardized, but the transceiver technology is not. A transceiver is what plugs into an SFP+ port to connect the NIC to a network. Whether it be fiber optic, Cat 6a, or a DAC cable, there needs to be a transceiver present.

Shown here are fiber optic transceivers that plug into SFP+ ports to enable links over fiber optic cable.

They are shipped in metal containers much like a pack of breath mints.

Inside a container, the transceiver is packed in what appears to be anti-static foam. A small installation manual is included.

Two 10GBase-SR transceivers. These contain lasers, and they are designed to transfer signals up to a distance of 300 meters. This is three times the 100 meter distance limit of 1000Base-T twisted pair copper cabling.

Two transceivers are required to create a link between a computer and a switch. The transceivers plug into SFP+ ports (one on the computer, and the other on the switch), and then the optical cable plugs into the transceivers at both ends using the LC-LC connector.

Each transceiver contains the circuitry needed to convert electrical signals into light. While transceivers can enter the territory of exorbitant prices, the transceivers shown here are inexpensive and work fine at the full 10Gb/s. I had no problems, and they are 100% compatible with Linux, the NICs, and the TEG-30284 I tried.

Transceivers are Hot-Swappable

This means you can remove and insert transceivers while the computer or switch is powered on. They are designed for this, so there is no need to turn off the computer or power down the switch in order to connect a transceiver. Just treat them like you would a Cat 6 network cable.

Transceiver Compatibility

While SFP+ is an agreed-upon standard, the transceiver compatibility among different networking vendors apparently is not. After much research, I learned that it was important to match the correct transceiver with the correct manufacturer.

For example, if you have an Intel NIC, use an Intel-compatible transceiver because a Ubiquiti or Cisco transceiver will not work despite fitting in the SFP+ port. If you have a Cisco switch, then use a Cisco-compatible transceiver because an Intel-compatible transceiver will not work.

From my perspective, this is ridiculous because it only adds an extra layer of confusion and guesswork. Transceivers can be tremendously expensive at the high end, so what if you make a mistake and order one that is incompatible but you cannot find that out until after you test it? This is a waste of time and money.

To be on the safe side, I checked and double-checked which transceivers would work with the TEG-30284 and the X520 NICs. Note that you do not need the same transceiver on both ends of the cable. You can have a Cisco transceiver on one end connected to a Cisco switch and an Intel transceiver at the other end of the cable connected to an Intel NIC. That will work.

For the TEG-30284, users recommended Cisco-compatible transceivers, so I ordered the 10GBase-SR transceiver compatible with Cisco hardware. For the X520 NICs, both were advertised as Intel 82599 compatible, so I ordered the Intel-compatible transceiver for use on the end of the optical cable that would plug into the NIC.

(Top) The transceiver labeled “For INT” is for Intel-compatible network hardware, and (Bottom) the transceiver labeled “For CSC” is intended for Cisco network hardware.

Did It Matter?

I had to try it out. While the transceivers did indeed work properly as Intel-to-NIC and Cisco-to-TEG-30284 switch, what would happen if I reversed them by plugging the Cisco-compatible transceiver into the Intel-compatible NIC and the Intel-compatible transceiver into the TEG-30284 switch?

It still works!

10GbE performance was the same, and both transceivers were 100% compatible with all of the NICs and the TEG-30284 switch. Whether standards have been improved or manufacturers have made their products compatible with each other, I do not know. But it turns out that this was not an issue for the hardware listed in this project. I could have purchased two of the same type of transceiver (either one), and it would have worked.

Fiber Optic Cable

“What kind of fiber optic cable should be used?”

Unlike transceivers, cable is standardized down to the color. Yes, color. Fiber optic cable for this project requires multi-mode cable. I used OM4 LSZH cable.

Two different lengths of OM4 fiber optic multimode cable.

Rather than splice my own fiber optic connectors, I chose premade cables in the lengths I wanted to test. This is patch cable, and it works well.

Fiber optic cable is labeled as OM1 through OM5. This is multimode cable intended for the transceivers mentioned above. The color of the jacket indicates the type of optical cable and its intended speed rating at the 850nm wavelength:

  • OM1 – Orange. LED. 1Gb/s up to 275 meters or 10Gb/s up to 33 meters.
  • OM2 – Orange. LED. 1Gb/s up to 550 meters or 10Gb/s up to 82 meters.
  • OM3 – Aqua. Laser. 10Gb/s up to 300 meters or 40/100Gb/s up to 100 meters.
  • OM4 – Aqua. Laser. 10Gb/s up to 550 meters or 40/100Gb/s up to 150 meters. (My choice)
  • OM5 – Lime green. Laser. Fastest and newest standard as of the time of this writing.

OM denotes multimode cable, and OS denotes single mode cable. Single mode cable is designed for very, very long distances and should be kept as straight as possible for best performance. Its transceivers are more expensive, and it is not suited for a project like this. Multimode is designed for patch cables and projects like this involving shorter runs and usually many bends.

The higher the OM quality cable, the faster the speed over longer distances. OM5 is overkill while OM1 and OM2 are older technology. OM1 and OM2 would work for this project, but the cost between those cables and the better OM3 and OM4 is negligible. There is little difference between OM3 and OM4 besides possible distance, so I went with the better quality OM4. Fiber optic cable is surprisingly inexpensive, so I purchased the best within reason.

Other cable colors exist, but these are nonstandard. I would recommend adhering to the standard colors in order to make it easy to identify which cables are designed for which speeds when looking at a clustered cable bundle.

What is LSZH?

LSZH means Low Smoke Zero Halogen, and it describes the cable jacket. It has to do with fire standards and building codes, which can vary by location, so check yours if you plan to perform an installation.

LSZH cable is designed to emit hardly any smoke if it burns. It is supposed to be more fire-retardant so the cable does not act like a fuse or cause people to inhale noxious fumes during a fire.

Almost all fiber optic cables I saw were LSZH and, according to their descriptions, suitable as riser cable for vertical runs. The other type was plenum cable. LSZH and plenum ratings only apply to the jacket construction, not to the 10GbE signaling.

LC-LC Connector

Fiber optic cable, such as OM4, does not use an RJ-45 connector. It uses the common LC-LC connector instead.

The same cable showing the LC-LC connector at each end. This connector is about the same width as a single RJ-45 connector. It’s small. Notice that 1 and 2 are reversed at both ends of the cable.

This fiber optic cable is known as duplex cable because there are two fibers: one fiber for transmitting, and one fiber for receiving. If it were a single cable with only one fiber and connector, it would be called simplex.

The T-shaped plugs protruding out from the connectors you see in the picture are caps that cover and protect the delicate fiber optic ferrule inside. The caps are removed when connecting the cable to a transceiver.

Both ferrules are protected with a small cap that is removed before plugging in the connector. Make sure to keep these caps in place when running the cable to help protect the delicate fiber inside.

The fiber optic cable plugs into the transceiver.

The LC-LC connector is small and clicks in place.

The LC-LC connector plugs into the optical transceiver in only one direction. The connectors click into place with a latch and cannot simply be pulled out; you will damage the cable if you try. Instead, the fiber cable is easily disconnected from the transceiver by pressing the plastic latch on the LC-LC connector.

The OM4 fiber optic cable that I used is small and flexible. It made Cat 6 cable feel stiff and bulky by comparison. At first, the fiber cable felt delicate, and I treated it like glass, but after a few mishaps, I discovered that it is actually very durable. Accidental yanking, stepping, and bending that I thought might have damaged the cable turned out to have no effect. The cable still tested at full 10GbE speeds.

DAC Cable

The main advantage of fiber optic cable is distance. A network link between two systems can be achieved at great distances not possible using copper cable.

For shorter distances, optical cable can be expensive. It requires the optic cable itself plus two transceivers. If your distances are small, DAC cable costs much less and operates at the full 10Gb/s the same as fiber.

DAC (Direct Attach Copper) cable, also described as 10GSFP+Cu, is wired copper cable with transceivers built in.

DAC cable shown here has the transceivers built into the cable. They cannot be removed.

10GbE works over fiber optic and copper cable. With SFP+ you can choose your transmission medium without the need to replace NICs. DAC cable plugs into an SFP+ port the same way an optical transceiver does. There is no need to install drivers or install special software when using DAC cable. It is only a cable, and you can treat it like any other network cable intended for SFP+. DAC cable is hot-swappable.

“Is this a crossover cable?”

No, but you can use it to connect two computers directly together without the need for a switch. It will behave the same way as a crossover cable. You can also use the same cable to connect the computer to a switch. The same DAC cable is used for both scenarios.

In the plan, this DAC cable connects computers A and B together directly by the SFP+ ports on their NICs. There is no switch involved, and it works perfectly. Just assign each NIC its own static IP address.
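For a quick test over the DAC link, temporary addresses are all that is needed. A minimal sketch, assuming the SFP+ interface shows up as enp1s0 on both machines and using made-up addresses on a private subnet (substitute your own interface names and addressing):

# On Computer A
sudo ip addr add 192.168.200.10/24 dev enp1s0
sudo ip link set enp1s0 up

# On Computer B
sudo ip addr add 192.168.200.20/24 dev enp1s0
sudo ip link set enp1s0 up

# From Computer A, confirm the link works
ping 192.168.200.20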

The purple line denotes the DAC cable. Notice there is no switch to connect Computer A and Computer B together. This works because the DAC cable can be treated as a crossover cable.

RJ-45 Transceiver

10GbE also works over the twisted pair cable that we are familiar with as long as it meets the specifications. Usually, Cat 6a and higher is what you want to use. Sure, you can use Cat 5e in a pinch for short distances, but it can be unreliable. If you do choose to run 10GbE over existing twisted pair cabling, Cat 6 (preferably Cat 6a) is considered the minimum to make it work reliably.

To connect a 10GbE NIC to RJ-45 cabling, you will need a 10GBase-T RJ-45 transceiver that plugs into the SFP+ port on the NIC.

I never went this route, so I cannot report on how well it works. Strangely, the RJ-45 transceivers are much more expensive compared to fiber optic transceivers. Using RJ-45 cabling would have made this project too expensive to test. Yes, you concluded correctly: 10GbE using fiber optics is much cheaper and more budget-friendly than trying to salvage and use existing RJ-45 and Cat 6 cabling.

Due to cost and technology, I would recommend keeping the existing twisted pair for regular gigabit Ethernet, but use fiber optic for new 10GbE links rather than trying to repurpose existing twisted pair cable runs that might be old and unreliable or no longer up to spec. Yes, it requires introducing new fiber optic cable, but for my test, fiber actually cost less than twisted pair, and it was easier to work with.

Connecting Everything

Simply install the network cards and plug in the transceivers and cables like you would with any other networking hardware. There is little difference besides the SFP+ port and type of cable. Use proper anti-static handling practices and take your time to enjoy and study the new hardware. The TEG-30284 does require some setup through its web interface to change its default IP address, but this can be accomplished beforehand. Any further configuration can be achieved after the network is up and running.
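Once everything is cabled, it is worth confirming that each link actually negotiated 10 Gb/s before moving on to benchmarks. ethtool shows this; a short sketch assuming the 10GbE interface is named enp1s0:

sudo apt install ethtool

# Speed should read 10000Mb/s and Link detected: yes
sudo ethtool enp1s0

# Many SFP+ NICs can also report details about the plugged-in transceiver or DAC cable
sudo ethtool -m enp1s0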

 

Software – Let’s Do Something Fast!

“Do I need special 10G software?”

No. Your existing software will work because 10GbE operates at the lower layers of the OSI model (physical and data link). Software, running at the application layer, is not affected, so you can use your existing software. If you like rsync and FTP, they will still work with a 10GbE network automatically. Filezilla will use the 10GbE network the same way it uses any other network.

Manually assign a static IP address to the computers if DHCP is not running. Computers A and B need static IP addresses in this example. Ping them to make sure all is working.
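Linux Mint 20 uses NetworkManager, so a persistent static address can be set from the GUI network settings or with nmcli. A sketch with assumed interface name, profile name, and address (adjust all three to your setup):

# Create a static IPv4 profile for the 10GbE interface and bring it up
sudo nmcli connection add type ethernet ifname enp1s0 con-name 10gbe ipv4.method manual ipv4.addresses 192.168.100.10/24
sudo nmcli connection up 10gbe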

iperf3

It was time to test! Using iperf3, I checked the maximum possible link speed between Computer A and Computer B. Install iperf3 on all computers you want to test. It is available for free from the repositories.

sudo apt install iperf3

SSH into Computer B (IPv4 192.168.100.50 in this example) and run,

iperf3 -s -p 8008

From Computer A, enter,

iperf3 -c 192.168.100.50 -p 8008

Replace the IP address as needed.

Underwhelming Results

Here is what my first iperf3 results looked like:

7.04 Gb/s? This is indeed faster than gigabit speeds, but we should be seeing 9+ Gb/s speeds.

I arranged two terminals vertically with the server (Computer B) at the top, and the client (Computer A) at the bottom. This helps keep the desktop organized.

10GbE is capable of more throughput than 7.04 Gb/s, so something was holding it back. I tried the same test between Computer A and Computer E.

iperf3. Computer A to Computer E resulted in the full 10GbE bandwidth.

Huh? What is going on? Repeating these tests showed the same results, with Computer B being too slow. Maybe it has something to do with Computer B?

After watching System Monitor on Computer B during iperf3 tests, all of its CPU cores were running around 90-100%. It turns out that Computer B had an A10-7860K APU, and that is too slow to handle a full 10GbE throughput. Computer E used an i7-4770, and it had no problem returning 9.41 Gb/s results. The speed of the processor matters. Upgrading to 10GbE alone will not guarantee 10GbE network transfers if the CPUs are too slow to process the higher speed.
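Watching per-core CPU usage during the test makes this kind of bottleneck easy to spot. A quick sketch; running several parallel streams is also a useful comparison point, though it does not remove a CPU limit:

# Run four parallel TCP streams from the client instead of one
iperf3 -c 192.168.100.50 -p 8008 -P 4

# In another terminal, watch per-core load while the test runs (mpstat is in the sysstat package)
mpstat -P ALL 1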

Why 9.41 Gb/s?

You will not see a full 10 Gb/s reported by iperf3 due to overhead. I confirmed these numbers with other online results, and a properly working 10GbE network should test at ~9.41 Gb/s with iperf3. Real-world file transfers carry additional protocol and storage overhead on top of that, which is why ~944 MB/s was the best transfer rate I ever saw rather than the 1250 MB/s we would expect on paper, and this is normal.

However, this is still plenty fast. By comparison, here is iperf3 with gigabit results:

iperf3 with gigabit nodes maxes out at 886 Mb/s. 10GbE is significantly faster.

iperf3 is a synthetic benchmark. I used it to ensure that 10GbE was achievable with the current setup. This does not mean that files will transfer at this rate.

With that matter solved, it was time to experiment with actual file transfers using SSH and FTP.

Underwhelming File Transfers

Using the Nemo file manager in Linux Mint 20, I logged in to Computer B from Computer A to see how fast files would transfer by drag and drop — a normal everyday task. This was compared to a gigabit transfer and link aggregation.

Here is the baseline:

Gigabit Ethernet. sftp drag and drop in Linux Mint 20 Nemo. ~116MB/s

Gigabit Ethernet with dual link aggregation. Two UTP cables connect the computers together. ~163MB/s

Normal gigabit Ethernet transfers at ~116 MB/s max over UTP. Link aggregation improves the throughput by using two UTP cables between the computers for ~163 MB/s. With these numbers, what is possible with 10GbE?

Transferring using sftp. Max: ~249 MB/s. This was the highest speed recorded (temporarily) using SSH.

Try as I might, I could not break the 245 MB/s barrier — and this is with NVMe and SATA SSD storage. Most of the time, transfers were limited to 145 MB/s over 10GbE with the A10-7860K APU and 222 MB/s with the i7-4770 CPU. The SSDs and 10GbE network are capable of higher speeds than that, so why are these numbers so low?

SSH Protocol is Slow

SSH adds a layer of encryption/decryption to the mix. In addition, I was transferring data between encrypted drives using VeraCrypt. That makes two layers of encryption/decryption to process, but it still does not explain why the numbers are so low. In fact, the i7-4770 would never exceed 145 MB/s during rsync operations despite experimenting with transfers using unencrypted drives. I double-checked the MTU on all hardware to ensure that jumbo frames were on (they were all at 9000). I double-checked the hardware configuration. I manually checked the network software. Nothing changed the 145 MB/s limit. This was perplexing.
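For reference, checking and changing the MTU on Linux is a one-liner; a sketch assuming the interface is named enp1s0 (the switch ports must allow jumbo frames too, as shown in the web interface below):

# Show the current MTU
ip link show enp1s0

# Set a 9000-byte MTU for jumbo frames (temporary until reboot; NetworkManager can make it permanent)
sudo ip link set enp1s0 mtu 9000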

The TRENDnet TEG-30284 web interface allows you to enable or disable jumbo frames. Jumbo frames were enabled.

It turns out that this is not a hardware issue. It really is a software issue. The SSH protocol itself — and any software that utilizes it — is not optimized for 10GbE. rsync is not optimized for 10GbE either. After much research, I found that I was not alone. rsync performance is limited to 145 MB/s.

I had been connecting and dragging and dropping with Nemo, which uses SSH (sftp) for its file transfers. That is why the speeds were underwhelming.

When using GbE (standard gigabit Ethernet), these speed limitations cannot be seen because GbE cannot exceed 116 MB/s, but 10GbE can, and this reveals the limitations of SSH and rsync.

FTP is Fast

Okay, if SSH is slow and the protocol is the issue, why not try a different protocol like FTP? Would that improve the file transfer speed? I tested using Filezilla to see more details during the transfer.
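The FTP server side is not covered in detail here; as one possible minimal setup on a Linux test box (an assumption, not necessarily what was used for these results), vsftpd from the repositories is enough for Filezilla to connect to with a local user account:

sudo apt install vsftpd

# In /etc/vsftpd.conf make sure these two lines are present, then restart the service:
#   local_enable=YES
#   write_enable=YES
sudo systemctl restart vsftpd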

Filezilla (Linux) transferring a 10.3 GB file from NVMe to SSD. ~397 MB/s

Filezilla (Linux) transferring 10.3 GB file again. ~411 MB/s

Filezilla (Linux) another 10.3 GB file transfer to a Samsung SSD. ~644 MB/s.

Wow! Simply by changing the protocol to FTP, file transfer speeds jumped. Even the A10-7860K was no longer limited to 145 MB/s. It could easily achieve 400+ MB/s despite being a slower processor. These results were consistent again and again. Switching to SSH resulted in slower file transfers (especially when using rsync) while FTP had no such limits.

It takes about 15 seconds to transfer a 10.3 GB file over 10GbE. The third result shown above is odd. It reports 644 MB/s, but the SSD the file is being written to only supports 520 MB/s writes. Perhaps there is buffering taking place?

One quirk of 10GbE speed measurements is that they fluctuate wildly during a file transfer. This makes it tricky to obtain consistent numbers. But the transfers were fast and completed in much less time than with GbE.

Mechanical Drives

This is where a computer’s hardware matters. I transferred files to a RAID-6 array, and it progressed in bursts. A single mechanical 7200 RPM drive slowed things down even further. The 10GbE network would transfer data at 245 MB/s and then wait for the drives to catch up. Transfer. Wait. Transfer. Wait. It was not a consistent data flow as seen with SSD and NVMe storage because mechanical drives cannot handle that level of throughput. This happened with both SSH and FTP, so it is not a protocol issue.

Saturating 10GbE with mechanical drives would be possible, but it would require many drives operating in RAID-10 or better to sustain higher write speeds.

Mechanical drives are terrible performers with 10GbE. You need SSD or NVMe to truly see improvements on a small-scale network like this.

Windows 10

Windows 10 did not recognize the 10GbE NICs out of the box. I needed to install the drivers from the included CD-ROM (remember those?), and then the 10GbE NIC appeared as a new interface, to which I then assigned a static IP address for testing.

10GbE works between Linux and Windows 10 without problems. I was limited to FTP since SSH was not set up with the Windows test system.

Windows 10 always reported odd metrics with 10GbE. Here is a 10.3 GB file being transferred via FTP. In reality, it completes in a few seconds no matter what the time remaining reads.

Windows 10 tended to report weird metrics with 10GbE even though it was also using SSDs. For example, I repeatedly transferred a 10 GB file, but Windows 10 would estimate 35 minutes for the transfer even though it completed in about 15 seconds. It also reported 1.7 to 2 GB/s transfer rates, which is completely unreasonable and beyond the 10GbE specs. Performance was mostly the same as with Linux, but the measured numbers were inaccurate when using the built-in Windows tools. Anyway, 10GbE works the same as with Linux.

Filezilla transferring a file from NVMe on Linux to SSD on Windows 10. ~502 MB/s because this is about the write speed limit of the SSD.

Gigabit and 10 Gigabit Transfers

“Can I transfer files from a computer with a 10GbE NIC to a computer with a GbE NIC?”

Yes. As long as the network addressing and gateways properly route the Ethernet frames, the two computers will talk to each other. The transfer will slow down to the slowest link, so even if you transfer a file to or from a computer with a 10GbE NIC, the file transfer will max out at gigabit speeds because the GbE NIC cannot handle anything faster than that.

944 MB/s Transfers are Rare

In theory, the realistic maximum transfer rate of 10GbE should let us transfer a file between two computers at 944 MB/s. I was expecting this after reading about the possibilities of 10GbE, but I rarely ever saw it happen in real life. Even with NVMe to NVMe over FTP, a transfer might begin at 800-944 MB/s, but it did not remain at that speed. Some transfers did reach 800 MB/s, but this was rare. The actual transfer rate would drop and stabilize over time to match the CPU and storage interface limits of the computers on both ends of the network connection.

We cannot exceed the write speed of the destination storage device. SATA-III is limited to a raw 6 Gb/s, and accounting for overhead, the usable limit is lower. And then we have the SSD itself. If it is an SSD limited to 480 MB/s write speeds, then the 10GbE transfer is limited to 480 MB/s. Measured speeds might report higher transfer rates at first, likely due to caching, but they eventually stabilize to lower rates by about the halfway point of the transfer.

Multiple Transfers Can Reach 944 MB/s

The tests so far measured single transfers between two computers at a time. 10GbE provides plenty of throughput for multiple connections. If several transfers are occurring simultaneously between multiple clients and a server, such as a media server, then those transfers can max out the 10GbE throughput. For example, two clients with SSDs each downloading at 450 MB/s can cause the 10GbE network to actually reach 900 MB/s total, provided the server can supply data at that rate.

In some cases, this does not work. rsync is an example. I tried running four simultaneous rsync connections between two computers, but instead of increasing the 10GbE usage to 580 MB/s as expected, it divided the throughput to ~36 MB/s per rsync transfer. I did not have enough 10GbE NICs to see what would happen with multiple clients, so this test was also limited to two computers.

A combination of GbE and 10GbE connections can work together. 10GbE can supply enough data to multiple GbE clients so each sees a full 116 MB/s.

More throughput is what 10GbE provides, so do not expect that every file transfer will run at 944 MB/s. In practice, most file transfers run at about 400-500 MB/s for a single file between two computers with 10GbE NICs whether they are using SSD or NVMe storage. This is plenty good and much, much faster than 116 MB/s for gigabit Ethernet.

Better Than Link Aggregation

Link aggregation also improves network throughput, but it requires multiple UTP cables and more switch ports and NICs. With 10GbE you only need a single cable whether it be Cat 6 or higher, optical, or DAC. That single 10GbE cable can achieve speeds as fast as or faster than a bundle of aggregated cables. This approach might reduce the redundancy offered by link aggregation, but you can always aggregate 10GbE if needed.

Between the two approaches, a single 10GbE link is easier to install and manage than ten UTP cables, NICs, and switch ports that could achieve about the same throughput.

Conclusion

Wow, this was fun!

The world of 10GbE is not complicated, but it introduces many new hardware pieces to learn. It also challenges existing assumptions regarding protocols, software, and how file transfers work. For example, what I thought would carry over from regular gigabit Ethernet, such as SSH, rsync, and multiple file transfers, did not perform as expected with 10GbE.

10GbE also exposed the limitations of software. For example, I had no idea that rsync was such an under-optimized performer. With GbE limited to 116 MB/s, we cannot see this, but with 10GbE, the difference is noticeable. Encryption, whether SSH or VeraCrypt, matters. Protocols matter. Hardware matters.

Upgrading from Fast Ethernet to Gigabit Ethernet is a simple matter of changing the NIC. Presto! You now have files transferring at up to 116 MB/s. But this is not the case with 10GbE. There is more to consider than swapping out a NIC.

All components in the 10GbE chain must also be fast enough to support higher transfer rates. The hardware (CPU, RAM, storage), the software (Filezilla, rsync, Nemo, scp), and the protocols used (SSH, FTP) must be fast enough to allow reading and writing up to the maximum 10GbE rates of 944 MB/s or else transfers slow down. Sure, the result might be faster than 116 MB/s, but a maximum of 145 MB/s for rsync or 222 MB/s for SSH does not justify the cost and feels like an expensive letdown instead. Hardware and software matter.

Having 10GbE does not mean your files will transfer at 10 Gb/s.

This has been a great lab experiment, and it is completely usable for small-scale networks given the hardware used. Speeds are much faster, and I never encountered any hardware issues (besides the annoyingly noisy fan in the TEG-30284). The pieces of 10GbE hardware I used were completely compatible with each other and 100% plug and play in Linux.

Hopefully, this helps provide some information to those interested in learning about 10GbE and Linux.

Have fun!

 
