Link Aggregation in Linux Mint 18.1 and Xubuntu 16.04

đź“… January 7, 2017
Do you have a few spare network interface cards?

Want to increase your local network throughput and handle more traffic?

Link aggregation, or bonding, is a technique that combines two or more network interface cards (NICs) into a single virtual network interface for greater throughput.

For example, two gigabit NICs result in 2 Gbps throughput. Three gigabit NICs allow 3 Gbps throughput. Four allow 4 Gbps, and so on. While these are theoretical maximum values and other factors affect network transfer rates, the point is that multiple network cards acting as a single “card” can transfer more data at a time. As an example, more users can access the same server simultaneously without seeing any noticeable drop in transfer speeds.

Linux supports link aggregation out of the box with only a few modifications. Regular, inexpensive network cards and switches can be used, so there is no need to purchase expensive, specialized hardware. This allows you to reuse existing hardware that you might already have on hand. And yes, it works well.

While link aggregation has worked in the past, newer Linux distributions tend to change a few things, so older setup techniques need revision. This is the case with Linux Mint 18.1. For details regarding the benefits of link aggregation, please have a look at the article describing link aggregation in Linux Mint 17 and Xubuntu 14.04 (July 12, 2014). The information is still relevant.

Link aggregation works well in Linux Mint 18.1, though a few changes are needed to set it up. It is easier than expected!

“Why not create a bond using the Network Manager GUI?”

In Linux Mint 18, you can easily create a network bond by going to Network Connections or Network Settings and adding a new network connection as a virtual bond.


Here is a network bond consisting of two physical network adapters in Linux Mint 18.1.

While this works, for some reason, the bonded network (named bond0) must be started manually upon each system boot. This becomes annoying over time. It would be much easier to have the bond automatically connect upon each system boot so it runs in the background.

To accomplish this, we need to do something different. So, if you have created a network bond from the Network Manager GUI, delete it and reboot to ensure all traces of it are cleared.

“Do I need special hardware?”

No. Any existing network interface card compatible with Linux should work fine.  For best results, ensure that all cards you choose to use are gigabit. After all, what is the point of bonding three 100 Mbps cards and wasting three motherboard slots when you can simply install one gigabit NIC that will outperform the three slower cards? Other than practice or necessity, this is a waste of PCI/PCIe slots, so use gigabit.

You can even use NICs containing multiple interfaces. In this example, I am still using the Syba Dual gigabit adapter that has two gigabit NICs in one while using a single PCIe slot.


The Syba IO Crest Dual Gigabit NIC is a champion performer and 100% compatible with Linux. I am still using this card, and it continues to run reliably after all these years. Plus, it is inexpensive, so multiple cards can be purchased for a few computers. It requires one PCIe x1 slot. This is the same card used in the original article.

The Steps

  1. Load the bonding module at boot
  2. Install ifenslave
  3. Edit /etc/network/interfaces

The technique described in this article was tested using Linux Mint 18.1 with kernel 4.8.16-generic and Xubuntu 16.04.

 

Preliminary Step: Disable Predictable Network Interface Names

The Predictable Network Interface Naming scheme changes well-known interface names, such as eth0, into something esoteric, like enp4s0, for improved naming consistency in systems containing multiple network cards.

Even though link aggregation will function fine with or without the naming, I prefer to disable the modern naming scheme and use the traditional eth0, eth1, eth2 for simplicity, easier readability, and compatibility with existing scripts.

This is not necessary for link aggregation, so if you prefer to use predictable names, skip this step.

To disable them, open /etc/default/grub for editing as root.

sudo gedit /etc/default/grub

In the file, add net.ifnames=0 to either GRUB_CMDLINE_LINUX_DEFAULT or GRUB_CMDLINE_LINUX. Either will work. Make sure that net.ifnames=0 is within the quotes and separated (using spaces) from other arguments that might already exist.

Here is an example:

GRUB_CMDLINE_LINUX="net.ifnames=0"
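If you use GRUB_CMDLINE_LINUX_DEFAULT instead and it already contains arguments (a stock install typically ships with "quiet splash", though your defaults may differ), simply append the option inside the existing quotes:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash net.ifnames=0"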

At the command line, enter

sudo update-grub

to update and save the changes.

Reboot. When the system is available again, enter ifconfig at the command line and check to make sure that you see the regular names, such as eth0, eth1, and so on.

 

Load the bonding module


Open the file /etc/modules as root in a text editor.

sudo gedit /etc/modules

Add bonding on a line all by itself if it does not already exist.

 

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.


bonding

 

This loads the bonding module at boot time. Linux needs this in order to perform network aggregation.
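If you would like to confirm that the module loads cleanly without waiting for a reboot, you can load it by hand and then check that it is present. Neither command makes a permanent change.

sudo modprobe bonding
lsmod | grep bonding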


Install ifenslave

This program handles the bonding. Open a terminal and enter,

sudo apt-get install ifenslave

or install it from Synaptic.
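As an optional check, you can confirm that the package installed by querying dpkg:

dpkg -s ifenslave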

 


Edit /etc/network/interfaces

As root, we need to edit the /etc/network/interfaces file and specify the network bond and its slaves. Return to the terminal and enter,

sudo gedit /etc/network/interfaces

 

You need to know the names of your installed interfaces, so open a terminal and enter ifconfig. You should see something similar to this (edited for clarity):

eth0 Link encap:Ethernet HWaddr 55:d2:84:44:44:44 
 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth1 Link encap:Ethernet HWaddr 00:0a:cd:33:33:33 
 UP BROADCAST MULTICAST MTU:1500 Metric:1

eth2 Link encap:Ethernet HWaddr 00:0a:cd:22:22:22 
 UP BROADCAST MULTICAST MTU:1500 Metric:1

lo Link encap:Local Loopback 
 inet addr:127.0.0.1 Mask:255.0.0.0
 UP LOOPBACK RUNNING MTU:65536 Metric:1

We see four adapters: eth0, eth1, and eth2 are physical network ports where eth1 and eth2 are located on the dual NIC (the first three bytes of the MAC address match), and eth0 is the motherboard’s built-in ethernet port. All are gigabit. lo is the loopback adapter. eth0 and lo are not used for bonding in this example. We will bond the two ports of the dual NIC. However, we can bond the motherboard NIC and any additional cards if desired.

The important point is to note the names of the interfaces you want to bond into one. In this case, eth1 and eth2.
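If you want to confirm that an interface really links at gigabit speed before bonding it, ethtool will report the negotiated link speed (ethtool is usually preinstalled; otherwise it is available via apt-get install ethtool). The interface name below assumes this article's setup, and a gigabit link should report Speed: 1000Mb/s.

sudo ethtool eth1 | grep Speed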

 

Back in the file /etc/network/interfaces, add these lines (customization is necessary):

auto bond0
iface bond0 inet static
address 192.168.2.10
netmask 255.255.255.0
network 192.168.2.0
gateway 0.0.0.0
bond-slaves none
bond-mode 0

auto eth1
iface eth1 inet manual
bond-master bond0

auto eth2
iface eth2 inet manual
bond-master bond0

 

This is different from the method described for earlier Linux distributions, such as Linux Mint 17. There are three parts (one for each interface), and all must be included. Make sure to keep the loopback interface intact. No need to edit that.

bond0

auto bond0
iface bond0 inet static
address 192.168.2.10
netmask 255.255.255.0
network 192.168.2.0
gateway 0.0.0.0
bond-slaves none
bond-mode 0

Start by configuring the virtual bond. Here, it is named bond0, but you can name it anything you like within reason and convention.

This sets up a static IP address, 192.168.2.10. When using the network, 192.168.2.10 will automatically transfer data across all physical cards using this single IP address. Again, change this and the other lines (netmask, network, gateway) to suit your network. This test is performed on a private LAN.

bond-slaves none

Very important! Set bond-slaves to none. Do not specify eth1 and eth2 here, or else the bond will not work, and the boot screen will time out, causing very long boot times as the network waits for slave NICs that never join. We must specify the slaves separately.

bond-mode 0

This specifies the bond mode. There are seven modes to choose from:

  • Balance-rr (0)
  • Active-Backup (1)
  • Balance XOR (2)
  • Broadcast (3)
  • 802.3ad (4) LACP (Link Aggregation Control Protocol; requires a switch that supports LACP)
  • Transmit Load Balancing (5)
  • Adaptive Load Balancing (6)

These modes allow you to customize how the network bond will behave. The default Round-Robin mode (Balance-rr) is plenty fast, and it is used here. You can use either the mode’s name or its identifying number.

bond-mode 0 is the same as bond-mode Balance-rr. Both tell Linux to use the Round-Robin mode that sends packets across all bonded NICs in sequence. For a brief introduction about the various modes, have a look at How to Configure Network Bonding on Debian Linux.
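Since either form is accepted in /etc/network/interfaces, the following two lines are interchangeable; pick whichever you find more readable:

bond-mode 0
bond-mode balance-rr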

Configure the Slaves

A slave NIC is one of the physical interfaces that make up the bond. In this case, the two physical cards represented as eth1 and eth2 are the slaves. We need to create an entry in /etc/network/interfaces for each in order to tell Linux that these cards are to be used with bond0.

auto eth1
iface eth1 inet manual
bond-master bond0

auto eth2
iface eth2 inet manual
bond-master bond0

Not much to do here. Specify each interface as manual and set each to the same bond-master. bond-master bond0 says, “This NIC is used with bond0.” You can have multiple network bonds in the same system.

 

That’s it. Save the changes and perform a quick check by bringing up bond0 at the command line.

sudo ifup bond0

If there are any errors in /etc/network/interfaces, they will appear here. Correct them, resave /etc/network/interfaces, and try again. This is mostly an error-checking step for typos in the /etc/network/interfaces file.

Reboot

With Linux, rebooting should not be required; you can bring network interfaces up and down without restarting the system. However, I encountered problems with repeated ifup and ifdown cycles during testing, and the networking became so tangled that none of the changes would take effect. Usually, you need to bring down all of the slaves in addition to the bond before making changes, but doing this too many times and in the wrong order eventually caused problems. So, reboot the system and watch for errors. The boot time should be just as quick as it normally is.

To watch for boot messages at system boot, press the DELETE key when the Linux Mint logo appears. If the system seems to hang on a line reading something like “A start job…” or “NetworkManager,” it means one of the slave NICs is not joining the bond. Usually, a network cable is unplugged, or one of the interface names has changed, say, from eth1 to eth0 between reboots.

This happened to me on a different test system.


Network interfaces mysteriously renamed between reboots. Because of that, the slaves specified in /etc/network/interfaces could not be found for the bond. /etc/network/interfaces was revised to the new names, and the bond worked upon the next reboot.

Test

Assuming the system booted fine, open a terminal and enter ifconfig. You should see something a little different if the bond is working properly.

bond0 Link encap:Ethernet HWaddr 00:0a:cd:88:88:88 
inet addr:192.168.2.10 Bcast:192.168.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1

eth0 Link encap:Ethernet HWaddr 55:d2:84:44:44:44 
 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth1 Link encap:Ethernet HWaddr 00:0a:cd:88:88:88 
 UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1

eth2 Link encap:Ethernet HWaddr 00:0a:cd:88:88:88 
 UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1

lo Link encap:Local Loopback 
 inet addr:127.0.0.1 Mask:255.0.0.0
 UP LOOPBACK RUNNING MTU:65536 Metric:1

Aha! A new interface should appear named bond0. You should see RUNNING MASTER to denote that the interface is up and that it is the master of the network bond. The slave interfaces, eth1 and eth2, will show RUNNING SLAVE. All three interfaces in the same bond will share the same MAC address. To use the bond, use its IP address (192.168.2.10 in this example) like you normally would.
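For a more detailed view, the bonding driver also exposes its status under /proc. This file reports the bonding mode, the MII link status, and each enslaved interface:

cat /proc/net/bonding/bond0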

Here is a screenshot of another test system with bonding set up. eth0 and eth1 are the two ports on the dual gigabit NIC. Naming is not always sequential.


A newly created bond on another test system. Shown on the right is the original /etc/network/interfaces file, and on the left is the first ifconfig check following the first reboot.

The network bond is usable as soon as the system boots. Unlike a bond created with the Network Manager GUI, there is no need to start it manually. Speaking of the GUI, if you check the panel’s network applet (if enabled), bond0 does not appear.


Where is bond0? Only the slave interfaces are shown in the network applet, and both read “unmanaged.”


Neither bond0 nor its slaves appear in Network Connections.

If you need to manage the bond (change the IP address, for example), then you must edit /etc/network/interfaces and restart the network interfaces of the bond.
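As a rough sketch (using the names from this article), restarting the bond from the command line looks like this. If the interfaces end up in a confused state after repeated restarts, a reboot remains the reliable fallback, as noted earlier.

sudo ifdown bond0
sudo ifup bond0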

Is It Fast?

Yes!

The speed increase is noticeable, and it is certainly faster than a single NIC. Two bonded gigabit interfaces acting as one practically doubles the throughput. If you have two 1000Mbps NICs bonded together, then you will have the equivalent of a 2000Mbps card. Three 1000Mbps NICs produce an equivalent of 3000Mbps throughput, and ten 1000Mbps should (in theory) be equivalent to a single 10Gbps card (wired or fiber optic).

Keep in mind that this is only theory until practiced for yourself. I have found that two, three, and four cards in a bond will double, triple, and quadruple synthetic test speeds and multiple user loads, but hardware becomes more of the limiting factor at that point, so speeds will vary and they tend to be lower than theory due to overhead.

As a test, I set up two test systems with bonding using one dual gigabit NIC each. I used an inexpensive gigabit switch, the TRENDnet 8-port TEG-S80g, which continues to operate reliably.

a

TRENDnet GREENnet gigabit switch. Works perfectly! Low-power and built in a metal case. Linux link aggregation works perfectly with this switch.

I performed several synthetic tests using netcat. What throughput would bmon and System Monitor show when transferring 16 GiB of zero-filled data? Reading zeros from /dev/zero keeps disks and other hardware out of the equation, so this measures the maximum the network path can handle.

The receiving system was set to listen:

nc -l 11111 > /dev/null

The sending system initiated the transfer:

dd if=/dev/zero bs=1073741824 count=16 | nc -v 192.168.2.10 11111
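If you prefer a purpose-built benchmarking tool over netcat, iperf gives comparable numbers, assuming it is installed on both systems (sudo apt-get install iperf). Run it in server mode on the receiving system,

iperf -s

and point the sending system at the bond's IP address for a 30-second test:

iperf -c 192.168.2.10 -t 30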

 


Here, we are testing the dual-slave bond between two computers configured for link aggregation. Both systems are Linux Mint 18.1. This is the sending system. bmon (left) shows 226.4 MiB/s while System Monitor (right) shows 227.5 MiB/s. The receiving system showed ~226MiB/s receiving rate.

The results were identical to the earlier link aggregation tests, so it works. Both systems successfully ping each other and transfer files. For further details, please have a look at that article.

 

Link Aggregation Notes

Keep in mind what link aggregation is NOT. The link aggregation technique described here utilizing two NICs does not mean that a single transfer will increase to 200 MBps (bytes, not bits) in the same way that a single 10Gbps card will boost transfer speeds of single files. Bonded network cards add more “lanes” through which data may travel, but all “lanes” are limited to 1000Mbps.

An analogy would be a traffic highway with a fixed speed limit. Two lanes will allow more traffic to flow than a single lane, but all lanes are still limited to the given speed limit. This type of bonding operates in a similar fashion. The advantage is for handling greater loads. If a home server has only one gigabit card and ten people on ten different devices connect and download simultaneously, then each person only has 100Mbps (maximum, assuming equal demand) available since the bandwidth must be shared. But 2000Mbps means that the same ten people now have 200Mbps of bandwidth available each.

Also, there are other factors that affect throughput. If the connecting hardware (hard drives, switches, USB devices, the CPU) cannot operate fast enough, then transfer rates will be lower than what the bond is capable of. Protocols also matter. FTP is fast, but SSH tends to be a bit slower. SMB can crawl at times if using older operating systems. EMI and network congestion are also factors.

There is more to link aggregation than bonding NICs and connecting computers. The point is that bonding definitely increases throughput over a single NIC, but do not expect the same screaming fast performance compared to what a true 10Gbps NIC can provide. 10Gbps is a really, really fast single lane. Bonding consists of multiple slower lanes…if you consider gigabit ethernet slow by comparison.

 

Conclusion

This is fun and useful! I have had very good results with link aggregation in Linux Mint 18.1 and Xubuntu 16.04, and it continues to run reliably in the background. Simply set it up and forget about it.

Now, those spare gigabit cards can be put to good use…a use with noticeable results.

Related: Speed Up Your Home Network With Link Aggregation in Linux Mint 17 and Xubuntu 14.04
