Redundant Synchronized Pi-Hole with keepalived and gravity-sync

πŸ“… March 4, 2024
“Help! When Pi-Hole goes down, nobody can access the Internet!”

Pi-Hole is a superb network-wide ad blocker: when resolving DNS requests, it blocks connections to domains listed in its blocklists. But if Pi-Hole hangs or is inadvertently turned off for whatever reason, then domain names cannot be resolved, and it seems like the Internet is down.

To help protect against this and provide some form of resiliency, we can mirror two Pi-Hole instances so if one goes down, the backup will take over, and users can still access the Internet.

This is simpler than it sounds thanks to a service called keepalived. Let’s see how to set up two Pi-Hole instances using a virtual IP address (VIP) to provide high availability (HA).

keepalived

keepalived is freely available in the Ubuntu repository, and it is a marvel for those seeking to implement a failover cluster system on a network, such as a LAN. If the main server is down, then the backup server will handle the requests using the same IP address. We can use keepalived for practically any server, but we will focus on Pi-Hole here.

Proxmox running two Pi-Hole servers (Ubuntu Server 22.04.4 each). The VIP is defined in the keepalived.conf file in each VM.

keepalived works by implementing the Virtual Router Redundancy Protocol (VRRP), which exposes a virtual IP address (VIP) to your devices. Your devices use the VIP (not the real IP addresses of the redundant servers), and keepalived routes requests to the primary real IP address. If the primary goes down, traffic is routed to the secondary real IP address. This is all handled automatically without any admin interaction, so devices only need to know the VIP.
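
To make the mechanism concrete: when keepalived promotes a node to master, it adds the VIP as an extra address on the node's network interface, which is visible in ip addr output. Here is a minimal sketch (my own helper function, not part of keepalived; the VIP shown is the one used later in this article) that checks whether the node it runs on currently holds the VIP:

```shell
# Sketch: does this node currently hold the VIP? keepalived adds the VIP as a
# secondary address on the interface when the node becomes master.
# Usage: ip -4 -o addr show | holds_vip 192.168.100.23
holds_vip() {
  if grep -qF " $1/"; then
    echo "this node holds the VIP"
  else
    echo "VIP is on the peer"
  fi
}
```

Running this on each node during a failover test shows the VIP hopping from the master to the backup and, later, back again.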

Of course, keepalived supports many different configurations for high availability (HA) and load balancing, so please check the documentation for configuration details. The scenario described above is how we will set up our two Pi-Hole servers: if the primary Pi-Hole goes down, the backup Pi-Hole will handle the requests. Our example will be very simple.

Virtual Pi-Hole Cluster

As you might have guessed, we will need to create two duplicate Pi-Hole servers. Ideally, these should run on two physically separate devices, whether real hardware or VMs on separate hosts. Here, we will create two Pi-Hole virtual machines (VMs) in Proxmox on a dedicated Quieter2Q mini PC.

Quieter2Q mini PC with 4GB RAM. Although it has eMMC storage, the eMMC is slow and cannot be upgraded. An NVMe drive can be installed for faster storage and greater capacity. It is recommended to use an NVMe and skip the eMMC.

Note: Nobody funds or sponsors this. I found a topic that I liked and wanted to share the results with others. Any links to Amazon are affiliate links to help readers locate the items for more information and to help cover the time spent researching this article since I earn a commission on qualifying purchases at no extra cost to readers.

Yes, the Quieter2Q with its J4125 CPU is plenty to run two Pi-Hole instances in Proxmox. Pi-Hole is lightweight, and each Pi-Hole VM will be running Ubuntu Server 22.04.4 LTS.

I used this device because I already had it on hand along with a 256GB NVMe drive. You definitely want to use the faster NVMe with virtualization. In fact, Proxmox will not install to the built-in eMMC storage of the Quieter2Q.

Any mini PC from other manufacturers should also work. There are newer models, such as the Quieter3Q and the newer Quieter3C and Quieter4C, which I have not tried, but all of them are more hardware than we need for Proxmox + two Pi-Hole VMs.

Caution About the Quieter2Q

Beware of the Quieter2Q/3Q. I would not use the Quieter2Q or Quieter3Q to host real-world usage Pi-Hole servers either on real hardware or as VMs on these devices because, for some inexplicable reason, they always lock up. Maybe I have an unreliable unit. I do not know, but no matter what I try, my Quieter2Q simply cannot handle running a single Pi-Hole installation for any extended length of time. Yes, it works at first, but it is not reliable. Sometimes it might freeze after one day, a week, two weeks, or who knows, but eventually my Quieter2Q will freeze and become unresponsive. It can still be pinged, but SSH and Pi-Hole no longer work. Thus, no domain resolution.

I have no idea why, despite trying various Linux distributions, VM configurations, and fresh installs. The Quieter2Q (and the Quieter3Q) always locked up for me when running Pi-Hole. Always. No other mini PC I have tried exhibits this phenomenon; only the Mele Quieter mini PCs do (the few I have tried, at least). Other mini PCs, including an actual Raspberry Pi, run Pi-Hole perfectly throughout the year without any user intervention or freezing, but the Quieter2Q consistently freezes and requires a manual reset by unplugging the power cable to make it work again.

Proxmox

Proxmox VE 8.1 installs on the Quieter2Q to the NVMe, not the eMMC. This requirement is hard-coded in the Proxmox installer. It can be bypassed by editing some Proxmox files, but doing so is tedious, and I would rather use the much faster NVMe than the slower, built-in eMMC of the Quieter2Q anyway.

“Why use Proxmox? How about VirtualBox?”

Both will work, so take your pick. It does not matter which hypervisor you use. I chose Proxmox for this example for its web interface and the ease of cloning a VM. The idea is to create one Pi-Hole VM that is updated and working the way I want it to with all blocklists, and then clone the VM. This is easier and quicker than creating two separate VMs from scratch and easier than vboxmanage.

Pi-Hole VMs

Create the first Pi-Hole VM the way you want it and make sure that it works. Once complete, clone it, and then make a few changes to the backup Pi-Hole server. (Ensure different VM MAC addresses, hostnames, and IP addresses.)

This will not be a tutorial about how to install Proxmox, Ubuntu Server, Pi-Hole, and Unbound. We will assume that two Pi-Hole servers are already up and running. Let’s cluster them.

IP Addresses

Each Pi-Hole VM must have its own static IP address. In total, four unique IP addresses will be used in this example:

  • Proxmox IP address     192.168.100.20 (the real Quieter2Q hardware)
  • pihole1 IP address     192.168.100.21 (VM)
  • pihole2 IP address     192.168.100.22 (VM)
  • Virtual IP address     192.168.100.23 (VIP)
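
A quick sanity check before going further: the four addresses must all be unique, or VRRP will misbehave in confusing ways. A throwaway snippet (addresses from the list above; substitute your own) to catch copy-paste duplicates:

```shell
# Catch copy-paste duplicates among the four addresses used in this article.
ips="192.168.100.20 192.168.100.21 192.168.100.22 192.168.100.23"
dupes=$(printf '%s\n' $ips | sort | uniq -d)
if [ -z "$dupes" ]; then
  echo "all addresses unique"
else
  echo "duplicate addresses found: $dupes"
fi
```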

Installing keepalived

Since we are running Ubuntu, simply install it from the command line. Each Pi-Hole server must have keepalived installed.

sudo apt install keepalived

The Proxmox server does not need keepalived installed. We only install keepalived on the Pi-Hole VMs.

Edit the Configuration File

Installing keepalived creates a dedicated directory for its configuration. Let’s start with pihole1 by SSHing into it.

/etc/keepalived

This directory will be empty, so we need to create a configuration file named keepalived.conf inside it. cd into the directory and create the file.

cd /etc/keepalived

sudo touch keepalived.conf

Both Pi-Hole servers will need their own configurations. It is mostly the same, but let’s start with the pihole1 VM. Take note of the pihole1 IP address. We will need that.

pihole1 keepalived.conf

sudo nano keepalived.conf

Add these lines to create a VRRP instance:

vrrp_instance pihole {
  state MASTER
  interface ens18

  unicast_src_ip 192.168.100.21
  unicast_peer {
    192.168.100.22
  }

  virtual_router_id 1 
  priority 10 
  advert_int 1

  authentication {
    auth_type PASS
    auth_pass dns12345
  }

  virtual_ipaddress {
    192.168.100.23/24
  }
}

Your exact settings might vary, so consult the documentation for details. In this example, here is what the settings mean according to the keepalived docs:

  • vrrp_instance pihole – pihole is the name of the VRRP instance. Change this to anything you like.
  • state MASTER – Specifies pihole1 VM to be the master server.
  • interface ens18 – The name of the network interface inside the VM (check yours with ip addr; it is ens18 here).
  • unicast_src_ip 192.168.100.21 – Instead of using the default multicasting, this specifies unicasting. Make sure that the unicast_src_ip is the IP address of pihole1, the current machine on which this configuration resides. This will be different for pihole2.
  • unicast_peer 192.168.100.22 – This is the IP address of the other Pi-Hole server, pihole2. Very important. (Note: I found the unicast options in another tutorial, not in the official documentation.)
  • virtual_router_id 1 – Must be the same on both Pi-Hole servers. Use whatever number you like, but they must be identical on both pihole1 and pihole2.
  • priority 10 – Sets the instance priority, which determines the master: the higher number wins. Use whatever numbers you like, but I chose 10 for pihole1; pihole2 will use a lower number, 9.
  • advert_int 1 – The advertisement interval in seconds. No, this will not bombard your server with ads or text messages in the terminal. The default is 1 second anyway, so there is no need to add this, but I included it for experimentation in the future.
  • auth_type PASS – Use a password authentication system.
  • auth_pass dns12345 – The password to use. Up to eight characters allowed. Here, I set the password to be dns12345. Use whatever you like.
  • virtual_ipaddress – And now we reach the most crucial part of this configuration. This is the VIP our devices will use for their nameservers. Pick any IP address you like that is unused and exists on the same network segment. In this case, 192.168.100.23/24 is the DNS IP address. Include the /24 CIDR subnet mask. If we need to configure a router’s DNS address, for example, we would enter 192.168.100.23 in the router. keepalived will direct the request to the master Pi-Hole VM, but if down, then the request will be sent to the backup Pi-Hole VM. Either way, we are using a single IP address to resolve domain names.
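
The state and priority fields above drive a simple election: whichever node advertises the higher priority becomes master, and VRRP breaks ties in favor of the higher IP address. A toy illustration of that rule (plain shell of my own, not keepalived code):

```shell
# Toy VRRP election: higher priority wins; on a tie, the higher IP wins.
# Usage: elect_master PRIO1 IP1 PRIO2 IP2  -> prints the winning IP
elect_master() {
  if [ "$1" -gt "$3" ]; then
    echo "$2"
  elif [ "$3" -gt "$1" ]; then
    echo "$4"
  else
    # Tie: numerically compare the dotted quads and keep the larger one.
    printf '%s\n%s\n' "$2" "$4" | sort -t . -k1,1n -k2,2n -k3,3n -k4,4n | tail -n 1
  fi
}
```

With the priorities from this article, elect_master 10 192.168.100.21 9 192.168.100.22 prints 192.168.100.21, which is why pihole1 holds the VIP as long as it is alive.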

pihole2 keepalived.conf

sudo nano keepalived.conf

For pihole2, it is the same process: SSH into it, install keepalived, cd to the /etc/keepalived directory, and create and edit its own keepalived.conf.

vrrp_instance pihole {
  state BACKUP
  interface ens18

  unicast_src_ip 192.168.100.22
  unicast_peer {
    192.168.100.21
  }

  virtual_router_id 1 
  priority 9 
  advert_int 1

  authentication {
    auth_type PASS
    auth_pass dns12345
  }

  virtual_ipaddress {
    192.168.100.23/24
  }
}

keepalived.conf on pihole2 is mostly the same, so you can copy and paste between two terminals. However, there are a few changes.

  • state BACKUP – Specifies pihole2 as the backup instance.
  • unicast_src_ip 192.168.100.22 – Switch the Pi-Hole VM IP addresses around. Use pihole2’s IP address here.
  • unicast_peer – Use the master Pi-Hole instance’s IP address, which is 192.168.100.21 in this example.
  • priority 9 – Choose a lower number for the instance priority. It must be lower than the master’s priority, which was 10 for pihole1.

Leave everything else the same. Note: I accidentally set state MASTER in both configuration files on my first attempt, and the failover still worked without any side effects. However, state MASTER and state BACKUP are the proper configurations.
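
Since only three fields differ between the two files (state, the unicast addresses, and priority), you can also generate both from a single template, which avoids copy-paste slips like a mistyped peer address. A sketch using the values from this article (gen_conf is my own helper, not a keepalived tool; copy each generated file to /etc/keepalived/keepalived.conf on its node):

```shell
#!/bin/sh
# Generate keepalived.conf for both Pi-Hole nodes from one template.
# Only state, unicast_src_ip/unicast_peer, and priority differ.
gen_conf() {  # args: state src_ip peer_ip priority outfile
  cat > "$5" <<EOF
vrrp_instance pihole {
  state $1
  interface ens18

  unicast_src_ip $2
  unicast_peer {
    $3
  }

  virtual_router_id 1
  priority $4
  advert_int 1

  authentication {
    auth_type PASS
    auth_pass dns12345
  }

  virtual_ipaddress {
    192.168.100.23/24
  }
}
EOF
}

gen_conf MASTER 192.168.100.21 192.168.100.22 10 pihole1-keepalived.conf
gen_conf BACKUP 192.168.100.22 192.168.100.21 9  pihole2-keepalived.conf
```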

Here is what keepalived.conf should look like on both Pi-Hole VMs in nano. Save the files.

Enable keepalived

In each Pi-Hole VM, enable and start the service. (systemctl enable alone only registers the service to start at boot; the --now flag also starts it immediately.)

sudo systemctl enable --now keepalived

If the configuration file is correct, there should be no errors, and the prompt will simply return. To check whether keepalived is running, enter this in each Pi-Hole VM:

systemctl status keepalived

Viewing Pi-Hole Web Interface

Log into each Pi-Hole instance directly using a web browser.

Pi-Hole web interface. pihole1 (left) compared with pihole2 (right). Both Pi-Hole VMs are up and running as virtual machines on a Quieter2Q using Proxmox. Notice the difference in blocked domains?

The two Pi-Hole servers were left running (the Quieter2Q did not crash during this time), and shown is an example of test traffic recorded over the course of 24 hours. pihole1 shows the most activity because it is the master VRRP instance. It never failed, so the backup Pi-Hole never had a chance to block any domains.

Testing the VIP

What we see above is nothing special. All we did was log into each Pi-Hole directly to view the interface, but the router was set up to use the virtual IP address (VIP). In other words, we set the router to use the IP address 192.168.100.23 for the nameserver, not either of the real IP addresses listed earlier.

Let’s see what happens when we use the VIP (192.168.100.23) in a web browser. Remember, there is no real hardware server with this IP address.

It works! By entering the VIP, 192.168.100.23, we access pihole1.

Take note of the hostname shown in the upper right corner of the Pi-Hole interface. It reads pihole1. Its IP address is 192.168.100.21, but that is not what we used. We used a VIP, and keepalived ensures that we access the master.

Testing a Downed Master

What happens if the master Pi-Hole instance, pihole1, goes down? Let’s test this by shutting down the pihole1 VM. Open Proxmox, and shut down pihole1.

pihole1 is powered off while pihole2 continues to run.

Now, try to log into the pihole1 web interface using its real IP address 192.168.100.21, not the VIP yet.

pihole1 is down, but pihole2 is still running.

This proves that the master Pi-Hole server is offline. Notice the blocked domains in pihole2? It is now doing the blocking due to the failover. What happens if we try to access Pi-Hole using the VIP?

pihole2 is now blocking ads. The switchover was automatic and instant.

Look in the upper right corner again. Now, the hostname reads pihole2, which means the failover is working. pihole1 went down, so pihole2 automatically took over to handle DNS requests. We can see that some domains were blocked, so we still have Pi-Hole functionality without ruining the web browsing experience for others.

Let’s reverse the servers. In Proxmox, start the pihole1 VM and shut down the pihole2 VM.

pihole1 is back online, and pihole2 is shut down.

Same situation as before, but reversed. pihole1 is up, and pihole2 is down when attempting to access each Pi-Hole web interface via its real IP address. However…

…when accessed using the VIP (192.168.100.23), we have access to pihole1’s web interface. Web access is still possible, and domain name resolution continues unhindered.

“What if both Pi-Hole servers are down?”

Then we are back to the same problem as before: no DNS resolution, and people will complain about the Internet not working.

“Then, what good is this approach?”

It increases availability. With one Pi-Hole server, there is guaranteed to be no access if it goes down, but with two redundant Pi-Hole servers running, if one goes down, the other will continue to function until the problem can be fixed. At least, that is the plan. Redundancy is a good thing, but it requires twice the number of servers in operation.

A Fatal Flaw

“If Proxmox freezes up, both Pi-Hole servers will go down.”

Yes, that is 100% true, and this is why, ideally, you want to host both Pi-Hole instances on physically separate hardware. In my case, this happened regularly, which is why I warned about using a Quieter2Q earlier. After a day or two of continuous power on, the Quieter2Q froze up and made both Pi-Hole VMs unresponsive. Not even the Proxmox web interface was working. However, I could still ping Proxmox and both VMs even though an SSH login failed. A manual reset by unplugging the power supply was required. I have only experienced this issue on the Quieter mini PC line.

This article is a demonstration of how keepalived would be set up for redundant Pi-Hole servers. In practice, use two separate mini PCs. DNS is vital to smooth web browsing, so avoid the pitfall of hosting both Pi-Holes on a single physical device; separate hardware ensures that a single hardware failure cannot bring down both VMs.
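
One lesson from the freezes above: a hung host can still answer ping while DNS is dead, so monitor the service, not the host. A minimal sketch (my own helper; assumes the dig utility from dnsutils is installed, and pi-hole.net is just a known-good name to resolve) that reports whether a given resolver is actually answering:

```shell
# Probe a DNS server directly instead of pinging the host: a frozen Pi-Hole
# box can still answer ping while name resolution is dead.
# Usage: check_dns SERVER_IP  -> prints "up" or "down"
check_dns() {
  if dig @"$1" +time=2 +tries=1 pi-hole.net > /dev/null 2>&1; then
    echo up
  else
    echo down
  fi
}
```

Running this against the VIP from cron on a third machine, and alerting when it prints down, would catch the scenario where both VMs have frozen.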

Synchronizing Pi-Hole

Now that we have our redundant cluster in all its glory, how do we keep both instances updated with the same blocklists and other gravity data?

Pi-Hole does not have this capability built in, so we will use the excellent utility called gravity-sync to ensure that any Pi-Hole blocking changes made on one server are reflected on the other. This way, no matter which Pi-Hole server handles the DNS resolution, it will use the same blocking info whether it be a blocklist or custom blacklist or whitelist.

Installing gravity-sync

The original documentation explains the process well, so there is no need to repeat it here. We will only show how to set up our Pi-Hole VMs for two-way synchronization that keeps both Pi-Holes in sync with each other.

Before we begin, ensure that both Pi-Hole VMs are up and running.

Step 1. Install

Install gravity-sync on both Pi-Hole servers. Log into pihole1 and pihole2, and run this command in each terminal:

pihole1:
curl -sSL https://raw.githubusercontent.com/vmstan/gs-install/main/gs-install.sh | bash

pihole2:
curl -sSL https://raw.githubusercontent.com/vmstan/gs-install/main/gs-install.sh | bash

It is the same command for both systems. During the setup process, you will be asked for a remote Pi-Hole IP address. This is the IP address of the OTHER Pi-Hole, not the one you are installing onto.

pihole1 (IPv4: 192.168.100.21)

Enter: 192.168.100.22 for pihole2

pihole2 (IPv4: 192.168.100.22)

Enter: 192.168.100.21 for pihole1

When the gravity-sync installer asks for an IP address, enter the IP address of the other, remote Pi-Hole server, not the server you are installing gravity-sync onto.

Each of my Pi-Hole VMs has only one administrator user, so this is not a problem here, but if your systems have multiple users, the user running gravity-sync must have sudo privileges.

If you make a mistake during setup, do not despair! Just enter,

gravity-sync config

to reconfigure gravity-sync and make the changes.

Type the humorous text phrase (chosen at random) to tell gravity-sync that you really, really want to change its settings.

Step 2. Initial Sync

I experienced no issues. gravity-sync was immediately available on both Pi-Hole systems following installation. I found no need to create keys or enter any extra commands to make SSH work.

Once gravity-sync has been installed to both systems, you will have probably seen this message at the end of each installation:

We need to perform an initial sync between the two Pi-Holes.

This makes sense. One Pi-Hole will probably contain more blocking information than the other, especially if you have been using it. We take the data from the authoritative Pi-Hole (pihole1, in this case), the one holding the gravity data we want, and copy it to the other Pi-Hole (pihole2, in this case).

In pihole1, the Pi-Hole VM containing the gravity data we want to mirror, enter this to push the data to pihole2:

gravity-sync push

Gravity data will be copied from the authoritative Pi-Hole to the other Pi-Hole we wish to be identical. Perform a push on only one Pi-Hole.

It is also possible to check if the two Pi-Hole VMs are in sync with each other using,

gravity-sync compare

In this case (before synchronizing), the two Pi-Hole VMs contained different gravity data.

Step 3. Automating Synchronization

We can use push and pull to synchronize the Pi-Hole gravity data between the two servers, but this is a manual operation. Let’s automate it so the synchronization happens behind the scenes without our involvement.

gravity-sync auto

Run gravity-sync auto on both Pi-Hole VMs.

By default, synchronization occurs once about every five minutes — with a little random time thrown in to avoid exact update times. This can result in high read/write wear on an SD card if you are using actual Raspberry Pi hardware, so be aware. We are running this on NVMe in a Quieter2Q mini PC, so this is not an issue.

We can change the sync interval to every hour, half hour, or 15 minutes.

gravity-sync auto hour

gravity-sync auto half

gravity-sync auto quad

Testing the Synchronization

Now, we have high-availability Pi-Holes that should auto-synchronize their gravity data with each other. If a domain is added to a blocklist in one, then that same domain should be added to the blocklist of the other Pi-Hole. Let’s test this by adding a domain to the blacklist of pihole1. It can be anything that your devices can access. I chose ipleak.net for testing since I know that site is accessible from any device on the network. The goal is to verify that the site is blocked once the gravity data is mirrored.

In pihole1 (left), we added ipleak.net as a domain to blacklist. pihole2 (right) does not have this domain added to the blacklist yet. Will automatic synchronization work?

After five or six minutes (if using gravity-sync auto), pihole2 reflected the change.

If pihole1 failed and pihole2 became the failover system, it would block the same domains as pihole1. This is good. How about the other direction? I removed ipleak.net from the blacklist and experimented with iplocation.com since that site is also accessible.

iplocation.com was added to the blacklist of pihole2. It does not exist in pihole1 yet, so devices can still access it.

After a few minutes…

Hmm. That did not work. We should have seen iplocation.com appear for pihole1 on the left. Not sure why, but I will assume that it might be my configuration.

After trying,

pihole1: gravity-sync pull

pihole2: gravity-sync push

pihole2: gravity-sync compare
pihole1: gravity-sync compare

I could not get the iplocation.com domain to appear in pihole1’s blacklist.

gravity-sync compare reports whether differences exist, but here it found a perfect match between the two Pi-Hole VMs. Odd.

Any changes made on pihole2 should be pushed to pihole1 after about five minutes, but this was not working. I shut down pihole1 and revisited iplocation.com, and, sure enough, pihole2 took over and blocked iplocation.com. (The failover works great!) So, the pihole2 blacklist was not being synchronized; somehow, neither gravity-sync instance was detecting the difference.

I most likely overlooked something, so this is not to say that gravity-sync is flawed. Any changes made on pihole1 are definitely mirrored to pihole2, and this arrangement has performed brilliantly over time. I powered up pihole1, and…

…it works! pihole1 automatically synchronized with pihole2.

Apparently, this was an issue on my part. The mirroring functioned normally after pihole1 rebooted. Success! Now, any gravity changes made to one Pi-Hole are reflected on the other Pi-Hole. Purrrfect! Rebooting is something to try if you encounter issues too.

Conclusion

I am extremely happy with this configuration of a redundant Pi-Hole with auto-syncing capabilities. pihole1 functions as the primary DNS resolver according to keepalived, and when it goes down, pihole2 automatically takes over with whatever changes were made to pihole1 thanks to the automatic syncing of gravity lists by gravity-sync.

Have fun!
