Why Now Is the Time to Implement Redundant DNS

Some people have the wrong idea about what redundant DNS is. A clear understanding of it is key to keeping your domains reachable when cyberattacks strike.

DNS has been plugging along for years behind the scenes of the internet, largely unnoticed, as it has performed the vital function of making it easier for everyone to get to the website of their choice. It doesn’t need acclaim, but recent events remind us that it does need attention.

Cybercriminals recently attacked a DNS provider, leading to an internet meltdown across most of the U.S. The reason the attack was so effective is that, for many enterprises, DNS is deployed in a single-threaded fashion. There is no back-up, and if something goes wrong, it represents a single point of failure – often for the company’s entire digital estate.

This attack affected major brands and prevented e-commerce and many types of cloud-based work. It is an example of how the democratization of technology makes everyone equally vulnerable. Given the state of the internet, it’s becoming more common for enterprises to deploy redundant DNS to mitigate these risks as much as possible.

Redundant DNS Is Not Disaster Recovery

Some people have the wrong idea about what redundant DNS is, so let's debunk a common misconception right now. When people hear the term "secondary DNS," they often confuse it with the notion of data center failover or disaster recovery – that the second set of name servers is a backup of sorts. Secondary DNS is neither of these, but rather a means of ensuring end users aren't left with that dreaded "Server Not Found" message, which occurs when their request for a DNS lookup goes unanswered.

Now, let's look at a typical DNS setup. A domain owner delegates a set of name servers at their registrar:

$ dig example.com ns +short
ns1.primarydnsserver.net.
ns2.primarydnsserver.net.
ns3.primarydnsserver.net.
ns4.primarydnsserver.net.


When an end user wants to get to example.com, one of these four name servers is selected, effectively at random. While there is often a good deal of redundancy built into such setups, they can be vulnerable to targeted attacks. If all four of these servers in the delegation are under duress, visitors will have a very hard time getting a DNS response and may not get one at all. Recent attacks have demonstrated this reality.
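
To get a feel for how exposed a single-provider delegation is, it can help to probe each delegated server directly. Here is a minimal shell sketch, using the hypothetical host names from the delegation above, that checks whether every name server answers within two seconds:

# Query each delegated name server directly; dig exits non-zero
# (status 9) when no reply arrives within the timeout.
for ns in ns1 ns2 ns3 ns4; do
  dig @"${ns}.primarydnsserver.net" example.com A +short +time=2 +tries=1 \
    > /dev/null || echo "${ns}.primarydnsserver.net did not respond"
done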

The Genius of Redundant DNS

$ dig example.com ns +short
ns1.primarydnsserver.net.
ns2.primarydnsserver.net.
ns3.primarydnsserver.net.
ns4.primarydnsserver.net.
ns1.secondaryforthewin.com.
ns2.secondaryforthewin.com.
ns3.secondaryforthewin.com.
ns4.secondaryforthewin.com.

Here is the beauty of redundancy. When a domain owner adds a secondary DNS provider, the pool of available name servers is enlarged and spread across two different DNS networks.

The risk of an outage-causing attack is now mitigated because there are eight available servers in the delegation across two separate DNS networks. If one network becomes slow or unresponsive, resolvers will simply try another server. While this process of timing out and retrying another option during an attack does add latency to a DNS transaction, what is crucial is that the end user will still be able to get where they intended to go.
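
This retry behavior is easy to simulate by hand. In the sketch below, the first query goes to 192.0.2.1 (a TEST-NET-1 address reserved for documentation, which will never answer), and the shell then falls back to one of the hypothetical secondary servers, much as a resolver falls back to the next server in the delegation:

# The first query times out after ~2 seconds; the || fallback then
# asks a server on the other network (hypothetical host name).
dig @192.0.2.1 example.com A +short +time=2 +tries=1 \
  || dig @ns1.secondaryforthewin.com example.com A +short +time=2 +tries=1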

Though some may have cast a pall over the idea of adding a redundant DNS service to an existing primary with their tales of woe, it is not as daunting as some would lead you to believe. Some of the largest enterprises in the world hedge their bets with this strategy, ensuring that their presence is secured across multiple providers. This is not unlike the recent trend toward diversifying an enterprise's CDN or cloud footprint, which has been gaining traction throughout the industry.

Wait. First things first. Make sure at the outset that your primary DNS provider allows zone transfers (AXFR) – this is critical. If it doesn't, you will be stuck manually pushing changes to your zones at both providers, which is a rather inconvenient and unfortunate way to go about achieving replication.
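
One quick way to check is to request a transfer yourself, again using the hypothetical primary from earlier. Note that most providers restrict AXFR to the IP addresses of their configured secondaries, so a refusal here doesn't necessarily mean transfers are unsupported:

# Request a full zone transfer from the primary. A "Transfer failed."
# response usually means your IP isn't on the allow list, not that
# the provider lacks AXFR support.
dig @ns1.primarydnsserver.net example.com AXFR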

[Illustration: a DNS change flows from the primary provider to the secondary via NOTIFY and AXFR]

The illustration shows how a user pushes a DNS change to their primary DNS provider. When this change is enacted in the primary DNS, the zone's serial number increments. This means that the secondary zone file at the other provider is now behind the times and needs to be updated. The secondary polls the primary's SOA record at set intervals and, on seeing a higher serial, pulls in the new zone via a zone transfer (AXFR); optionally, the primary can send a NOTIFY to the secondary so it doesn't have to wait for its next scheduled poll. Either way, the updated zone file is replicated in the secondary.
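
Verifying that replication is keeping up is straightforward: compare the zone serials on both sides. A quick sketch with the hypothetical providers from earlier:

# The SOA record's third field is the serial number; if the two
# outputs differ for long, the transfer pipeline needs attention.
dig @ns1.primarydnsserver.net example.com SOA +short
dig @ns1.secondaryforthewin.com example.com SOA +short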

Splitting the Traffic

Although AXFR does a fine job of serving basic DNS record types, almost every managed DNS provider has built its own way of leveraging DNS to do advanced traffic steering on top of the protocol. This advanced functionality extends beyond what's covered in the DNS RFCs, so these features and proprietary record types can't be synchronized across providers using AXFR. Running dual primaries solves this by allowing the DNS administrator to push changes to both providers while leveraging each platform's advanced functionality, albeit largely in a manual fashion. When both primaries are included in the delegation, traffic is split across them.

Inconsistent App Performance

At this point, using an API on both sides to manipulate the advanced features can provide a bit of relief in operating two separate DNS networks. Simple middleware can be written to translate intended changes so they work across both networks. However, the intricacies and breadth of each vendor's advanced features may result in inconsistent application performance from one network to the other, and you're stuck with the lowest common denominator of the functionality the two platforms offer.
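
As a very rough sketch of what such middleware boils down to – pushing the same record change to both management APIs – consider the following. The endpoints, payload shape, and tokens here are entirely hypothetical; real provider APIs each have their own schema:

# Push one record change to both providers' APIs.
# Hypothetical endpoints and payload, for illustration only.
PAYLOAD='{"type":"A","name":"www.example.com","answers":["192.0.2.10"]}'
curl -X PUT "https://api.provider-a.example/zones/example.com/records/www" \
  -H "Authorization: Bearer $PROVIDER_A_TOKEN" -d "$PAYLOAD"
curl -X PUT "https://api.provider-b.example/zones/example.com/records/www" \
  -H "Authorization: Bearer $PROVIDER_B_TOKEN" -d "$PAYLOAD"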

A New Architecture to Reduce DNS Failure

To overcome the challenges of redundant DNS while ensuring that end users get to their desired domain, the best option is to have two DNS providers that offer identical features and functionality, in tandem with complete API interoperability. Organizations can outsource the running of their own DNS to a managed DNS service, which often provides additional features that purely RFC-compliant implementations don’t have. In addition, managed DNS providers have advanced traffic routing capabilities that can address technical needs in the marketplace that didn’t exist when DNS was first created. An architecture of this kind mitigates the risk of DNS failure, downtime, and resulting business losses.
