Evolution of Traffic Management
How did query routing and user segmentation evolve to what it is now from its humble beginnings? And where is the future of DNS management heading?
The way admins manage their DNS traffic has changed a great deal in the last decade alone. Advanced DNS management was once a luxury reserved for large enterprise organizations that could afford their own infrastructure and a global network. With the advancement of Anycast networks, these same services have become accessible and affordable for small business owners and home users. The accuracy and efficiency of modern routing techniques have made optimal Internet performance possible for organizations of all sizes. But how did query routing and user segmentation evolve from their humble beginnings to what they are now? And where is the future of DNS management heading?
Back in the primordial days of traffic management, one of the first routing services to be offered was DNS Failover. DNS security was still in its early years, attacks were on the rise, and small businesses and home users couldn’t afford to have entire backup systems. That’s where DNS Failover came in.
This service lets users specify a backup IP address that traffic is automatically rerouted to if the primary IP goes down. Most providers offer failover as a relatively cheap add-on that typically includes free failback. Failover would later evolve to integrate with monitoring features.
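The failover behavior described above can be sketched in a few lines. This is an illustrative model only, not any provider's actual implementation; the IP addresses, port, and function names are invented, and a real provider would run the health check continuously rather than per query.

```python
import socket

# Hypothetical addresses for illustration (RFC 5737 documentation ranges).
PRIMARY_IP = "203.0.113.10"
BACKUP_IP = "203.0.113.20"

def is_healthy(ip, port=80, timeout=2.0):
    """Basic TCP health check: can we open a connection to the host?"""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def resolve(check=is_healthy):
    """Answer with the primary IP while it is up, the backup otherwise."""
    return PRIMARY_IP if check(PRIMARY_IP) else BACKUP_IP
```

Free failback falls out of this structure naturally: once the health check starts passing again, answers revert to the primary address without any manual action.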
This worst-case scenario solution has saved organizations millions of dollars over the years. But what if you want to manage your query traffic when your site isn’t down? What if you want to optimize your query routing performance?
Failover was just the beginning. Admins realized they needed to start segmenting their traffic. Why keep a single point of failure by directing clients to only one IP address? Load balancing, commonly implemented in DNS as Round Robin, allows admins to spread query traffic across multiple IP addresses on different servers. You can even assign a different weight to each address, a technique called Weighted Round Robin (WRR).
WRR is particularly useful for larger organizations with substantial amounts of traffic. For example, if one server can handle more traffic than the others, you can set up WRR to deliver more traffic to that server's IP address.
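A minimal sketch of weighted selection makes the idea concrete. The pool, weights, and addresses below are invented for illustration: the weight-3 address belongs to the beefier server and receives roughly three times the traffic of each weight-1 address.

```python
import random

# Hypothetical server pool: (IP address, weight). Higher weight means
# proportionally more of the incoming query traffic.
POOL = [
    ("198.51.100.1", 3),  # larger server, gets ~60% of queries
    ("198.51.100.2", 1),
    ("198.51.100.3", 1),
]

def pick_address(pool, rng=random):
    """Pick one IP at random, with probability proportional to its weight."""
    addresses = [ip for ip, _ in pool]
    weights = [w for _, w in pool]
    return rng.choices(addresses, weights=weights, k=1)[0]
```

Real DNS servers often rotate the order of records in the response rather than picking one, but the weighting principle is the same.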
You can also use load balancing to create extra layers of redundancy in case something goes wrong with one of your IP addresses. For example, you can combine Round Robin with DNS Failover, so if one of your addresses goes down, traffic is automatically moved over to multiple backup IP addresses.
At this stage, however, traffic could only be segmented by IP address; there was no location- or user-based segmentation.
Load balancing was just the beginning of traffic segmentation. With the Internet becoming more accessible to audiences all over the world, admins needed to be able to control traffic based on the location of their end-users.
Geographically based query routing allows admins to slash the latency end-users experience while getting their queries resolved. Global Traffic Direction (GTD) uses geographical source-based IP routing to answer end-users from within the region they queried. Queries travel a shorter distance, which means faster resolution times.
This kind of location-based routing can also be combined with both Failover and Round Robin for even more redundancy. For example, if there's a regional outage, you can use GTD to reroute traffic around the problem nodes.
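At its core, region-based routing is a lookup table from the query's source region to a region-local answer pool. The regions, addresses, and fallback below are assumptions for illustration, not any vendor's configuration format:

```python
# Hypothetical region table: map the region a query originated from to a
# pool of answers hosted in that region. Unmapped regions fall back to a
# default pool, which doubles as the reroute target during a regional outage.
REGIONAL_POOLS = {
    "eu": ["192.0.2.10", "192.0.2.11"],
    "us-east": ["198.51.100.10"],
    "apac": ["203.0.113.30"],
}
DEFAULT_POOL = ["198.51.100.10"]

def answer_for_region(region):
    """Return region-local answers so responses travel a shorter distance."""
    return REGIONAL_POOLS.get(region, DEFAULT_POOL)
```

Combining this with failover is then just a matter of removing an unhealthy region's pool from the table, so its queries fall through to the default.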
GTD evolved into an even more granular kind of routing, which narrows segmentation down to actual cities or geographical coordinates. GeoIP lookups give admins the highest level of accuracy for custom DNS configurations. These lookups also allow for proximity-based query responses, which means clients' queries are answered by the closest possible server. You can also use this technology to filter out malicious or unwanted traffic based on the IP address, location, or ASN of the querying client.
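Proximity routing and ASN filtering can be sketched together. Assume a GeoIP lookup has already turned the client's IP into coordinates and an ASN; the server locations, blocked ASN, and haversine-based distance below are all illustrative assumptions.

```python
import math

# Hypothetical server locations as (latitude, longitude).
SERVERS = {
    "lon": (51.5, -0.1),
    "nyc": (40.7, -74.0),
    "syd": (-33.9, 151.2),
}
BLOCKED_ASNS = {64500}  # invented ASN from the private-use range

def _distance_km(a, b):
    """Great-circle distance between two (lat, lon) points (haversine)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def route(client_coords, client_asn):
    """Filter blocked ASNs; otherwise answer from the closest server."""
    if client_asn in BLOCKED_ASNS:
        return None  # drop or refuse the query
    return min(SERVERS, key=lambda name: _distance_km(client_coords, SERVERS[name]))
```

A client in Paris would be answered from London, while one in Los Angeles would be answered from New York; a query from a blocked ASN is filtered before routing happens at all.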
While these are all major advances in traffic segmentation and automated routing, none of them offers a way to analyze traffic or view actual end-user behavior. Configurations are made by learning from past mistakes or issues, rather than by anticipating them with intelligent analytics.
All of these features are great for reacting to issues and taking the necessary actions to resolve latency. However, these issues still cost companies money, because the features are reactive by nature.
Performance-based routing allows users to identify performance issues that are causing end-user latency. These issues tend to originate at the enterprise provider level, with CDN services or ISPs. The more advanced platforms can then automatically reroute traffic to better-performing ISPs or CDNs to minimize latency. This kind of traffic manipulation is breaking ground for the future of automated traffic routing.
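The core decision in performance-based routing reduces to "steer traffic to the provider that is currently fastest." A minimal sketch, with invented provider names and latency samples, might use the median of recent measurements so a single outlier doesn't flip the routing decision:

```python
def best_provider(latency_samples):
    """Return the provider with the lowest median latency (ms).

    latency_samples maps a provider name to a list of recent
    round-trip measurements; all values here are hypothetical.
    """
    def median(xs):
        s = sorted(xs)
        mid = len(s) // 2
        return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2
    return min(latency_samples, key=lambda p: median(latency_samples[p]))
```

For example, a provider with samples of 30, 35, and 200 ms still beats one with a steady 51 ms, because its median (35 ms) is lower; a production platform would weigh stability as well as raw speed.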
But there is still plenty of room to grow, as enterprise-level connections are only a surface-level approach: performance-based routing segments traffic too broadly, which can actually cause issues down the road. The future lies in Internet Traffic Optimization Services (ITOS), which take a top-down approach to network monitoring and traffic routing. ITOS solutions look at actual end-user behavior and individual user connections, allowing admins to pinpoint the exact issues affecting specific users. This approach allows truly unique configurations that ensure the best possible connections for clients regardless of location, browser, ISP, or device.
The future of traffic management seeks to expand upon the ideas behind ITOS by combining this individual approach to monitoring with predictive analytics and intelligent routing.
These features allow admins to make informed decisions influenced by real-time performance metrics. Real-Time Stats (RTS) lets admins see a live graph of their incoming query traffic. You can analyze traffic by using filters such as location, record type, time frame, and domain. What makes this technology so revolutionary is that it can be used to identify traffic abnormalities and potential vulnerabilities, and even to anticipate attacks.
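Spotting a traffic abnormality in a live query stream can be as simple as comparing each interval's query rate to a trailing baseline. The thresholds and numbers below are invented for illustration; real platforms use far more sophisticated models.

```python
def spikes(rates, window=5, factor=3.0):
    """Flag indexes where the query rate jumps well above the recent baseline.

    rates is a list of queries-per-interval counts; an index is flagged
    when its rate exceeds `factor` times the mean of the previous
    `window` intervals. Both parameters are arbitrary choices here.
    """
    flagged = []
    for i in range(window, len(rates)):
        baseline = sum(rates[i - window:i]) / window
        if rates[i] > factor * baseline:
            flagged.append(i)
    return flagged
```

A sudden burst of queries against a steady baseline, the classic signature of an attack or misconfiguration, is exactly what such a check surfaces for an admin watching the live graph.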
You can also segment traffic down to the individual user level using Real-User Monitoring (RUM). Rather than guessing about end-user behaviors, you can actually see true user metrics in real time. This information can be used to identify web performance issues, network latency, and enterprise/ISP level issues. Now admins can troubleshoot for unique end-users based on where they are located, what browser they are using, or other connection factors.
Performance optimization is the future of traffic management, and it requires all traffic decisions to be informed by monitoring and analytics. Next-generation traffic management platforms are solving this by integrating monitoring with their managed DNS features. But this is only the beginning: these integrations open the door to intelligent, automated query routing based on the information gathered from monitoring services.
Published at DZone with permission of Blair McKee. See the original article here.