A multi-CDN service, which combines several CDN providers into a single delivery network, is a common and effective way to speed up your web applications for users anywhere in the world. The strategy also improves failover: if one of the CDNs you're using goes down, traffic can be routed to another.
But even a multi-CDN service is not immune to performance issues. Intermittent performance drops can and do happen with this architecture. The likely culprit is a familiar but often ignored one: DNS.
First, let’s break down what typically happens in a multi-CDN setup:
- When we access any domain (e.g., tiqcdn.com), the browser first checks its local cache for the DNS records.
- If the records are not cached, a query is typically sent to the ISP's recursive DNS resolver.
- If the resolver's cache does not have the answer either, it reaches out to a root server.
- The root server refers the resolver to a gTLD server (.com in this case). Unless the resolver already has a valid cached answer (this depends on the TTL), the gTLD server then returns the name server (NS) records of the queried domain, as shown below:
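The lookup chain above can be sketched as a small simulation. This is a simplified toy model, not a real resolver: the name server hostnames and the 300-second TTL are illustrative assumptions, and the root/gTLD hops are collapsed into a single "authoritative" step.

```python
# Toy model of the recursive lookup chain described above.
# The NS names and TTL below are illustrative, not real data.

AUTHORITATIVE = {
    # What the gTLD server would ultimately hand back: the domain's NS records.
    "tiqcdn.com": {"type": "NS",
                   "value": ["ns1.example-gtm.net", "ns2.example-gtm.net"],
                   "ttl": 300},
}

cache = {}  # resolver cache: name -> (record, expires_at)

def resolve(name, now):
    """Walk the chain: cache first, then root -> gTLD -> authoritative.

    Returns (record, source) where source tells us which step answered.
    """
    # 1. Check the cache and honor the TTL.
    if name in cache:
        record, expires_at = cache[name]
        if now < expires_at:
            return record, "cache"
        del cache[name]  # record expired per its TTL
    # 2. Cache miss: the resolver performs the full recursive lookup.
    record = AUTHORITATIVE[name]
    cache[name] = (record, now + record["ttl"])
    return record, "gtld"

_, source = resolve("tiqcdn.com", now=0)
assert source == "gtld"    # first lookup walks the full chain
_, source = resolve("tiqcdn.com", now=100)
assert source == "cache"   # within the TTL: answered from cache
_, source = resolve("tiqcdn.com", now=400)
assert source == "gtld"    # TTL expired: full chain again
```

The TTL is what makes the multi-CDN case interesting: short TTLs let the multi-CDN provider steer traffic between CDNs quickly, but they also force resolvers to repeat this chain more often.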
Now let’s take a look at Scenario 2, where the multi-CDN provider redirects the request to a different CDN, Highwinds.
The DNS resolver again has to go through the process of resolving this hostname. In this case, however, we noticed that the Highwinds name servers failed to respond:
At Catchpoint, we have the option of failing a test at this point, when all the available name servers fail to respond. In the real world, the resolver retries a server that does not answer, so users may never see an outright failure, but performance takes a serious hit.
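Why does a silent retry hurt so much? Each unresponsive name server costs a full query timeout before the resolver moves on. The sketch below illustrates this with an assumed 5-second per-attempt timeout and made-up server names; real timeout and retry policies vary by resolver.

```python
# Sketch of why a failing name server hurts performance even when the
# user never sees an error: each dead server burns a full timeout.
# TIMEOUT_S and the server names are illustrative assumptions.

TIMEOUT_S = 5.0  # assumed per-attempt query timeout

def lookup_latency(servers, responsive):
    """Return (total_seconds, success) for trying servers in order.

    servers    -- name servers tried one after another
    responsive -- the subset of servers that actually answer
    """
    elapsed = 0.0
    for server in servers:
        if server in responsive:
            return elapsed + 0.05, True   # ~50 ms for a healthy answer
        elapsed += TIMEOUT_S              # dead server: wait out the timeout
    return elapsed, False                 # every server failed

ns = ["ns1.cdn-a.net", "ns2.cdn-a.net", "ns1.cdn-b.net"]

# Healthy case: the first server answers almost immediately.
t, ok = lookup_latency(ns, responsive={"ns1.cdn-a.net"})
assert ok and t < 0.1

# Degraded case: two dead servers before one answers -> a multi-second stall.
t, ok = lookup_latency(ns, responsive={"ns1.cdn-b.net"})
assert ok and t >= 2 * TIMEOUT_S
```

The degraded case succeeds eventually, which is exactly why application-level monitoring alone can miss it: the page loads, just many seconds late.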
To summarize, a multi-CDN service can make your web applications faster and more reliable but is still not failsafe. Monitoring your application alone is simply not enough; DNS monitoring remains crucial.