Network Connectivity in Azure
Microsoft Azure is a general-purpose public cloud that provides compute, storage, connectivity and services. The pace of innovation in each of these areas is accelerating, making it harder (in a good way) to keep abreast of the latest developments. The last few months have brought significant enhancements to the connectivity feature set for Azure. Indeed, in its most recent Magic Quadrant reports Gartner made “Microsoft the only public cloud vendor to be named a Leader for both PaaS and IaaS.” This post is a brief overview of the current state of network connectivity for Azure VNETs and cloud services, with current meaning early June 2014.
A cloud service is the organizational container into which Azure compute instances are deployed. On creation, a cloud service is permanently associated with a DNS name of the form myCloudServiceName.cloudapp.net and a location which is one of: region, affinity group, or VNET.
Geographies and Regions
Microsoft has deployed Azure into datacenters across the globe. These datacenters are not directly exposed to customers. Instead, customers deploy applications to regions each of which may encompass more than one underlying datacenter. Azure currently provides the following regions:
- East US
- West US
- North Central US
- South Central US
- Brazil South
- North Europe
- West Europe
- East Asia
- Southeast Asia
- Japan East
- Japan West
- China North (via 21Vianet)
- China South (via 21Vianet)
An affinity group is a named part of a region into which related compute and storage services can be deployed. Historically, this co-location lowered the latency between compute and storage. The introduction of the high-speed Generation 2 network in 2012 meant that deploying compute and storage into an affinity group no longer provides a latency advantage. Furthermore, the use of an affinity group added complexity since there was no easy way to migrate either compute or storage from one affinity group to another. One limitation is that deployments in an affinity group cannot access new compute features, such as high-CPU compute instances, that are not provided to that affinity group. Access to new compute features can require the creation of a new affinity group followed by migration of cloud services into it.
Affinity Group VNET
The first version of Azure VNETs was built on top of affinity groups in that a VNET had to be created in an affinity group. This means that Affinity Group VNETs are subject to the same constraint with regard to new compute features that the underlying affinity group exhibits.
An Affinity Group VNET can host both PaaS and IaaS cloud services, and provides the following features to them:
- Azure Load Balancer
- Static IP Addresses
- VPN Gateway
These are described later in the post.
Regional VNET
Microsoft introduced the Regional VNET at TechEd NA 2014. As its name indicates, a Regional VNET is associated with a region and provides access to any of the cloud service compute features provided in that region. Many of the new connectivity features of Azure work only in a Regional VNET and are not available in Affinity Group VNETs. It is not possible to convert an Affinity Group VNET into a Regional VNET, although at some point all existing VNETs will be upgraded to Regional VNETs.
A Regional VNET can host both PaaS and IaaS cloud services, and provides the following features to them:
- Azure Load Balancer
- Internal Load Balancer
- Reserved IP Addresses
- Instance-Level Public IP Addresses
- Static IP Addresses
- VPN Gateway
Currently, Regional VNETs cannot be created directly through the Azure Portal UI. However, a Regional VNET can be created by importing an appropriate network configuration file into the portal. The schema for this file is identical to that of a traditional Affinity Group VNET, with the exception that the AffinityGroup attribute naming the affinity group is replaced by a Location attribute specifying the region to host the VNET. For example, the following network configuration can be imported to create a Regional VNET with three subnets:
<NetworkConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/ServiceHosting/2011/07/NetworkConfiguration">
  <VirtualNetworkConfiguration>
    <Dns />
    <VirtualNetworkSites>
      <VirtualNetworkSite name="AtlanticVNET" Location="East US">
        <AddressSpace>
          <AddressPrefix>10.0.0.0/8</AddressPrefix>
        </AddressSpace>
        <Subnets>
          <Subnet name="FrontEnd">
            <AddressPrefix>10.0.0.0/16</AddressPrefix>
          </Subnet>
          <Subnet name="BackEnd">
            <AddressPrefix>10.1.0.0/16</AddressPrefix>
          </Subnet>
          <Subnet name="Static">
            <AddressPrefix>10.2.0.0/16</AddressPrefix>
          </Subnet>
        </Subnets>
      </VirtualNetworkSite>
    </VirtualNetworkSites>
  </VirtualNetworkConfiguration>
</NetworkConfiguration>
The Get-AzureVNetConfig PowerShell cmdlet can be used to download the current network configuration for a subscription. This file can be modified and the new configuration uploaded using the Set-AzureVNetConfig PowerShell cmdlet. Note that there is a single network configuration file per subscription, containing the definition of all VNETs created in that subscription.
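As a sketch, the round trip looks like this (the file path is illustrative):

```powershell
# Export the current network configuration for the subscription
Get-AzureVNetConfig -ExportToFile "C:\config\NetworkConfig.xml"

# ... edit the exported file, e.g. to add a Regional VNET ...

# Upload the modified configuration; this replaces the entire network
# configuration for the subscription, so the file must still contain
# every VNET that should continue to exist
Set-AzureVNetConfig -ConfigurationPath "C:\config\NetworkConfig.xml"
```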
VIP for Cloud Services
A cloud service is permanently associated with a DNS name of the form myCloudServiceName.cloudapp.net. However, a single public VIP address is associated with the cloud service only while there is an IaaS VM or PaaS instance deployed into it. This VIP does not change as long as a VM or instance remains deployed into the cloud service, but the VIP is lost when the last VM or instance in the cloud service is deleted. This means that an A record can be used to map a vanity URL (e.g., mydomain.com) to a cloud service VIP as long as care is taken never to completely delete all the VMs or instances in the cloud service. A CNAME record can always be used to map a vanity URL to the DNS name permanently associated with the cloud service.
Reserved VIPs for Cloud Services
Azure now supports the ability to reserve a public VIP for a subscription. The address is issued by Azure; it is not possible to request a specific IP address. A reserved IP address is associated with a single region. A reserved IP can be configured to be the VIP for both IaaS and PaaS cloud services. Reserved IP addresses are a billable feature. Note that reserved IP addresses can be used with cloud services deployed into a Regional VNET or a region, but not with cloud services hosted in an Affinity Group VNET. There is a soft limit of 5 reserved IP addresses per subscription, but this limit can be increased on request.
Currently it is not possible to configure a Reserved IP through the Azure Portal. Using PowerShell, an IP address reservation can be requested for and removed from a subscription as follows:
New-AzureReservedIP -ReservedIPName "anIPName" -Label "aLabel" -Location "West US"
Remove-AzureReservedIP -ReservedIPName "anIPName" -Force
The Reserved IP addresses in a subscription can be retrieved using the Get-AzureReservedIP PowerShell cmdlet.
Currently, a Reserved IP address can be associated with an IaaS cloud service only when a new deployment is made (i.e., the first VM is deployed into it). A Reserved IP address is associated with an IaaS cloud service using the New-AzureVM PowerShell cmdlet, as follows:
$vmConfiguration | New-AzureVM -ServiceName "anIaaSService" -ReservedIPName "anIPName" -Location "West US"
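For context, the $vmConfiguration object can be built up as follows. This is an illustrative sketch: the VM name, image, size, and credentials are placeholders.

```powershell
# Build a VM configuration (image name and credentials are placeholders)
$vmConfiguration = New-AzureVMConfig -Name "myVM" -InstanceSize Small `
        -ImageName $imageName |
    Add-AzureProvisioningConfig -Windows -AdminUsername "azureAdmin" `
        -Password $password

# Create the cloud service and VM, attaching the reserved IP to the
# new deployment (only possible when the first VM is deployed)
$vmConfiguration | New-AzureVM -ServiceName "anIaaSService" `
    -ReservedIPName "anIPName" -Location "West US"
```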
A Reserved IP address is associated with a PaaS cloud service by adding a ReservedIPs tag to the AddressAssignments section of the service configuration for the service:
<ServiceConfiguration serviceName="aCloudService">
  <Role> … </Role>
  <NetworkConfiguration>
    <AddressAssignments>
      <ReservedIPs>
        <ReservedIP name="anIPName"/>
      </ReservedIPs>
    </AddressAssignments>
  </NetworkConfiguration>
</ServiceConfiguration>
Azure Load Balancer
The VIPs associated with a cloud service are hosted by the Azure Load Balancer which analyses traffic arriving at the VIP and then forwards traffic to the appropriate VM depending on the endpoint declarations for the VMs in the cloud service. The Azure Load Balancer only forwards TCP and UDP traffic to the VMs in a cloud service. Note that this means that it is not possible to ping an Azure VM through the Azure Load Balancer. The Azure Load Balancer supports port forwarding and hash-based load-balancing for both PaaS and IaaS cloud services. The official Azure Team Blog has a post with an extensive discussion of the Azure Load Balancer.
In Port Forwarding, the Azure Load Balancer exposes different ports for the same service on different VMs. It then forwards traffic received on a specific port to the appropriate VM. This is used to expose services such as RDP and SSH, which need to target a specific VM.
In hash-based load balancing, the Azure Load Balancer makes a hash of source IP, source port, destination IP, destination port and protocol and uses the hash to select the destination VM. Hash-based load balancing is used to distribute traffic among a set of stateless VMs, any of which can provide the desired service – e.g., web servers. Note that hash-based load balancing is often erroneously described as “round robin.”
For a PaaS cloud service, the Azure Load Balancer is configured through the endpoint declaration for the roles in the Service Definition file. An input endpoint is used to configure hash-based load balancing. An instance input endpoint is used to configure port forwarding. For IaaS cloud services, the hash-based load balancer is configured through the addition of VMs to a load balanced set while port forwarding is configured directly.
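As an illustrative sketch, the endpoint declarations in a Service Definition (.csdef) file look like this; the role name, endpoint names, and ports are placeholders:

```xml
<WorkerRole name="aRole" vmsize="Small">
  <Endpoints>
    <!-- Input endpoint: hash-based load balancing across all instances -->
    <InputEndpoint name="HttpIn" protocol="tcp" port="80" localPort="80" />
    <!-- Instance input endpoint: port forwarding, each instance is
         assigned its own public port from the configured range -->
    <InstanceInputEndpoint name="InstanceRdp" protocol="tcp" localPort="3389">
      <AllocatePublicPortFrom>
        <FixedPortRange min="50000" max="50099" />
      </AllocatePublicPortFrom>
    </InstanceInputEndpoint>
    <!-- Internal endpoint: addressable only by other VMs -->
    <InternalEndpoint name="InternalTcp" protocol="tcp" />
  </Endpoints>
</WorkerRole>
```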
The Azure Load Balancer routes traffic only to VMs it identifies as healthy. It uses a health probe to periodically ping the VMs it routes traffic to and, from the response or lack thereof, identifies their health status. Both IaaS and PaaS cloud services support custom health probes. If a custom health probe is not provided for a PaaS cloud service, the Azure Load Balancer pings the Azure Agent on each instance, which responds with a healthy state while the instance is in the Ready state.
Custom health probes are configured by providing either a TCP or HTTP endpoint. The Azure Load Balancer pings the endpoint and identifies a healthy state by either a TCP ACK or an HTTP 200 OK. By default, the ping occurs every 15 seconds and a VM is deemed unhealthy if an appropriate response is not received for 31 seconds. An application is free to provide its own algorithm when deciding how to respond to the ping.
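As a sketch of configuring a load-balanced endpoint with a custom HTTP probe on an existing IaaS VM (the service, VM, set, and probe path names are illustrative):

```powershell
# Add a load-balanced endpoint with an HTTP health probe to a VM;
# repeat for each VM that should join the load-balanced set
Get-AzureVM -ServiceName "anIaaSService" -Name "myVM1" |
    Add-AzureEndpoint -Name "HttpIn" -Protocol tcp `
        -LocalPort 80 -PublicPort 80 `
        -LBSetName "webFarm" `
        -ProbeProtocol http -ProbePort 80 -ProbePath "/healthcheck" |
    Update-AzureVM
```

The application can implement whatever logic it likes behind the probe path; returning anything other than HTTP 200 OK causes the load balancer to stop routing traffic to that VM.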
Instance-Level Public IP Addresses
Each cloud service containing a deployment has a single VIP associated with it. Azure also supports the association of a public IP address with individual VMs and instances of cloud services deployed to a Regional VNET. An Instance-Level Public IP Address (PIP) can be used to support services like passive FTP which require the opening of large port ranges which is not possible with a cloud service VIP. Currently, there is a soft limit of two PIPs per subscription.
Traffic directed to a PIP does not go through the standard Azure Load Balancer and is instead forwarded directly to the VM. This means that the VM is exposed to the internet so care should be taken that a firewall is configured appropriately. The Azure Load Balancer permits only TCP and UDP traffic to reach a VM. However, ICMP traffic (i.e., ping) can be sent successfully to a VM with an assigned PIP.
A PIP is associated with an IaaS VM using the Set-AzurePublicIP PowerShell cmdlet. This modifies a VM configuration which can then be used with New-AzureVM or Update-AzureVM to create or update a VM respectively. The Remove-AzurePublicIP is used to remove a PIP from a VM configuration which must then be applied to the VM with Update-AzureVM.
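A sketch of assigning and removing a PIP on an existing IaaS VM (the service, VM, and PIP names are illustrative):

```powershell
# Associate an instance-level public IP with an existing VM
Get-AzureVM -ServiceName "anIaaSService" -Name "myVM1" |
    Set-AzurePublicIP -PublicIPName "myPIP" |
    Update-AzureVM

# Remove the instance-level public IP again
Get-AzureVM -ServiceName "anIaaSService" -Name "myVM1" |
    Remove-AzurePublicIP |
    Update-AzureVM
```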
A PIP is associated with a PaaS cloud service by adding a PublicIPs tag to the AddressAssignments section of the service configuration for the service:
<ServiceConfiguration serviceName="aCloudService"> <Role> … </Role> <NetworkConfiguration> <AddressAssignments> <PublicIPs> <PublicIP name="aPublicIPName"/> </PublicIPs> </AddressAssignments> </NetworkConfiguration> </ServiceConfiguration>
The actual PIPs assigned to the VMs in a cloud service are retrieved using the Get-AzureRole PowerShell cmdlet, as follows:
Get-AzureRole -ServiceName "aCloudService" -Slot Production
DIPs for Cloud Service VMs and Instances
Azure automatically assigns dynamic IP addresses (DIP) to VMs in IaaS and PaaS cloud services. A DIP remains associated with the VM as long as it is allocated. When an IaaS VM is de-allocated the DIP is given up and may be allocated to a new VM.
The DIP allocated to a VM depends on whether or not it resides in a VNET. If the VM is not in a VNET then the DIP is allocated from a DIP range managed by Azure. If the VM is in a VNET then the DIP is allocated (sequentially) from the DIP range configured for the subnet in which it is deployed. The DIP is allocated through a special type of DHCP which provides an essentially infinite lease on it.
A VM in a PaaS cloud service keeps the DIP for as long as it is deployed. A VM in an IaaS cloud service keeps the DIP while it is allocated, but it loses it when it is de-allocated. In this case the DIP may be allocated to a different VM and the original VM may get a different DIP when it is once again allocated. Note that the VM preserves the DIP even if it is migrated to a new physical server as part of a server-healing operation.
It is crucial that no change is made to the NIC configuration on a VM. Any such change is lost if the VM is ever redeployed as part of a server-healing operation.
Static IPs for IaaS Cloud Service VMs
Azure supports the allocation of static IPs to VMs deployed to an IaaS cloud service in a VNET. This is useful for VMs providing services such as Domain Controller and DNS. The general guidance is that static IP addresses should be used only when specifically needed.
When both IaaS and PaaS cloud services are deployed in a VNET, care must be taken that there is no overlap between the IP addresses allocated to PaaS instances and the static IP addresses used in the IaaS cloud service. An overlap creates the possibility that a PaaS instance is allocated a DIP that is associated with a currently de-allocated IaaS VM. This is best managed by allocating static IP addresses from a subnet containing only static IP addresses.
A static IP address is associated with an IaaS VM using the Set-AzureStaticVNetIP PowerShell cmdlet. This modifies a VM configuration which can then be used with New-AzureVM or Update-AzureVM to create or update a VM respectively. Remove-AzureStaticVNetIP is used to remove a static IP address from a VM configuration, which must then be applied to the VM with Update-AzureVM. The Test-AzureStaticVNetIP PowerShell cmdlet can be used to check whether a specific IP address is currently in use.
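A sketch of checking and assigning a static IP, using the Static subnet from the earlier network configuration (the service, VM name, and address are illustrative):

```powershell
# Check whether the desired address is free in the VNET
Test-AzureStaticVNetIP -VNetName "AtlanticVNET" -IPAddress "10.2.0.10"

# Assign the static IP to an existing VM in the VNET
Get-AzureVM -ServiceName "anIaaSService" -Name "myDC" |
    Set-AzureStaticVNetIP -IPAddress "10.2.0.10" |
    Update-AzureVM
```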
Endpoints
An endpoint in Azure refers to the combination of a port number and protocol configured for a role in a PaaS cloud service or a VM in an IaaS cloud service.
Three types of endpoint are configurable for a role in a PaaS cloud service:
- Input endpoint
- Instance input endpoint
- Internal endpoint
Both types of input endpoint are exposed through the Azure Load Balancer. An input endpoint is used for hash-based load balancing while an instance input endpoint is used for port forwarding. An internal endpoint exposes a port addressable by other VMs in the cloud service. Internal endpoints provide discoverability, through the Azure Runtime API, of both the VM DIP and the port actually used on individual VMs. If permitted by the firewall, any VM in a VNET can connect to any other VM in the VNET, regardless of cloud service, as long as the DIP and port number to connect to are known.
The Azure Load Balancer exposes two types of endpoint for a VM in an IaaS cloud service:
- Load-balanced availability set
- Port forwarding
A load-balanced availability set is used to configure hash-based load balancing, while a port-forwarded endpoint forwards traffic from a specific public port to a specific VM. Both types of endpoint can be configured at any time.
The Azure Load Balancer routes traffic to the VMs or instances in a cloud service if an appropriate endpoint has been declared. Note that traffic sent directly to a PIP bypasses the load balancer completely. The Azure Load Balancer provides an ACL capability which further restricts the traffic routed to VMs and instances. The ACL feature allows a set of permit and deny rules to be configured for an endpoint, and the Azure Load Balancer uses these rules to decide whether or not inbound traffic should be routed to VMs and instances. For example, a rule can be configured to route only traffic coming from a single IP address, such as the public IP address of a company.
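A sketch of configuring an ACL on an IaaS endpoint (the rule order, subnet, and names are illustrative):

```powershell
# Build an ACL that permits a single office subnet; traffic not
# matching a permit rule is denied
$acl = New-AzureAclConfig
Set-AzureAclConfig -AddRule -ACL $acl -Action Permit `
    -RemoteSubnet "203.0.113.0/24" -Order 100 `
    -Description "Allow office traffic"

# Apply the ACL to an existing endpoint on a VM
Get-AzureVM -ServiceName "anIaaSService" -Name "myVM1" |
    Set-AzureEndpoint -Name "HttpIn" -Protocol tcp `
        -LocalPort 80 -PublicPort 80 -ACL $acl |
    Update-AzureVM
```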
Internal Load Balancer
Azure supports the creation of an internal load balancer, which exposes an internal endpoint through which traffic can be routed to one or more VMs in the same VNET or cloud service. Currently, Azure supports internal load balancing only for IaaS VMs deployed into a new cloud service or a Regional VNET. An internal load balancer is created inside a cloud service.
The following PowerShell cmdlets are used to manage internal load balancers:
Add-AzureInternalLoadBalancer is used to create an internal load balancer inside a cloud service. The internal load balancer is identified by name. Get-AzureInternalLoadBalancer retrieves the name and DIP of the internal load balancer in a cloud service. An internal load balancer which is not currently being used can be deleted using Remove-AzureInternalLoadBalancer.
Load balanced endpoints are created on the VMs to be load balanced by using Set-AzureEndpoint in combination with New-AzureVM or Update-AzureVM depending on whether the VM already exists. Set-AzureEndpoint has two parameters to manage the internal load balancer: -InternalLoadBalancerName which specifies the internal load balancer to use; and -LBSetName which provides a name to identify the set of VMs to be load balanced.
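Putting the pieces together, an illustrative sketch (the load balancer, service, subnet, and endpoint names are placeholders):

```powershell
# Create an internal load balancer inside an existing IaaS cloud
# service, on the BackEnd subnet of the VNET
Add-AzureInternalLoadBalancer -InternalLoadBalancerName "myILB" `
    -ServiceName "anIaaSService" -SubnetName "BackEnd"

# Add a load-balanced endpoint on the internal load balancer to a VM;
# repeat for each VM in the load-balanced set
Get-AzureVM -ServiceName "anIaaSService" -Name "myVM1" |
    Add-AzureEndpoint -Name "SqlIn" -Protocol tcp `
        -LocalPort 1433 -PublicPort 1433 `
        -LBSetName "sqlSet" -InternalLoadBalancerName "myILB" |
    Update-AzureVM
```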
VPN Connectivity
Azure supports two types of VPN:
- Site-to-Site (S2S)
- Point-to-Site (P2S)
S2S is enterprise focused and uses a hardware or software VPN router on the client side to share the connection among many users. Azure provides router configuration scripts for many standard Cisco and Juniper VPN Routers.
P2S is developer focused and uses the RAS software that comes with Windows. User authentication is provided through X.509 certificates. A master X.509 certificate is uploaded to the Azure Portal and is then used to create user-specific X.509 certificates which are distributed individually to each user. P2S is therefore easy to use for small trusted development teams but does not scale well because of the inability to revoke the user certificates.
Both types of VPN connect to a VPN Gateway configured inside an Azure VNET. This gateway is fully managed by Azure and is deployed in an HA manner. Traffic can be routed in either direction across the VPN making it possible to do things like connect a front-end PaaS cloud service to an on-premises SQL Server. Another important use of the VPN is the ability to perform all system administration over the VPN without the need for SSH and RDP endpoints.
It is possible to configure a VPN between two Regional VNETs hosted in different regions. This is done by creating a VPN Gateway in each region and cross-referencing the address ranges for each VNET. Traffic going across such a VPN is routed over the Microsoft Azure backbone rather than across the public internet.
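In the network configuration file, each VNET references the other as a local network site whose gateway address is the public IP of the other VNET's VPN Gateway. An illustrative fragment, as seen from one of the two VNETs (the site name, address range, and gateway address are placeholders):

```xml
<LocalNetworkSites>
  <!-- The remote VNET, declared as a "local" network site -->
  <LocalNetworkSite name="PacificVNET">
    <AddressSpace>
      <AddressPrefix>172.16.0.0/16</AddressPrefix>
    </AddressSpace>
    <!-- Public IP of the VPN Gateway created in the remote VNET -->
    <VPNGatewayAddress>23.100.0.1</VPNGatewayAddress>
  </LocalNetworkSite>
</LocalNetworkSites>
```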
ExpressRoute
Microsoft has worked with various network partners to provide a direct private connection into Azure, a feature named ExpressRoute. It comes in two flavors:
- Exchange Provider
- Network Service Provider
Exchange Provider is offered by hosting companies such as Equinix and Level 3 in which they provide their customers with direct connections into Azure. Alternatively, network service providers such as AT&T and BT provide MPLS connectivity into Azure.
ExpressRoute provides direct connectivity to an Azure VPN Gateway and then to the VNET hosting the gateway and any cloud services hosted in the VNET. It also provides access to Azure Storage, but not to other Azure services such as Azure SQL Database. ExpressRoute provides speeds up to 1Gbps during preview with this limit increased to 10Gbps when it goes GA.
Name Resolution
Azure provides name resolution services for VMs in IaaS and PaaS cloud services. It also provides name resolution for up to 100 VMs in a VNET, provided their fully-qualified domain name is used. Otherwise, a DNS server must be provided if name resolution services are needed. This is specifically the case in hybrid deployments where on-premises servers must contact Azure VMs over a VPN. The DNS server is configured in the network configuration file for the subscription and should be deployed with a static DIP.
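An illustrative fragment of the network configuration file that registers a DNS server and references it from a VNET (the server name and address are placeholders; the elided sections are unchanged from the earlier example):

```xml
<Dns>
  <DnsServers>
    <!-- A DNS server VM deployed with a static DIP in the VNET -->
    <DnsServer name="myDnsServer" IPAddress="10.2.0.4" />
  </DnsServers>
</Dns>
...
<VirtualNetworkSite name="AtlanticVNET" Location="East US">
  <DnsServersRef>
    <DnsServerRef name="myDnsServer" />
  </DnsServersRef>
  ...
</VirtualNetworkSite>
```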
Azure Traffic Manager
The Azure Traffic Manager provides global traffic routing to destinations both in Azure and elsewhere. It works through the dynamic mapping of short-lived DNS entries to actual IP addresses. Once a Traffic Manager profile is configured, it provides a DNS name, such as mydomain.trafficmanager.net, which is mapped dynamically to an appropriate DNS entry for the actual cloud service to be used. The Azure Traffic Manager can also be used to route traffic to websites outside Azure.
Traffic Manager provides the following load-balancing choices:
- Performance
- Round Robin
- Failover
The Performance choice indicates that the application is deployed as distinct cloud services in multiple geographic locations, such as every Azure region, and that the user accessing the application should be automatically redirected to the cloud service with the lowest latency. Internally, Traffic Manager is provided with latency tables for different routes across the internet and uses these tables in choosing where to route the user.
The Round Robin choice indicates that new users should be allocated to the underlying cloud services in a round-robin manner.
The Failover choice indicates that there is a primary cloud service and one or more passive secondary cloud services. All traffic is sent to the primary cloud service and, when it fails, the traffic is sent to one of the configured secondary cloud services instead. The Traffic Manager detects the health of the underlying cloud service by performing a ping every 30 seconds against a configured URL hosted by the cloud service. If the Traffic Manager fails to receive a response more than twice, it marks the cloud service as unhealthy and starts routing traffic to a secondary cloud service.
The new Regional VNET capability of Azure has allowed the provision of a wide variety of network services such as internal load balancers, instance-level public IP addresses, and VNET-to-VNET VPN capability. This post provided a brief summary of these features and described how they fit into the existing network capabilities of the Azure Platform.
Published at DZone with permission of Neil Mackenzie, DZone MVB. See the original article here.