Virtual Network in Hyper-V
Our blog series on virtualization has gotten off to a good start. In today’s article we discuss network virtualization in Windows Server 2012 Hyper-V. To start with the basics: virtual machines need to be presented with a network port in order to gain access to the physical network. Since the host has only a small number of physical network adapters, the hypervisor (Hyper-V) needs to split the network traffic across virtual network switches. These virtual switches contain virtual ports, and each virtual port appears inside the virtual machine to which it is assigned as a traditional network adapter. The diagram below shows this connectivity:
Before we start deploying virtual switches in a production environment, let’s take a step back and think through how best to ensure our network is fault tolerant. New in Server 2012 is the capability to team physical network ports together. Through this process we can bind two physical network card ports into a single aggregation point for network traffic, which provides failover in the event of a physical network failure. To access the NIC Teaming wizard, open Server Manager and, in the navigation pane, click on the Local Server. In the Properties area, click on NIC Teaming. The available network ports are displayed here, and the administrator can select which ports will be part of the new aggregate.
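As an alternative to the Server Manager wizard, the same team can be created with PowerShell. A minimal sketch; the team name and adapter names ("Ethernet 1", "Ethernet 2") are placeholders for whatever `Get-NetAdapter` reports on your host:

```powershell
# Create a switch-independent team from two physical ports.
# Adapter names are placeholders - run Get-NetAdapter to find yours.
New-NetLbfoTeam -Name "HostTeam" `
                -TeamMembers "Ethernet 1","Ethernet 2" `
                -TeamingMode SwitchIndependent

# Verify the team and the status of its member adapters.
Get-NetLbfoTeam -Name "HostTeam"
Get-NetLbfoTeamMember -Team "HostTeam"
```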
There are two modes for NIC teaming: Switch Independent and Switch Dependent.
Switch Independent – In this mode, the network connections are physically attached to different switches on the network, providing alternative routes for the traffic. This type of team can be set up in one of two ways: Active/Active or Active/Standby. In Active/Active mode each network port is live and traffic is balanced across the available ports. This allows for higher bandwidth as well as fault tolerance, since a network port can fail without disrupting traffic; however, when fewer network pipes are available during a failure, less bandwidth is available as well. In Active/Standby mode one port is configured as the primary port for traffic, while the secondary port stands by ready to take over in the event the first port fails. Bandwidth in the Active/Standby configuration stays constant, since only a single connection is live at any one point in time.
Switch Dependent – All of the network ports are physically connected to the same switch. This is the more traditional method of aggregating links onto one physical switch, and it supports the Link Aggregation Control Protocol (LACP, IEEE 802.1ax, formerly 802.3ad). Using LACP is optional and requires the physical switch to support the protocol as well. If LACP is not enabled, a generic static mode is used to balance the traffic instead.
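Switch-dependent teaming can likewise be scripted. This sketch assumes the upstream physical switch already has an LACP-capable port channel configured for both connected ports; names are placeholders:

```powershell
# Switch-dependent team negotiating LACP (IEEE 802.1ax) with the
# physical switch - both ports must land on the same switch.
New-NetLbfoTeam -Name "LacpTeam" `
                -TeamMembers "Ethernet 1","Ethernet 2" `
                -TeamingMode Lacp

# If the switch cannot negotiate LACP, fall back to the generic
# static switch-dependent mode instead:
# New-NetLbfoTeam -Name "StaticTeam" -TeamMembers "Ethernet 1","Ethernet 2" -TeamingMode Static
```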
A couple of important notes concerning NIC Teaming as it pertains to the virtual infrastructure. If the traffic is made up of large TCP sequences, for example a Hyper-V Live Migration or Shared Nothing Live Migration, only one port in the team is utilized for that traffic. This cuts down on the retransmits that would occur if packets arrived out of order on the receiving end of this high-speed conversation when spread across multiple ports. Hyper-V takes care of this for you.
Now that we understand the principles of the physical network stack, let’s talk through the basic options available for the virtual network stack.
Virtual Network Options
In Hyper-V Manager we will take a look at the Virtual Switch Manager and set up a new virtual switch:
Notice that we have three options available to us: External, Internal, and Private. Let’s talk through each of these choices first.
External – For network traffic that will be destined to the physical switches in the server’s location. This is the switch type most commonly used in production environments to serve data for end users and other server systems on the network.
Internal – For network traffic that will stay local to the host server and virtual machines that reside on the server.
Private – For network traffic that will reside only on this virtual switch, with connectivity limited to other virtual machines that have virtual network ports allocated on this switch. Not even the host can access this network.
Once the switch has been given a name, a physical network port is chosen from the drop-down menu and the creation process can be completed. External and internal switches may have a VLAN ID assigned to them if desired; private switches cannot be associated with a VLAN ID.
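All three switch types can also be created from PowerShell. A sketch under the assumption that a teamed adapter named "HostTeam" exists; switch and VM names are placeholders:

```powershell
# External switch bound to a physical (or teamed) adapter;
# -AllowManagementOS keeps a virtual port for the host itself.
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "HostTeam" -AllowManagementOS $true

# Internal switch: traffic stays between the host and local VMs.
New-VMSwitch -Name "InternalSwitch" -SwitchType Internal

# Private switch: VM-to-VM traffic only, no host access.
New-VMSwitch -Name "PrivateSwitch" -SwitchType Private

# Tag a VM's virtual port with a VLAN ID (external/internal switches).
Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 10
```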
A couple of interesting tidbits concerning Hyper-V virtual switches. The Hyper-V server creates a pool of 256 MAC addresses by default; however, this number can be modified by changing the default address range. In Hyper-V Manager, click on MAC Address Range. The minimum and maximum values are displayed on the right-hand side of the screen, and these can be edited to enlarge or shrink the range of virtual network port MAC addresses that will be automatically assigned:
You also have the option of assigning static MAC addresses to virtual machines; just be sure to choose a MAC address that is not within the dynamic range above.
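Both settings can be scripted as well. The address values below are examples only; substitute a range and static address appropriate to your environment:

```powershell
# Widen the pool of dynamically assigned MAC addresses on the host
# (example values - adjust for your environment).
Set-VMHost -MacAddressMinimum "00155D000000" -MacAddressMaximum "00155D0003FF"

# Pin a static MAC on a VM's adapter - pick one outside the dynamic
# range above to avoid collisions with auto-assigned addresses.
Set-VMNetworkAdapter -VMName "VM01" -StaticMacAddress "00155DFF0001"
```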
Hyper-V virtual switches do not have a limit on the number of virtual ports that can be connected to virtual machines. Other virtualization platforms may have such limitations, so check your hypervisor’s documentation before proceeding with large numbers of network connections.
Advanced Virtual Networking Functions
In Server 2012 Hyper-V we have a few more advanced options that we can take advantage of for improving performance, isolating traffic, and offloading network payloads to hardware more prepared for the work than the traditional server bus. These options require special physical network adapters that include these features built into the cards themselves. To enable these options we look to the Settings screen on each virtual machine that would need these features enabled:
Virtual Machine Queue (VMQ) is a technique for storing incoming packets in separate queues and then directing the packets to each virtual machine directly. This bypasses the normal routing mechanism performed by the virtual switch.
IPsec task offloading allows the network cryptographic processing to be directed to the physical network card instead of being run entirely in software by the hypervisor. The number of offloaded security associations can be configured here as well.
Single-root I/O virtualization (SR-IOV) is a feature that allows a virtual machine to bypass the virtual network stack and send traffic directly through the physical network card.
Live migration in Server 2012 Hyper-V is supported even when virtual machines are configured to use SR-IOV. For other hypervisors, please check the documentation before considering an SR-IOV implementation, as utilizing this function may severely limit the flexibility that virtualization promises to the datacenter.
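The three offloads above can also be enabled per virtual machine with PowerShell. A sketch with placeholder names; the hardware must support each feature, and SR-IOV must additionally be enabled on the switch when it is created:

```powershell
# Enable VMQ and SR-IOV on a VM's network adapter; a weight of 0
# disables the feature, a non-zero weight (1-100) enables it.
Set-VMNetworkAdapter -VMName "VM01" -VmqWeight 100 -IovWeight 100

# Allow up to 512 offloaded IPsec security associations on the adapter.
Set-VMNetworkAdapter -VMName "VM01" -IPsecOffloadMaximumSecurityAssociation 512

# SR-IOV also has to be switched on when the virtual switch is built:
# New-VMSwitch -Name "SriovSwitch" -NetAdapterName "Ethernet 1" -EnableIov $true
```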
Now jump in and start building your own virtual network. If you have not set up a Server 2012 lab, grab the bits here and get started. Remember, the Early Experts program is in full swing as well, with step-by-step guides on configuring your own virtual lab.
Published at DZone with permission of Tommy Patterson, DZone MVB. See the original article here.