One of the most common mistakes system administrators make is to think about improving the performance of their Hyper-V setups only when their servers start throwing tantrums. The most important thing is to begin with storage and network configurations designed around the performance you expect from them.
So what is this post all about? It covers guidelines for some of the best storage and networking configuration practices for running Hyper-V servers; follow them and they will yield great results. Let’s take a plunge into it!
As you know, Hyper-V is a Type 1 hypervisor that interacts directly with the hardware, but you still need an OS for managing it. Windows Server Core, Windows Server, and Hyper-V Server will all help you achieve that. Just ensure that you do not install any other application on it. Installing an extra role or feature on the management OS is like sending it a message to prioritize that workload over the jobs running on the Hyper-V VMs. So how do you allocate storage to the VMs the right way? Read on…
Storage Management:
Let’s start from scratch. Yes, from assigning storage for placing VMs on the Hyper-V host.
There are basically four main components that make up VMs running on Hyper-V. They are:
- Hyper-V Virtual Hard Disks
- .BIN files
- Checkpoint (snapshot) files
- Hyper-V configuration files
Make sure you keep all of the above files in a single location; splitting them across multiple locations is definitely not a best practice.
When a VM is powered off and no snapshot files have been created, the VM takes up roughly the size of its VHDX on disk. When a VM is powered on, a .BIN file equal in size to the RAM assigned to that VM is created. When VMs use the host’s dynamic memory, you have to allocate enough reserve volume space for the Hyper-V .BIN files to keep performance efficient.
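If you want to set this up from PowerShell, here is a minimal sketch using the built-in Hyper-V cmdlets; the VM name and the memory values are assumptions to illustrate the idea:

```powershell
# A minimal sketch: enable dynamic memory on a VM and size its RAM envelope.
# "web01" and the byte values are assumptions; tune them to your workload.
# (The VM should be powered off before changing its memory configuration.)
Set-VMMemory -VMName "web01" -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB
```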
Here’s a sample of what we might see out in the field at customer sites.
Scenario: The admin is running a Hyper-V server with the following resources:
- 400 GB storage for 8 VMs
- Single-core processor with 100 GB RAM (on one host)
With 100 GB of RAM, each VM will consume approximately 10 GB of memory. So in this scenario, we would need 80+ GB of free disk space just for the .BIN files before we can start working with the VMs. In real-world scenarios, though, system administrators often manage to allocate only 50-70% of the total required free space while configuring storage. Over time, as additional files like .ISOs and stray VHDs accumulate (which obviously shouldn’t be happening!), the disk volumes bleed dry because there isn’t enough free space to create the .BIN files. Remember, the sky is falling if you DO NOT allocate enough reserve volume space for your Hyper-V servers.
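A quick sanity check you could script before trouble strikes: compare the total RAM assigned to the VMs (which approximates the .BIN reserve) against the free space on the volume that hosts them. The drive letter here is an assumption:

```powershell
# Sum the startup memory of all VMs on this host and compare it against
# the free space on the volume holding the VM files (assumed to be D:).
$binReserve = (Get-VM | Measure-Object -Property MemoryStartup -Sum).Sum
$freeSpace  = (Get-Volume -DriveLetter D).SizeRemaining
"{0:N1} GB needed for .BIN files, {1:N1} GB free" -f ($binReserve/1GB), ($freeSpace/1GB)
```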
Administrators run out of resources in this sequence: memory, disks, CPU, and network. When you are about to scale your environment, make sure you scale out the CPUs; the newer, the better, as newer processors offer maximum performance. The next thing you should think about is planning the target density level for the Hyper-V hosts. It is very important to find the acceptable limit that doesn’t drag performance south.
Hyper-V gives users the flexibility to use almost any type of storage setup. For all that flexibility, storage is the hardware resource with the highest impact on VM performance; memory comes next in importance. There is a resource contention problem as well: since VHDs reside in common storage pools in most cases, they are left to compete with each other for IOPS. This is where users can make effective use of file system deduplication to decrease storage IOPS.
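As a sketch of what that looks like, assuming the Data Deduplication feature is available and the VHDs live on an E: volume (both assumptions):

```powershell
# Install the deduplication feature and enable it on the VHD volume.
# The HyperV usage type shown here is a Windows Server 2016+ option;
# on 2012 R2, deduplication of running VMs is supported only for VDI.
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "E:" -UsageType HyperV
Get-DedupStatus -Volume "E:"   # check savings once optimization jobs have run
```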
The release of Windows Server 2012 R2 saw an interesting feature called Storage Quality of Service (QoS) that lets you reserve storage IOPS for a VHD; the IOPS are counted in normalized 8 KB increments. You can also cap any virtual disk’s I/O by setting a maximum allowable IOPS limit. Since this works on a per-VHD basis rather than at the VM level, you can try to extract the best possible throughput from the available IOPS.
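Here is a minimal sketch of per-VHD QoS; the VM name, controller location, and IOPS figures are assumptions for illustration:

```powershell
# Reserve 300 IOPS and cap at 2000 IOPS (normalized 8 KB units) for the
# data disk of a hypothetical VM at SCSI controller 0, location 1.
Set-VMHardDiskDrive -VMName "SQL01" `
    -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 `
    -MinimumIOPS 300 -MaximumIOPS 2000
```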
There will come a time when you get tired of scaling up physical storage volumes, and this is where Windows Storage Spaces comes in. This feature in Windows Server 2012 R2 abstracts physical storage into a pool of storage resources. You can create virtual disks on top of the pool without giving a second thought to physical storage allocations.
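A hedged sketch of pooling disks with Storage Spaces; the pool name, virtual disk name, size, and resiliency setting are all assumptions:

```powershell
# Gather the poolable physical disks, build a pool, and carve a mirrored
# virtual disk out of it. The "*Spaces*" wildcard matches the 2012 R2
# subsystem name; newer releases call it "Windows Storage on <host>".
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "HyperVPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem -FriendlyName "*Spaces*").FriendlyName `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "HyperVPool" -FriendlyName "VMData" `
    -ResiliencySettingName Mirror -Size 500GB
```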
Have you ever considered the importance of the file system for storage performance? It’s just as significant as any other storage best practice. Any system admin should weigh whether to provision their VMs on the NTFS file system or go ahead with ReFS volumes. Although ReFS is missing a few features, such as support for the Encrypting File System and compression, it is still good at maintaining data integrity and preventing bit rot, and it ends up being the better choice for storing large quantities of data. Even if you created virtual hard disks on an NTFS volume in the past and want them moved to ReFS volumes, you can do it by disabling the integrity bits on those virtual hard disks through PowerShell. For newly created VMs, Hyper-V automatically disables those integrity checks.
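The PowerShell side of that is nearly a one-liner; the VHDX path here is hypothetical:

```powershell
# Disable ReFS integrity streams on an existing VHDX moved over from NTFS,
# then confirm the change took effect.
Set-FileIntegrity -FileName "R:\VMs\web01.vhdx" -Enable $false
Get-FileIntegrity -FileName "R:\VMs\web01.vhdx"
```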
Although Hyper-V supports multiple storage connectivity options like iSCSI, Fibre Channel, virtual Fibre Channel, and so on, the way storage connectivity is established between the storage target and the Hyper-V server plays a crucial role that is often neglected. If you happen to run any backup application against pass-through disks, the first thing you should do is switch the VM to a saved state and then run the backups. This is because pass-through disks connect VMs on Hyper-V directly to physical disks rather than to virtual hard disks, and here’s the catch: Microsoft VSS writers do not have access to pass-through disks. Trying virtual FC connectivity, however, will help you overcome this limitation.
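If you do have to live with pass-through disks for now, here is a small sketch of the save-then-back-up routine; the VM name and the backup step are placeholders:

```powershell
# Put the VM into a saved state so the backup sees consistent disks,
# run the backup job, then bring the VM back online.
Save-VM -Name "LegacyDB"
# ... trigger your backup application here ...
Start-VM -Name "LegacyDB"
```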
That’s some brief insight into storage best practices. But what if admins end up binding Hyper-V servers to the wrong network interface? Does that sound like a total disaster? It definitely is. So let’s dig into some of the best network configuration practices.
Network Configurations:
First things first: virtual machines and Hyper-V hosts should be managed through separate, dedicated network interfaces. This helps you do effective capacity planning for the VMs and makes backups and cluster management easier when the hosts are joined to a centralized domain. Network connectivity to the storage targets is as important as deploying them. Dedicate a separate VLAN or switching infrastructure to the storage setup.
For example, if you happen to use iSCSI, you can go with either two 10 GbE ports or four 1 GbE ports, with jumbo frames configured.
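Enabling jumbo frames from PowerShell might look like this; the adapter name and frame size are assumptions, and your NIC driver dictates the supported values:

```powershell
# Set a ~9 KB jumbo frame size on a dedicated iSCSI adapter using the
# standardized *JumboPacket keyword. Match this value end to end:
# NIC, switch ports, and storage target all need the same MTU.
Set-NetAdapterAdvancedProperty -Name "iSCSI-NIC1" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014
```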
When it comes to troubleshooting network infrastructure, most admins have a very tough time because they forget one simple practice: labeling network interfaces! Labeling the cables and interfaces with the names of the Hyper-V hosts they serve will make your job super easy.
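It also helps to make the in-OS adapter names match the physical labels; here is a tiny sketch with hypothetical names:

```powershell
# Rename the default adapter names to match the labels on the cables.
Rename-NetAdapter -Name "Ethernet 2" -NewName "HV01-Mgmt"
Rename-NetAdapter -Name "Ethernet 3" -NewName "HV01-iSCSI"
```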
Among the different types of virtual networks, the external virtual network is the most popular type, with the added advantage of connecting virtual machines to any system on the physical network. Implementing the same network configuration on all hosts of a Hyper-V cluster is another thing that shouldn’t be missed; this practice is a great help when migrating VMs from one host to another.
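One way to keep the configuration identical is to script the switch creation and run the same script on every host; the switch and adapter names below are assumptions:

```powershell
# Create an identically named external switch on each cluster host,
# reserving the underlying adapter for VM traffic only.
New-VMSwitch -Name "External-VM" -NetAdapterName "NIC-Team1" -AllowManagementOS $false
```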
If you have trouble differentiating network cards in the host operating system, it’s simpler than you might think. Just navigate to the properties of the network card. If the checkbox for the virtual switch binding (shown as ‘Microsoft Virtual Network Switch Protocol’ on older releases and ‘Hyper-V Extensible Virtual Switch’ on newer ones) is enabled, that card has been bound to a Hyper-V virtual switch.
Moreover, have you ever encountered a case where a few of your Hyper-V servers bearing huge workloads (like heavy applications) consume a lot of network bandwidth? With VLAN tagging, you can isolate that traffic to specific network segments, keeping it away from the other servers and segments. On top of that, installing the Hyper-V integration components so VMs can use synthetic network interfaces will beef up performance, since communication occurs over shared memory through the synthetic driver stack.
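VLAN tagging for a single VM is one cmdlet; the VM name and VLAN ID here are assumptions:

```powershell
# Put the bandwidth-hungry VM's traffic on its own VLAN segment.
Set-VMNetworkAdapterVlan -VMName "AppServer01" -Access -VlanId 120
```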
We’ve finally come to the last leg of this post: monitoring virtual network performance. Overall network performance can be monitored from the management OS or on a per-adapter basis. You can view the total bytes sent/received per second for both the network adapter and the virtual switch. There are also cases where you experience a drop in network connectivity when migrating VMs to another host within a Hyper-V cluster. If you want your live migrations to succeed, configure the hosts to prioritize a dedicated network for live migration.
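To round things off, a hedged sketch of both ideas: sampling the virtual switch counters from the management OS and dedicating a subnet to live migration (the counter path and subnet are assumptions):

```powershell
# Sample total bytes/sec across all virtual switches three times, 5 s apart.
Get-Counter -Counter "\Hyper-V Virtual Switch(*)\Bytes/sec" `
    -SampleInterval 5 -MaxSamples 3

# Allow live migrations and restrict them to a dedicated subnet.
Enable-VMMigration
Add-VMMigrationNetwork -Subnet "192.168.50.0/24"
```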
Concluding Thoughts:
This post threw light on some best practices that shouldn’t be forgotten while setting up the storage and network configurations for your Hyper-V environment. These guidelines are highly recommended for system administrators who want to build high-performing Hyper-V servers that support robust workloads. Most importantly, you’ll find that the time you spend troubleshooting your setup is reduced to a great extent.