Designing a Microsoft Hyper-V environment involves many configuration decisions that determine whether you end up with a performant, stable, and robust virtualization platform. Errors in the design, however, can certainly lead to major performance and stability issues in the Hyper-V environment.
In this post, we will consider some very important design considerations when building out a Microsoft Windows Server 2016 Hyper-V environment. When considering the design of Windows Server Hyper-V, the key areas we will focus on are the Hyper-V network and storage.
Table of Contents
- Components of Effective Hyper-V Design
- Hyper-V Network Design
- Hyper-V Storage Design with SAN
- Hyper-V Storage Design with Storage Spaces Direct
- Concluding Thoughts
There are several established Hyper-V designs that can help to shortcut the process of building out and designing the various parts of the Hyper-V infrastructure.
Let’s take a close look at these design considerations in a bit more detail.
Components of Effective Hyper-V Design
As mentioned, the two key areas that most often lead to Hyper-V design issues, and by extension performance and stability problems, are networking and storage. With Windows Server 2016, the features and functionality of Hyper-V in general have been greatly extended. Additionally, in the areas of Hyper-V networking and storage, there are robust new feature sets that anyone provisioning a greenfield Windows Server 2016 Hyper-V installation needs to consider.
We will look at the following areas of Hyper-V Design Considerations:
- Network Design
- Storage Design with SAN
- Storage Design with Storage Spaces Direct
Hyper-V Network Design
Typically, design errors in the network configuration of a Hyper-V environment cause tremendous issues with the overall performance and stability of the Hyper-V cluster. Mistakes in designing redundancy into the layout, or in implementing NIC teaming, can lead to serious problems. Windows Server 2016 also introduced a newer, more effective converged networking approach that reduces complexity in both the management and service-related networks and increases their performance as well.
What is converged networking?
Simply put, converged networking allows the many types of Hyper-V network traffic to share multiple high-bandwidth network adapters instead of dedicating NICs to specific types of traffic. Windows Server 2016 does not require an LBFO NIC team to converge networks on a Hyper-V host. Instead, it can make use of a technology called Switch Embedded Teaming (SET), which lets the virtual switch itself perform the teaming. It is important to note, however, that converged networking can be done with or without SET.
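As a rough sketch, a converged network built on SET might look like the following in PowerShell (the switch name, physical adapter names, vNIC names, and VLAN IDs are placeholders for illustration, not prescribed values):

```powershell
# Create a virtual switch with Switch Embedded Teaming (SET) across two
# physical adapters -- no LBFO NIC team is required.
New-VMSwitch -Name "ConvergedSwitch" `
    -NetAdapterName "pNIC1", "pNIC2" `
    -EnableEmbeddedTeaming $true `
    -AllowManagementOS $false

# Carve out host (ManagementOS) vNICs for each traffic class
# instead of dedicating physical NICs to them.
Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster"       -SwitchName "ConvergedSwitch"

# Isolate each traffic class on its own VLAN (IDs are examples only).
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster"       -Access -VlanId 30
```

The same vNIC approach extends to other traffic classes (backup, replica, and so on) without consuming additional physical ports.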
It is important to keep these new capabilities and best practices in mind when designing a Hyper-V cluster, as many network teams may prefer to implement "teaming" with LACP or other switch-dependent methods. With Hyper-V, it is better to use the dynamic, switch-independent teaming capabilities built into the hypervisor.
Aside from the overall recommendation with Windows Server 2016 to utilize the converged network design, when thinking about storage, the network plays a key role in the performance here as well. With Hyper-V in Windows Server 2016, there are new technologies such as Storage Spaces Direct in both the disaggregated and hyperconverged models. Additionally, the more classic design of using a NAS or SAN is most certainly still a relevant technology in many environments with strong use cases.
When thinking about network design for Hyper-V across the storage designs available in Windows Server 2016, there are considerations to be made. Any iSCSI solution used for SAN connectivity should have at least two dedicated network adapters carrying iSCSI traffic. Storage Spaces Direct designs should use at least two RDMA-capable network adapters with at least 10 Gbps of bandwidth each.
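As an illustrative check (the adapter names are placeholders), the RDMA capability of the storage-facing NICs in an S2D design can be verified and enabled from PowerShell:

```powershell
# List adapters and whether RDMA is supported and enabled on each.
Get-NetAdapterRdma | Format-Table Name, Enabled

# Enable RDMA on the two storage-facing adapters.
Enable-NetAdapterRdma -Name "SMB1", "SMB2"

# Confirm that SMB Direct sees usable RDMA-capable interfaces.
Get-SmbClientNetworkInterface | Format-Table InterfaceIndex, RdmaCapable, FriendlyName
```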
When designing the physical architecture of the Hyper-V environment, it is certainly best practice to use two dedicated switches for storage traffic. Physical cabling from each Hyper-V host should be "X'ed" out so that there is no single point of failure in the physical equipment carrying the storage traffic.
When designing the Hyper-V storage network, it is important to use jumbo frames (typically an MTU of 9000 or higher). Jumbo frames reduce CPU load and are definitely recommended for achieving the best performance in the Hyper-V storage design. The physical switches used for the storage network must support jumbo frames, so this is certainly a design consideration.
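A minimal sketch of enabling and verifying jumbo frames on the storage adapters (adapter names and the target IP are placeholders; the advanced property keyword and maximum value vary by NIC driver):

```powershell
# Set jumbo frames on the storage adapters; "*JumboPacket" with a value
# of 9014 bytes is common on many NIC drivers, but check your vendor's docs.
Set-NetAdapterAdvancedProperty -Name "SMB1", "SMB2" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# Verify end to end: an 8972-byte payload plus 28 bytes of ICMP/IP headers
# fits exactly in a 9000-byte MTU; -f forbids fragmentation, so the ping
# fails if any hop in the storage path is not passing jumbo frames.
ping 10.0.50.11 -f -l 8972
```

Running the don't-fragment ping against each storage target confirms that both the NICs and the physical switches are actually passing jumbo frames.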
Hyper-V Storage Design with SAN
Hyper-V is a very flexible hypervisor when it comes to storage configuration and can meet the demands of most environments and use cases. The first storage scenario to consider is Hyper-V storage design with a SAN.
In this model, there are several design considerations that need to be made.
Hyper-V can make use of either iSCSI or Fibre Channel. Fibre Channel has certainly evolved from the early days of running only over optical fiber cables; today there are many options, including Fibre Channel over Ethernet (FCoE). Aside from the redundancy of the storage network already mentioned, when designing the NAS/SAN environment for Hyper-V, be sure to utilize a NAS/SAN with multiple controllers for redundancy.
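To take advantage of a multi-controller array and the redundant paths to it, each Hyper-V host should run Multipath I/O. A hedged sketch of the typical iSCSI setup (the round-robin policy is one common choice, not the only valid one):

```powershell
# Install the Multipath I/O feature so the host can use every path
# to a multi-controller SAN instead of seeing duplicate disks.
Install-WindowsFeature -Name Multipath-IO

# Automatically claim iSCSI-attached disks for MPIO (a reboot may be required).
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Spread I/O across the redundant paths with a round-robin default policy.
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```

Many array vendors ship their own DSM or recommend a specific load-balance policy, so the vendor's documentation should take precedence over these defaults.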
Using Cluster Shared Volumes is certainly the way to go with Hyper-V storage. Cluster Shared Volumes (CSVs) allow multiple hosts to access the same storage simultaneously. This enables effective Hyper-V high availability: every host has access to the same storage and can take over running virtual machines in the event of a host failure.
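Once a LUN is visible to every node, turning it into a CSV is a short exercise in PowerShell (the disk resource name below is the default a cluster assigns and is used here as an example):

```powershell
# List disks the cluster can see that are not yet clustered.
Get-ClusterAvailableDisk

# Add the available disks to the cluster, then convert one to a
# Cluster Shared Volume; it appears on every node under C:\ClusterStorage\.
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```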
Cluster Shared Volumes require SCSI-3 persistent reservations. Another consideration is how the Cluster Shared Volume is formatted. ReFS in Windows Server 2016 is Microsoft's newer, next-generation file system and touts many advantages over NTFS. However, at this point it is recommended not to use ReFS in a Hyper-V cluster unless Storage Spaces Direct is being utilized.
When used for Cluster Shared Volumes, ReFS always runs in file system redirection mode, which sends all I/O over the cluster network to the coordinator node for the volume. In deployments utilizing a NAS or SAN, this can dramatically impact CSV performance. Use NTFS in a SAN/NAS design.
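A minimal sketch of preparing a SAN LUN with NTFS before it is added as a CSV (the disk number and label are placeholders; the 64 KB allocation unit size is a common choice for volumes holding large VHDX files, not a hard requirement):

```powershell
# Initialize the new LUN and format it with NTFS.
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "CSV01"
```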
Hyper-V Storage Design with Storage Spaces Direct
With Windows Server 2016, the most modern storage approach for organizations looking to implement a Hyper-V cluster is utilizing a Storage Spaces Direct Design. Software defined storage implementations are by far the most flexible, agile, and scalable solutions for virtualization and hold several key advantages. There are a couple of models with the Storage Spaces Direct design that can be implemented:
- Disaggregated
- Hyperconverged
As with any implementation, designing the appropriate solution to fit a use case will vary depending on the specific environment and business needs. With Storage Spaces Direct (S2D) choosing between the two different models for Hyper-V cluster design may depend on the size of the environment and licensing cost considerations.
Disaggregated Model
In the disaggregated model, the compute and storage nodes are separate entities in separate clusters. The storage cluster consists of at least three nodes; the compute cluster needs at least two nodes, with three recommended. In much larger environments, the disaggregated model is more desirable, since it allows scaling compute independently of storage and better fits large-scale use cases. Licensing, however, is more expensive in the disaggregated model.
The storage layer of a disaggregated Storage Spaces Direct design is worth a closer look. The storage cluster is presented as a Scale-Out File Server, with the Hyper-V hosts accessing it over SMB 3. ReFS is the recommended file system here due to its accelerated VHDX operations.
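A hedged sketch of exposing the storage cluster to the compute cluster (the role name, share path, and domain/group names are illustrative placeholders; in practice the Hyper-V host computer accounts need full control on the share):

```powershell
# On the storage cluster: add the Scale-Out File Server role and share
# an S2D volume over SMB 3 for the separate Hyper-V compute cluster.
Add-ClusterScaleOutFileServerRole -Name "SOFS"
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\VMs" `
    -FullAccess "DOMAIN\HV-Hosts$", "DOMAIN\HV-Admins"

# On the Hyper-V hosts, virtual machines are then placed on the UNC path:
# New-VM -Name "vm01" -Path "\\SOFS\VMs" ...
```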
Hyperconverged Model
When designing Hyper-V clusters in the hyperconverged model, the design is considerably simpler by comparison, since both compute and storage are "hyperconverged" into the same hosts. Each hyperconverged Hyper-V cluster node houses both compute and storage, generally in the form of direct-attached drives within the host. Four hosts are recommended, since this enables multi-resilient virtual disks and erasure coding. This approach is desirable for its simplicity and scalability as well as from a licensing perspective.
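The hyperconverged build itself can be sketched in a handful of commands (node names, the cluster name, and the volume size are placeholders; `Enable-ClusterStorageSpacesDirect` claims every eligible local drive, so run it only on hardware intended for S2D):

```powershell
# Validate the nodes for S2D, then form the cluster without shared storage.
Test-Cluster -Node "hv01", "hv02", "hv03", "hv04" `
    -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
New-Cluster -Name "HVC1" -Node "hv01", "hv02", "hv03", "hv04" -NoStorage

# Pool the eligible local drives across all nodes into Storage Spaces Direct.
Enable-ClusterStorageSpacesDirect

# Create a ReFS Cluster Shared Volume from the pool; with four nodes,
# multi-resilient (mirror-accelerated parity) volumes become possible.
New-Volume -FriendlyName "Volume1" -FileSystemType CSVFS_ReFS `
    -StoragePoolFriendlyName "S2D*" -Size 2TB
```

Virtual machines are then created directly on `C:\ClusterStorage\Volume1` on any node, with no separate storage cluster to manage.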
Concluding Thoughts
There are certain Hyper-V best practices that need to be followed when designing Microsoft Hyper-V clusters for implementation. When implementing any solution, choosing the right design for the environment is crucial, as there is certainly no "one size fits all" deployment. Choosing the correct network and storage configuration can mean the difference between a high-performance, stable design and one that presents problems from the initial implementation. This underscores the importance of considering all facets and best practices when designing a Hyper-V cluster installation.
Related Posts:
Benchmarking Hyper-V Cluster Storage Performance
Hyper-V Storage Best Practices