In our last post, we discussed virtual hardware for VMware virtual machines, covering the virtual CPU, memory, network, and storage that can be assigned to a virtual machine with the VM creation wizard or added later on demand. We also mentioned that the virtual machine hardware version is updated with every vSphere release, unlocking new guest operating system and virtual hardware features.
This article is dedicated to the features available in Virtual Machine Hardware version 13, which was introduced with the VMware vSphere 6.5 release. The first feature we will discuss is the increased resource maximums: a virtual machine running on hardware version 13 can be assigned up to 6 TB of RAM and up to 128 vCPUs.
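As a rough illustration of these maximums, the sketch below uses VMware's pyVmomi Python SDK to upgrade a powered-off virtual machine to hardware version 13 (vmx-13) and assign it the new limits. The vCenter hostname, credentials, and VM name are placeholders, not values from this post.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only: skip certificate verification; use valid certs in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Locate the (powered-off) virtual machine by DNS name -- hypothetical name.
vm = content.searchIndex.FindByDnsName(None, "test-vm-01", True)

# Upgrade the virtual hardware to version 13 first.
vm.UpgradeVM_Task(version="vmx-13")

# Then reconfigure up to the hardware version 13 maximums.
spec = vim.vm.ConfigSpec()
spec.numCPUs = 128                 # maximum vCPUs on hardware version 13
spec.memoryMB = 6 * 1024 * 1024    # 6 TB of RAM, expressed in MB
vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```

In a real script you would wait for each task to complete before issuing the next call; the waits are omitted here for brevity.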
Another important feature made available with VMware vSphere 6.5 and hardware version 13 is the Non-Volatile Memory Express (NVMe) virtual storage adapter, also known as Virtual NVMe (vNVMe). NVMe is a logical device interface specification for accessing non-volatile storage media attached through a PCI Express bus, in both physical and virtual hardware. Virtual NVMe devices provide high-performance guest block I/O, allowing more VDI VMs per host and more transactions per minute. vNVMe also supports vSAN and vSphere Virtual Volumes as back-end storage.
When working with NVMe, we need to ensure that we are using a supported guest operating system. These include Windows 7 and Windows Server 2008 R2 (both of which require a hotfix downloaded separately from Microsoft), Windows 8.1, Windows Server 2012 R2, Windows 10, Windows Server 2016, RHEL, CentOS, NeoKylin 6.5 and later, Oracle Linux 6.5 and later, Ubuntu 13.10 and later, FreeBSD 10.1 and later, Mac OS X 10.10.3, and Debian 8.0 and later.
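To show how the adapter is attached, here is a minimal pyVmomi sketch that adds a virtual NVMe controller to an existing VM through a device-change spec. It reuses the vm object from the sketch above and assumes the VM is already on hardware version 13.

```python
from pyVmomi import vim

# Describe a new virtual NVMe controller on bus 0; a negative key tells
# vCenter to assign the real device key when the controller is created.
nvme_ctlr = vim.vm.device.VirtualNVMEController()
nvme_ctlr.busNumber = 0
nvme_ctlr.key = -1

ctlr_spec = vim.vm.device.VirtualDeviceSpec()
ctlr_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
ctlr_spec.device = nvme_ctlr

# Apply the device change; "vm" is the VM object from the earlier sketch.
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[ctlr_spec]))
```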
NVMe devices are highly parallelized; to achieve maximum throughput, use them to back multiple virtual disks so that I/O is spread across the adapter. The high throughput of NVMe creates a high CPU load, which means it is a better fit for hardware configurations with many cores per socket and multiple sockets.
Compared with virtual SATA devices, vNVMe virtual storage adapters access local PCIe SSD devices with significantly lower CPU cost per I/O and significantly higher IOPS.
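Following the parallelism advice above, a sketch like this spreads I/O across several virtual disks on the NVMe controller. The controller key 31000 is illustrative; in a real script you would read the actual key back from vm.config.hardware.device after the controller is created.

```python
from pyVmomi import vim

def nvme_disk_spec(controller_key, unit_number, size_gb):
    """Spec for one new thin-provisioned disk on the NVMe controller."""
    disk = vim.vm.device.VirtualDisk()
    disk.key = -(unit_number + 2)          # distinct placeholder keys
    disk.controllerKey = controller_key
    disk.unitNumber = unit_number
    disk.capacityInKB = size_gb * 1024 * 1024

    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = "persistent"
    backing.thinProvisioned = True
    disk.backing = backing

    spec = vim.vm.device.VirtualDeviceSpec()
    spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    spec.device = disk
    return spec

# Four 100 GB disks on one controller, so the guest can issue I/O in parallel.
changes = [nvme_disk_spec(31000, unit, 100) for unit in range(4)]
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=changes))
```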
Another important feature made available in VMware vSphere 6.5 with hardware version 13 is RDMA over Converged Ethernet (RoCE), a network protocol that allows Remote Direct Memory Access (RDMA) over an Ethernet network.
RDMA allows direct memory access from the memory of one computer to the memory of another without involving either machine's operating system or CPU. The memory transfer is offloaded to RDMA-capable Host Channel Adapters (HCAs).
When the virtual machines run on the same host, or when no physical RDMA device is present, the data transfer is a memcpy between the virtual machines. When the virtual machines reside on different ESXi hosts and a physical RDMA connection is available, the communication between them uses the underlying physical RDMA devices. Last but not least, when the virtual machines run on different ESXi hosts and one of the hosts does not have a physical RDMA device, the communication falls back to a TCP-based channel and performance is degraded.
With vSphere 6.5, RDMA communication is possible between virtual machines running with a paravirtualized RDMA (PVRDMA) adapter, which gives a virtual machine direct memory access. When configuring it on a virtual machine, we need to ensure that the virtual machine is running with hardware version 13 and that vCenter Server Appliance 6.5 is used with Distributed Switches.
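As a closing sketch, the same SDK can attach the PVRDMA adapter. This assumes pg is a vim.dvs.DistributedVirtualPortgroup already looked up on the distributed switch (the port group name and lookup are omitted) and that the VM is on hardware version 13.

```python
from pyVmomi import vim

# Paravirtualized RDMA adapter, available from hardware version 13 onward.
nic = vim.vm.device.VirtualVmxnet3Vrdma()
nic.key = -1

# PVRDMA requires a distributed switch, so back the NIC with a distributed port.
backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo()
backing.port = vim.dvs.PortConnection(
    portgroupKey=pg.key,
    switchUuid=pg.config.distributedVirtualSwitch.uuid)
nic.backing = backing

nic_spec = vim.vm.device.VirtualDeviceSpec()
nic_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
nic_spec.device = nic

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[nic_spec]))
```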
Follow our Twitter and Facebook feeds for new releases, updates, insightful posts and more.