Contents

Demo lab: 2-Node Hyper-V, Part 1

This post walks through the process of rolling out a 2-node, hyperconverged Hyper-V cluster. I’m going to go over some planning considerations, as well as point out some important differences and requirements that apply to production deployments rather than the humble lab we’ll build.

Lab Overview

Hardware

I’ll be deploying two nodes: hyperv1 and hyperv2. These devices should be as identical as possible, both in hardware and in BIOS and firmware versions. Make sure you download the latest BIOS updates from your manufacturer and get them installed before moving forward. Both nodes will have Windows Server 2019 Datacenter Edition installed in Server Core mode. I always use Server Core on hypervisors to reduce wasted resources (that guests could be using!) as well as to reduce the attack surface.

For this lab, I’ve thrown together some spare parts that should meet our basic needs:

  • 2x Supermicro A1SRi-2758F motherboard with:
    • Intel Atom 8-core C2758 processor
    • 32GB RAM
    • 4x GbE LAN
  • 2x 64GB SSD drive (OS install)
  • 6x SATA storage drive
  • 2x SSD cache drive
  • VLAN-capable switch with at least 4 open ports

I can’t stress enough that this shouldn’t be your go-to build for a production cluster! For a home lab setup, however, this is a nice low-powered build with IPMI access, so you can turn it on when you need it without having to get up from your workstation. I’ll be using two small NAS cases I have handy with 4 hot-swap bays each, so 3x SATA/1x SSD per node. Storage configuration will be simple: a single pool containing all available disks from both nodes, presented as two Cluster Shared Volumes called csv-1 and csv-2.
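
For reference, here’s roughly what that storage layout will look like when we provision it in part 2, assuming Storage Spaces Direct. Treat this as a sketch (the volume sizes are placeholders), not something to run yet:

# Sketch only: these run after the cluster is formed (covered in part 2)
Enable-ClusterStorageSpacesDirect

# Carve two Cluster Shared Volumes out of the automatically created pool
New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName 'csv-1' -FileSystem CSVFS_ReFS -Size 1TB
New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName 'csv-2' -FileSystem CSVFS_ReFS -Size 1TB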

I would strongly recommend a dual-port 10GbE adapter for each node, along with a 10GbE switch if possible. Unfortunately, I don’t currently have 10GbE infrastructure available, so I’ll be using 1GbE for this lab.

Note: For domain services, I’ll be using the domain win.hrkrdr.com I set up previously, which runs on separate infrastructure. If you’re going to run your domain controllers on the same hardware as this lab, you need to ensure those guests aren’t clustered: DNS and authentication services have to be available before the cluster can come online, so a clustered DC creates a chicken-and-egg problem. Therefore, either run your domain controllers on separate hardware or directly on the two Hyper-V nodes you’re provisioning in this guide. My recommendation in that situation is to have one DC running locally on each node.

Networking

Here’s the overview of the networks we’re creating to allow cluster and guest communication:

  • Management - Native VLAN - 172.16.30.0/24. This network is our LAN, where Active Directory services, remote desktop and PowerShell remoting, etc. will happen.
  • Cluster - VLAN 100 - 10.172.172.0/24. This is the heartbeat network the cluster will use to ensure availability.
  • Storage - VLAN 101 - 10.173.173.0/24. Storage network used for SMB traffic and live migration of guests.

In a production environment, I would definitely want to leverage SMB Multichannel; however, I don’t have the infrastructure available in the lab to accommodate it. We’ll be making use of Switch Embedded Teaming (SET) to aggregate the four 1GbE interfaces into two teamed virtual switches, then carve three virtual NICs out of them for management, cluster, and storage traffic. My access-layer switch is already configured with the appropriate VLANs.

System Configuration

To start out, make sure your BIOS settings are configured on both nodes:

  • Default to booting UEFI
  • All necessary virtualization extensions enabled
  • Date/time is correct
  • Disable PXE and set boot options
  • Make sure all your hardware is seen by the BIOS
  • If necessary, run a memtest.
  • Turn off any RAID functions; we’ll be doing software-defined storage here, so we want any HBA adapters to be in JBOD (pass-through) mode.

Once that’s taken care of, install Windows Server 2019 Datacenter on both nodes. Since the setup for both nodes is virtually identical, I’m only going to run through one node in this post. You will need to configure the second node the exact same way (aside from giving it a unique name and IP addresses).
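
Once Windows is up, you can sanity-check a few of those firmware settings from PowerShell. These are optional, read-only checks with built-in cmdlets:

# Confirm UEFI/Secure Boot status (this cmdlet errors on a legacy BIOS boot)
Confirm-SecureBootUEFI

# Confirm virtualization extensions are enabled in firmware (check this before the Hyper-V role is installed)
(Get-CimInstance Win32_Processor).VirtualizationFirmwareEnabled

# Confirm all data disks are visible and eligible for pooling (CanPool should be True for the storage drives)
Get-PhysicalDisk | Format-Table FriendlyName, MediaType, CanPool, Size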

After Windows is installed, you’ll need to make sure your system’s drivers are installed. A quick PowerShell query will give you a list of devices that aren’t configured:

# Return devices not currently configured (unknown)
Get-CimInstance Win32_PNPEntity |where ConfigManagerErrorCode -ne 0 |select Name,DeviceID

Name               DeviceID
----               --------
Coprocessor        PCI\VEN_8086&DEV_1F18&SUBSYS_00008086&REV_02\3&11583659&0&58 
SM Bus Controller  PCI\VEN_8086&DEV_1F3C&SUBSYS_72708086&REV_02\3&11583659&0&FB 
Base System Device PCI\VEN_8086&DEV_1F15&SUBSYS_00008086&REV_02\3&11583659&0&98

# Optional: ID your system so you can download relevant drivers:
Get-CimInstance Win32_ComputerSystem |select Model

Model 
----- 
A1SAi

With that taken care of, use sconfig to set the computer name and fully update both systems. You can also, if desired, enable Remote Desktop for ease of management.
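
If you’d rather script those steps than walk through sconfig’s menus, something along these lines also works (the name is an example, and updates are still easiest through sconfig on Core):

# Rename the node and reboot (use hyperv2 on the second box)
Rename-Computer -NewName 'hyperv1' -Restart

# Optionally enable Remote Desktop and open the firewall for it
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' -Name 'fDenyTSConnections' -Value 0
Enable-NetFirewallRule -DisplayGroup 'Remote Desktop'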


Operating System Configuration

After several reboots for updates, we’re ready to install the features necessary to get our nodes clustered:

$features = @("Hyper-V","Data-Center-Bridging","Failover-Clustering","RSAT-Clustering-PowerShell","Hyper-V-PowerShell","Storage-Replica")

Install-WindowsFeature -Name $features -IncludeManagementTools
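# Note: the Hyper-V role requires a reboot before it's active; add -Restart above or reboot manually before continuing.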

Network

Let’s begin by renaming the network adapters so we can keep them straight:

# Make sure you know which interface is which on the switch
Rename-NetAdapter 'Ethernet' -NewName 'MGMT-0'
Rename-NetAdapter 'Ethernet 1' -NewName 'MGMT-1'
Rename-NetAdapter 'Ethernet 2' -NewName 'STOR-0'
Rename-NetAdapter 'Ethernet 3' -NewName 'STOR-1'
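
If you’re not sure which name maps to which physical port, correlating MAC addresses and link state against your switch is the quickest way to check; this is just a read-only query with standard cmdlets:

# List adapters with MAC addresses and link speed to match against the switch
Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, MacAddress, Status, LinkSpeed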

Then configure Switch Embedded Teaming:

New-VMSwitch -Name 'Mgmt' -NetAdapterName 'MGMT-0','MGMT-1' -EnableEmbeddedTeaming $True -AllowManagementOS $False
New-VMSwitch -Name 'Storage' -NetAdapterName 'STOR-0','STOR-1' -EnableEmbeddedTeaming $True -AllowManagementOS $False

# Add the adapters: 
Add-VMNetworkAdapter -SwitchName 'Mgmt' -ManagementOS -Name 'mgmt-0' 
Add-VMNetworkAdapter -SwitchName 'Mgmt' -ManagementOS -Name 'clus-100' 
Add-VMNetworkAdapter -SwitchName 'Storage' -ManagementOS -Name 'stor-101'

# Configure Vlan membership:
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName clus-100 -Access -VlanId 100
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName stor-101 -Access -VlanId 101

# And finally, verify: 
Get-VMNetworkAdapterVlan -ManagementOS

VMName VMNetworkAdapterName Mode     VlanList
------ -------------------- ----     --------
       mgmt-0               Untagged
       stor-101             Access   101
       clus-100             Access   100
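
You can also confirm the SET teams themselves with Get-VMSwitchTeam, which lists the member adapters and load balancing algorithm for each switch. This is optional, but it’s a quick way to catch a NIC that landed in the wrong team:

# Show SET team membership for each virtual switch
Get-VMSwitchTeam | Format-List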

Advanced Considerations

Once you’ve got your adapters ready, it’s a good time to think about some advanced configuration you may need based on your environment. For the most part, I’ll only briefly touch on the subjects below. Before you build a production environment, make sure you understand what these technologies do and the ramifications of implementing them.

  • Jumbo Frames: There’s a lot of back and forth about whether jumbo frames are a good idea or not. In general I enable them on Hyper-V systems based on feedback from Microsoft.
# Your switch needs to have jumbo frames enabled for this to work correctly. 
Get-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*jumbopacket" |Set-NetAdapterAdvancedProperty -RegistryValue 9014

# Test with ping (the -l payload size excludes the 20-byte IP and 8-byte ICMP headers, so 9000 - 28 = 8972)
ping -f -l 8972 10.173.173.6
  • VMQ/RSS: technologies that spread the CPU load of processing network traffic across cores. Depending on storage load and CPU configuration, you’ll want to ensure that your storage networks are balanced across an appropriate set of processors. For example:
# With a hyperthreaded processor, logical processors come in pairs, so use even numbers to reference physical cores. It's recommended to leave the first core (core 0) alone.
# Define the RSS processor range for this NIC as CPU cores 8 through 14
Set-NetAdapterRss -Name 'StorageNet-1' -BaseProcessorNumber 8 -MaxProcessorNumber 14

# Use Get-NetAdapterRss to see the current configuration, paying attention to the processors defined and NUMA distance (output truncated to the relevant fields):
Get-NetAdapterRss -Name 'StorageNet-1'

MaxProcessors                                   : 8
RssProcessorArray: [Group:Number/NUMA Distance] : 0:0/0  0:1/0  0:2/0  0:3/0  0:4/0  0:5/0  0:6/0  0:7/0
  • RDMA: Remote Direct Memory Access is very important to the performance of Storage Spaces Direct, which uses it via SMB Direct. To use RDMA, your hardware must support it; Windows currently supports InfiniBand, iWARP, and RoCE.
# My lab doesn't have RDMA capable NICs, but if it did: 
Get-NetAdapter 'Storage*' | Enable-NetAdapterRDMA

# And to verify 
Get-NetAdapterRDMA
  • Data Center Bridging: DCB is a set of Ethernet extensions that makes dedicated traffic flows possible in converged networks. DCB tags traffic with a priority value (0-7) called a class of service, and every device in the chain (switches and servers) needs DCB configured consistently for it to work. Importantly, its Priority Flow Control allows lossless transmission of SMB traffic between nodes; without it, your cluster can be negatively impacted by TCP retransmits introducing I/O delays. A verification sketch follows the example below.
# Example of DCB configuration
# Define a policy for SMB Direct
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable flow control 
Enable-NetQosFlowControl -Priority 3

# Turn off flow control for everything else
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Enable DCB/QoS on the storage adapter
Enable-NetAdapterQos -InterfaceAlias 'StorageNet-1'

# Assign 70% bandwidth allocations to SMB via the ETS algorithm
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 70 -Algorithm ETS
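
Whatever subset of the above you implement, it’s worth reading the configuration back to confirm it took. These are standard QoS/DCB cmdlets and this is purely a verification step, not additional configuration:

# Review the resulting QoS/DCB configuration
Get-NetQosPolicy
Get-NetQosFlowControl
Get-NetQosTrafficClass
Get-NetAdapterQos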

Now that we’ve tackled some network tuning, we can get our IP addressing and VLAN associations sorted out:

# Define our adapter settings. Remember this needs to be done on both nodes and they obviously need unique IPs. 
$mgmtIP = @{
	InterfaceAlias = 'vEthernet (MGMT-0)'
	IPAddress = '172.16.30.5'
	DefaultGateway = '172.16.30.1'
	PrefixLength = '24'
}
$clusIP = @{
	InterfaceAlias = 'vEthernet (CLUS-100)'
	IPAddress = '10.172.172.5'
	PrefixLength = '24'
}
$storIP = @{
	InterfaceAlias = 'vEthernet (STOR-101)'
	IPAddress = '10.173.173.5'
	PrefixLength = '24'
}


New-NetIPAddress @mgmtIP | Out-Null
New-NetIPAddress @clusIP | Out-Null
New-NetIPAddress @storIP | Out-Null

# Disable automatic DNS registration on cluster and storage networks
Set-DNSClient -InterfaceAlias 'vEthernet (CLUS-100)' -RegisterThisConnectionsAddress $False
Set-DNSClient -InterfaceAlias 'vEthernet (STOR-101)' -RegisterThisConnectionsAddress $False

# Define DNS resolvers for MGMT
Set-DNSClientServerAddress -InterfaceAlias 'vEthernet (MGMT-0)' -ServerAddresses ('172.16.30.10','172.16.30.11')
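
Before wrapping up, confirm that each network can reach its partner interface on the other node. In my lab the second node takes the .6 address on each subnet (as seen in the jumbo frame test earlier); adjust these targets to whatever you assigned:

# From hyperv1, check connectivity to hyperv2 on each network (node 2 uses .6 here)
Test-NetConnection 172.16.30.6
Test-NetConnection 10.172.172.6
Test-NetConnection 10.173.173.6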

Alright, so far we’ve:

  • Configured our hardware
  • Installed Windows and all updates
  • Defined our networks
  • Applied some network optimization
  • Applied our IP Addresses

In the next post we’ll join our nodes to the domain and provision the hyperconverged failover cluster, as well as finish the configuration of Hyper-V.