We are setting up a new network to migrate away from our old infrastructure and are building ESXi 8 from scratch (previously a mix of 5.5/6/7). We have an MSA 2060 (10Gb iSCSI) as the SAN.

I’m struggling to find any up-to-date guides/best practices for ESXi 8 and the latest MSA; all the guides and similar questions are for older ESXi versions. Anyway, I have a couple of questions around the networking settings in ESXi. As there are multiple paths to the SAN, I want to ensure redundancy and maximise throughput.

Host / MSA example [diagram]

1st question… 1 vSwitch or 2 for the iSCSI network?

1 vSwitch [diagram]

or

2 vSwitches [diagram]

or does it not matter?

2nd question… should all NICs be active, or some as standby? Does the load-balancing method matter (currently route based on originating virtual port)?

I’m going to be setting round robin on the datastore multipathing policy.
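For reference, I was thinking of something along these lines per volume from the ESXi shell once the datastores are presented (the naa ID below is just a placeholder, not a real device):

# Check which SATP/PSP currently claims the MSA volume (placeholder naa ID)
esxcli storage nmp device list --device naa.600c0ff000000000000000000000000

# Switch that volume's path selection policy to Round Robin
esxcli storage nmp device set --device naa.600c0ff000000000000000000000000 --psp VMW_PSP_RR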

Also, if it makes any difference, the MSA is set up as a single pool, which I believe HPE recommends, so that one controller is doing all the work and the other is just there for redundancy. It's a small network: 3 ESXi hosts, 10-20 VMs, 20-40 users.

6 Spice ups

I am not familiar with how the MSA is supposed to be set up, but from your diagram there is no connectivity between switch A and switch B, so that would have to be two separate vSwitches.

Do not do any standby NICs.
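If it helps, the two-vSwitch layout is roughly this from the ESXi shell, done once per host for the A side and repeated for the B side. vSwitch1, vmnic2, vmk1, iSCSI-A and the 10.10.10.x addressing are placeholders, not your actual names or subnets:

# Dedicated vSwitch for iSCSI network A with a single physical uplink
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2

# Port group and VMkernel port for iSCSI-A with a static IP on the A subnet
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=10.10.10.11 --netmask=255.255.255.0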

I am not really aware of any differences in how to do networking for iSCSI between 6.5/6.7/7/8, which would explain why you can't find any updated guides.

You can always call HPE support. I find their storage team extremely helpful and fast to start working with a technician.

3 Spice ups

There’s so little difference between versions 5-8 that the guides didn't need to be rewritten, so find one and it'll be enough to get you through setup.

Use all NICs in active mode if you have them hooked up… you can dedicate one port on each box to specific tasks if you wish, or even to different virtual networks, even down to specific VMs… but use them all!

1 Spice up

Thanks for the pointers, both. Sounds like it's not too different from how I've previously done it. Regarding the load-balancing method, got any preference for one over the others? If not I'll raise a ticket with HPE to see what their techs recommend and look back at the older guides.

1 Spice up

Generally you would have an idea of how things will look before the kit is purchased and ready for use; it likely isn't going to be too dissimilar to your old setup.

You want 2 vSwitches, each with a dedicated VMkernel port.

Do not use any teaming on iSCSI connections.

For load balancing use Round Robin (the VMware Path Selection Policy VMW_PSP_RR).
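If you want new volumes from the array to pick Round Robin up automatically rather than changing each datastore by hand, you can add a claim rule before presenting storage. A sketch only: the vendor string below is an assumption, so confirm the actual vendor/model with esxcli storage core device list first, and check HPE's current MSA best-practice doc before touching the IOPS limit:

# Default new ALUA-claimed volumes from this vendor to Round Robin (placeholder vendor string)
esxcli storage nmp satp rule add --satp VMW_SATP_ALUA --psp VMW_PSP_RR --vendor "HPE" --claim-option tpgs_on --description "MSA Round Robin"

# Optional: some vendors also suggest lowering the Round Robin IOPS limit per device (placeholder naa ID)
esxcli storage nmp psp roundrobin deviceconfig set --device naa.600c0ff000000000000000000000000 --type iops --iops 1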

4 Spice ups

That’s going to be very subjective to your setup and needs. We've got a low enough load that, currently, two ESXi hosts with four Ethernet ports each handle our VM load pretty well.

Is the MSA active on both controllers simultaneously? If so, then each ESXi host needs two IPs/VMkernel ports, one in network A and one in network B, to optimise throughput.

Change the load-balancing method from route based on originating virtual port to round robin or a hash of IP/port/MAC.

Based on the 1-vSwitch diagram this will not work: the single vSwitch could send traffic for VLAN B out of a NIC connected to switch/VLAN A.
All external NICs in a vSwitch need to be capable of delivering all possible traffic (same VLANs).
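If the single-vSwitch layout were kept, each iSCSI port group would need a teaming override so it only ever uses the uplink sitting on its own switch/VLAN (iSCSI port binding wants exactly one active uplink per VMkernel port anyway); with two vSwitches you get that separation for free. A rough sketch, with placeholder port group and vmnic names:

# Pin each iSCSI port group to the single uplink on its own switch/VLAN
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-B --active-uplinks=vmnic3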

1 Spice up

I believe that in a single-pool config on the MSA, the controllers operate in an active/standby mode? So controller A will be the main controller unless there's a failure? I haven't been able to work out whether I need to prioritise iSCSI links to controller A, or whether there's no real penalty for traffic being distributed across both controllers.

I’ve currently got 2 VMkernel ports (1 for iSCSI network A and 1 for iSCSI network B).
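Once each of those VMkernel ports has a single active uplink, the plan is to bind them to the software iSCSI adapter and then see how the paths report, roughly like this (vmhba65, vmk1/vmk2 and the naa ID are placeholders for whatever the host actually shows):

# Bind both iSCSI VMkernel ports to the software iSCSI adapter and confirm the bindings
esxcli iscsi networkportal add --adapter vmhba65 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba65 --nic vmk2
esxcli iscsi networkportal list --adapter vmhba65

# After a rescan, list the paths per volume to see which are reported as active
# vs. active unoptimized across the two controllers
esxcli storage nmp path list --device naa.600c0ff000000000000000000000000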

1 Spice up

1 vSwitch with 4 Active NICs

This is what the dual controllers are for. BTW, I think your diagram may have the wrong IPs, as each controller has 2 ports and thus 2 IPs.

The idea is that when one controller fails (or undergoes a firmware update), it "clones" its settings to the other controller and then reboots. After the firmware update (or replacement) completes, the 2nd controller clones the settings back to the 1st controller and reboots.
So in a real-world scenario you may see something like 3 to 16 dropped pings (at most) when the controllers do a failover "switch", which may not really affect most subsystems…

1 Spice up

The controllers have 4 iSCSI ports each, but I'm only using 2 on each, hence the 4 IPs in total (2 different networks split across the controllers).

1 Spice up

The 4 NICs per controller are for redundancy, such that each controller has 2 cables going to each of the 2 switches (in case of a cable or switch-port failure).
Using 2 switches is for redundancy in case a switch goes down.
Dual controllers are then redundancy in case one controller goes down.
"Going down" could mean failures, reboots (for firmware updates), scheduled maintenance, etc.

So each port on the controller needs its own IP address, which is "cloned" to the 2nd controller.
In certain SANs, even the NIC ports' MAC addresses are masked or manufactured to be identical across the 2 controllers (for cases with sensitive hardware requirements).

1 Spice up