I’ve just had a very strange chat with Dell, who said that a document on VMware’s site called “iSCSI best practice” is no longer best practice and conflicts with a KB article.

In short, we have some ESXi servers that have two dedicated 1 Gbit iSCSI NICs, each on a different vSwitch, using different IP ranges and connecting to different physical switches.

iscsi.PNG

According to Dell, this setup conflicts with a VMware Knowledge Base article.

However, this exact layout is listed in section 6.2, figure 8 of http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/whitepaper/iscsi_design_deploy-whitepaper.pdf

so I’m a little confused now!

@Dell_Technologies

@rod-it @johnnicholson @jeffnewman @darren-for-vmware

7 Spice ups

So, after some serious digging, I found this post: VMware vSphere ISCSI Port Bindings – You’re Probably Doing it Wrong | virtual.mvp

It states: “ISCSI Port Binding is ONLY used when you have multiple VMKernels on the SAME subnet.”

We have multiple VMkernels for iSCSI on different subnets, so the KB article doesn’t apply.
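To sanity-check that rule, here’s a tiny, purely illustrative sketch (the addresses are made up, not our real ranges) using Python’s `ipaddress` module to decide whether two VMkernel ports actually share a subnet — the deciding factor for whether port binding applies:

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str) -> bool:
    """True if both VMkernel interfaces (given in CIDR form) sit on the same network."""
    net_a = ipaddress.ip_interface(ip_a).network
    net_b = ipaddress.ip_interface(ip_b).network
    return net_a == net_b

# Our layout: two dedicated iSCSI vmks on different ranges (example addresses)
print(same_subnet("10.10.1.11/24", "10.10.2.11/24"))  # False -> port binding not applicable
# Both paths on one subnet instead
print(same_subnet("10.10.1.11/24", "10.10.1.12/24"))  # True  -> port binding required
```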

I’m very happy to have someone correct me if I’m wrong, because the VMware article is a bit hard to get through!

4 Spice ups

Oh god. My head hurts.

Here is what Dell specifically configured for us with installation support. Granted, this was a few years ago. I should say that vmnic8 and vmnic9 are physically on separate failover switches.

SS-20160914084352.png

2 Spice ups

That makes sense: you have port binding with multiple iSCSI VMkernels on the same subnet, so the KB applies to you. We’ve got two separate iSCSI networks.

@glomo

3 Spice ups

I’m curious what started the conversation with Dell. Were you having some kind of performance issue that prompted the chat with them?

1 Spice up

A couple of weeks ago, half our ESXi estate had a massive CPU ready time spike → Tracing a massive CPU ready time spike

Dell have been looking into the cause, and one thing they flagged was this supposed iSCSI misconfiguration. Our older ESXi hosts have a mix of FC and iSCSI. I’m in the process of getting approval to remove iSCSI, as it’s not required anymore; since those hosts use FC, I can’t see iSCSI being a factor at all.

It’s all a bit of a long story.

@networknerd

1 Spice up

I am using port binding as prescribed by VMware with both paths on the same subnet (and going into the same physical switch). I have only a single NIC for each path with no teaming (and no standby adapter defined). Connections are 10Gb.

This is the setup for the first path/port group, which uses vmnic6. The second path is set up identically and uses vmnic7.

This is how I was told to do it WHEN BOTH PATHS ARE ON THE SAME SUBNET, BUT NOT OTHERWISE.
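To make that same-subnet distinction concrete — with hypothetical vmk names and addresses, not anyone’s actual config — you can group all iSCSI VMkernel ports by network and flag only the networks carrying more than one port as candidates for binding:

```python
import ipaddress
from collections import defaultdict

def binding_candidates(vmks: dict) -> dict:
    """Group iSCSI VMkernel ports (name -> CIDR address) by network.
    Only networks carrying 2+ ports call for iSCSI port binding."""
    by_net = defaultdict(list)
    for name, cidr in vmks.items():
        by_net[str(ipaddress.ip_interface(cidr).network)].append(name)
    return {net: names for net, names in by_net.items() if len(names) > 1}

# Same-subnet setup (like this one): both paths land in one group -> bind both
print(binding_candidates({"vmk1": "192.168.50.11/24", "vmk2": "192.168.50.12/24"}))
# Two-subnet setup (like the OP's): no network has 2 ports -> nothing to bind
print(binding_candidates({"vmk1": "192.168.50.11/24", "vmk2": "192.168.60.11/24"}))
```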

(This is home setup. Office is all fibre channel SAN.)

If the way this is done has changed, I’d like to know.

1 Spice up

Thanks @jeffnewman. This is EXACTLY my understanding as well.

At this point, I think Dell are misunderstanding our iSCSI setup.

1 Spice up

I hope it’s just a misunderstanding. I haven’t heard of anything related to this changing. When I originally configured port binding, I was already on vSphere 6, so I think it’s current.

2 Spice ups

I see you originally tagged Darren from VMware. I was wondering last night what he’s been up to, as I haven’t seen him here in ages. On Sunday nights, he used to make the rounds and comment on the last few days’ posts.

I hope he’s all right.

@darren-for-vmware

4 Spice ups

I’m going to be removing iSCSI from these boxes anyway; they use FC, so iSCSI isn’t required and isn’t even being used.

Same thoughts here.

@VMware

1 Spice up

Just to add: I’ve never used static port bindings, always dynamic. Dell (years ago) never updated their best practices for this, although VMware did. A long time ago, some of my hosts had static port bindings and took around 40 minutes to boot; after changing to dynamic, boot time was down to less than 5 minutes. Finger pointing…

2 Spice ups