I was hoping to get some opinions on what would be the most realistic setup in the following situation.
We recently ordered some quad-port network cards (Intel I350-T4) for our virtual hosts (HP DL360 G7s running ESXi 5.0 U1). Our hosts connect to our SAN (HP P2000) via iSCSI.
Now my question is: how many ports do you think I should use for IP traffic, and how many for iSCSI? There are 8 ports total on each host now: 4 onboard and 4 from the new NICs.
Any opinions would be greatly appreciated.
keith22, June 14, 2012, 6:45am (#2)
We have 10 Ports and this is how we were configured by some VMware Support Agents according to best practices.
AlaricW, June 14, 2012 (#3)
MHCKeith wrote:
We have 10 Ports and this is how we were configured by some VMware Support Agents according to best practices.
The funny thing is that your configuration is no longer supported by VMware as of ESXi 5.
Here's what we use, as per VMware's current best practices:
2 separate vSwitches (1 VMkernel each, with 1 iSCSI NIC each; you can no longer have more than one NIC per iSCSI VMkernel, or else you run the risk of crashing your SAN with too many open connections).
1 vSwitch with at least 2 NICs for normal data
1 vSwitch with 1 VMkernel for vMotion and at least 2 NICs
1 vSwitch with 2 NICs for your DMZ network.
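A rough sketch of what one of those single-NIC iSCSI vSwitches looks like from the ESXi 5.x shell; the vSwitch, port group, vmnic/vmk numbers, IP addressing, and software iSCSI adapter name (vmhba33) are all placeholders that vary per host:

```
# Dedicated iSCSI vSwitch with exactly one uplink (placeholder names throughout)
esxcli network vswitch standard add --vswitch-name=vSwitch-iSCSI-A
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-iSCSI-A --uplink-name=vmnic4

# One port group and one VMkernel interface for iSCSI traffic
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-iSCSI-A --portgroup-name=iSCSI-A
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static

# Bind the VMkernel port to the software iSCSI adapter (adapter name varies, e.g. vmhba33)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
```

Repeat the same block with a second vSwitch, uplink, and VMkernel port for the other iSCSI path.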
keith22, June 14, 2012, 7:35am (#4)
We are set up the same way. While the iSCSI VMkernel ports are on the same vSwitch, each one is bound 1-to-1 to an adapter, and the iSCSI initiator is then bound to each of those VMkernel adapters.
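For the single-vSwitch layout, that 1-to-1 mapping is normally done by overriding the teaming order on each iSCSI port group so only one uplink is active, then binding the VMkernel ports to the initiator. A minimal sketch with placeholder port group, vmnic, vmk, and adapter names (you can confirm the result with the matching policy failover get command):

```
# Pin each iSCSI port group to a single active uplink (placeholder names)
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-2 --active-uplinks=vmnic3

# Bind both VMkernel ports to the software iSCSI adapter (adapter name varies) and verify
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal list --adapter=vmhba33
```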
keith22, June 14, 2012, 7:36am (#5)
Man I wish we could attach more than one item to each post.
keith22, June 14, 2012, 7:38am (#6)
I guess I should also mention that we are running ESXi 5, and this was set up by a VMware engineer who knew we were running ESXi 5.
cameronlemaster3558, June 14, 2012, 8:03am (#7)
What I was advised to do by a friend who also uses VMware is to just create two vSwitches:
vSwitch1: VMkernel ports for iSCSI and vMotion
vSwitch2: management and data
with each switch having 4 ports for load balancing.
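For reference, the teaming side of that two-switch suggestion would look roughly like the sketch below (vSwitch and vmnic names are placeholders); note that the later replies in this thread argue against putting multiple teamed NICs behind iSCSI VMkernels.

```
# Four uplinks on one standard vSwitch, load-balanced by originating virtual port ID (placeholders)
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --active-uplinks=vmnic0,vmnic1,vmnic2,vmnic3 --load-balancing=portid
```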
alaricwhitney (AlaricW), June 14, 2012, 8:05am (#8)
Yes, I assumed you had a single NIC bound to each VMkernel. We used to be set up that way (as that used to be the best practice), but both Dell and VMware storage support pointed us away from that and toward the newly updated best-practices documentation.
With ESXi 5, you need to separate out each of your iSCSI VMkernels onto its own vSwitch. ESXi 5 has a bunch of issues/bugs when you have 2 iSCSI VMkernels in the same vSwitch (VMware still tries to use the unbound NIC, causing all sorts of trouble, which we unfortunately ran into as well).
VMware has released several patches trying to resolve these issues; however, after a little googling you'll find that people are still running into them with the old configuration. After we separated the VMkernels onto individual vSwitches, all of our problems went away.
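If you want to confirm how the iSCSI VMkernels are actually laid out before and after splitting them, a few read-only checks (the software iSCSI adapter name, e.g. vmhba33, is a placeholder):

```
# Each vSwitch with its uplinks and port groups
esxcli network vswitch standard list

# Every VMkernel interface and the port group it lives on
esxcli network ip interface list

# Which VMkernel ports are bound to the software iSCSI adapter
esxcli iscsi networkportal list --adapter=vmhba33
```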
keith22, June 14, 2012, 8:15am (#9)
That is good to know. I will have to look at changing that in our next scheduled down time. Thanks!
redovenbird, June 14, 2012, 8:15am (#10)
I would use ports 3 and 4 from NIC 1 and ports 1 and 2 from NIC 2 for iSCSI.
That way you have redundancy: if one of the cards fails, you won't lose storage communication.
Use the other ports as your application needs dictate.
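To see which vmnic lives on which physical card (onboard ports vs. the add-in quad-port), the PCI address and description in the standard NIC listing are usually enough; a quick check from the ESXi shell:

```
# Lists vmnic0..vmnicN with driver, link state, speed, PCI address, and description;
# ports on the same adapter show adjacent PCI addresses, so you can split iSCSI across two cards.
esxcli network nic list
```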
alaricwhitney (AlaricW), June 14, 2012, 8:30am (#11)
The awesome part is that as long as your path redundancy is working, you can make that change during business hours. Simply leave one of your VMkernels and its bound NIC on the existing vSwitch while you move the other. I did a test case at one of our sites and then converted the rest of our sites within an hour.
Of course, I was also under pressure at the time, as our SANs were timing out from too many open connections due to the related ESXi 5 bugs.
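Before attempting that kind of live change, it is worth confirming that every datastore LUN really does have more than one working path; two read-only checks (the device ID is a placeholder):

```
# All paths for one device -- you want to see more than one path in an "active" state
esxcli storage core path list --device=<naa_id_of_a_datastore_LUN>

# Per-device summary from the NMP view, including the list of working paths
esxcli storage nmp device list --device=<naa_id_of_a_datastore_LUN>
```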
keith22, June 14, 2012, 8:48am (#12)
That is good to know, but we just got done with some long downtime: the last admin had all 10 ports in a single vSwitch, with vMotion, iSCSI, and management all configured on one VMkernel and our production network sharing the same vSwitch.
We crashed hard, and it basically required redoing everything from scratch. We were down from 9am to 3pm while VMware rebuilt our network in ESX. I learned a lot that day and am continuing to learn as we go, since there is no budget for any training.
StorageNinja, June 14, 2012, 2:34pm (#13)
AlaricW wrote:
2 separate vSwitches (1 VMkernel each, with 1 iSCSI NIC each; you can no longer have more than one NIC per iSCSI VMkernel, or else you run the risk of crashing your SAN with too many open connections).
My symmetric SAN laughs at puny path limits! (See the attached picture of a ridiculous 80-paths-per-host configuration with only 10 LUNs.)
You still separate the VMkernels onto separate vSwitches, but outside of it causing issues with EqualLogics, I wasn't aware of any other SANs blowing up from too many connections. The real reason, in my mind, is that it fully isolates you from faults: if one VMkernel gets a bad SCSI command and goes crazy, or one switch fabric blows up from a bad spanning-tree setting, your entire environment will not come crashing down.
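If you are curious how many paths a host actually has, a rough way to eyeball it from the ESXi shell (field names assumed from the standard esxcli output):

```
# Count paths currently reported in an active state across the whole host
esxcli storage core path list | grep -c "State: active"

# One "Working Paths" line per device, for a quick per-LUN path count
esxcli storage nmp device list | grep -i "working paths"
```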
johnwhite (John White), June 15, 2012, 4:56pm (#14)
MHCKeith wrote:
We have 10 Ports and this is how we were configured by some VMware Support Agents according to best practices.
I think the reason you don't want to do this is to get full redundancy in the paths, to the point that even a vSwitch failure can't bring down your storage.
jorgedelgados8632, June 27, 2012, 1:47pm (#15)
Hi AlaricW, do you have a link to a document from Dell/VMware on the change to separate vSwitches for iSCSI?
StorageNinja, June 27, 2012, 3:54pm (#16)
Jorge.Delgado@CR wrote:
Hi AlaricW, do you have a link to a document from Dell/VMware on the change to separate vSwitches for iSCSI?
This has been best practice for quite some time. A separate VMkernel and vSwitch for each iSCSI connection has been the standard for as long as I've been doing iSCSI (since 4.0).