Hi guys,

I’m very familiar with iSCSI, SAS & Fibre Channel SAN connectivity, but I’ve just had a question asked of me and I want to confirm before I start planning anything in production.

One of our clients has a simple setup: they have two ESXi 5.5 hosts with 6 NICs. We have a separate Juniper EX-2200 switch we’re using for iSCSI, and the SAN is an iSCSI (1Gb) connected IBM DS3300 (yes, I know this is a bit long in the tooth, but it’s what we have to work with as of now).

What my client now wants to do is connect another iSCSI SAN to this setup. Here is my thought process around the different components, and whether I’m going to run into any hassles at any stage…

  1. I have plenty of switch ports on my dedicated iSCSI switch, so no issues there

  2. I’d like to keep using only two dedicated NICs for iSCSI on each server; at what point will these two NICs get saturated? I guess this question comes down to how much load I’m putting through. Let’s say on SAN 1 I have around 12 production VMs running across both hosts: 6 DCs, a couple of workstations used as demo boxes, a vCenter Server, a backup server and a SAN management server. The other SAN has a file server sitting on it with approx. 5,000 users behind it, putting it under pretty much constant load all day.

  3. The new SAN is simply another iSCSI target, so should I be looking at some new dedicated physical NICs to use, or are the two NICs I’m already using for iSCSI going to be fine? At what point will 1Gb NICs get saturated? (Rough back-of-envelope maths below.)
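
Here’s the back-of-envelope maths I’ve been doing so far, as a quick Python sketch. The workload figures are my own guesses rather than anything measured, so treat the numbers as placeholders:

```python
# Rough back-of-envelope: how much sustained load two 1Gb iSCSI NICs per host
# can realistically carry. All workload figures are guesses for illustration.

NIC_SPEED_GBPS = 1.0        # per-NIC line rate
NICS_PER_HOST = 2           # dedicated iSCSI NICs per host
PROTOCOL_OVERHEAD = 0.10    # ~10% lost to TCP/IP + iSCSI framing (rule of thumb)

# Usable throughput per host in MB/s (1 Gbit/s ~ 125 MB/s raw).
# Assumes round-robin multipathing actually spreads I/O across both NICs;
# a single active path only ever sees one NIC's worth.
usable_mb_s = NIC_SPEED_GBPS * 125 * NICS_PER_HOST * (1 - PROTOCOL_OVERHEAD)

# Guessed steady-state load per host (MB/s) -- to be replaced with real numbers
guessed_load = {
    "12 light production VMs (DCs, vCenter, demo boxes)": 20,
    "file server with ~5000 users": 80,
    "backup window bursts": 100,
}

total = sum(guessed_load.values())
print(f"Usable iSCSI bandwidth per host: {usable_mb_s:.0f} MB/s")
print(f"Guessed peak load:               {total} MB/s "
      f"({total / usable_mb_s:.0%} of capacity)")
```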

I know without actual data it’s difficult to say, but can anyone put their finger in the air and give me some recommendations, and maybe some info on anything I haven’t thought of?

Cheers

3 Spice ups
  1. Single switch? That’s a single point of failure.

  2. That all depends on how much data you pump down it. The only way to tell is to monitor it and alarm if it gets over 80% or so for any length of time (rough sketch of the maths after this list).

  3. How busy are the current iSCSI NICs? How busy is the switch backplane?
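
For point 2, once you’re pulling the port counters (SNMP ifHCInOctets, or whatever your monitoring tool exposes) the maths is trivial. A minimal Python sketch, purely illustrative:

```python
# Minimal sketch: turn two SNMP counter readings (ifHCInOctets) for a switch
# port into a utilisation figure and alarm above 80%. How you poll the switch
# (SNMP, a monitoring tool's API, etc.) is up to you; this only does the maths.

ALARM_THRESHOLD = 0.80  # alarm if sustained utilisation goes over 80%

def port_utilisation(octets_t0, octets_t1, interval_s, link_speed_bps=1_000_000_000):
    """Utilisation between two counter samples taken interval_s seconds apart."""
    bits_transferred = (octets_t1 - octets_t0) * 8
    return bits_transferred / (interval_s * link_speed_bps)

# Example: 32 GB received on a 1Gb port over a 5-minute polling interval
util = port_utilisation(octets_t0=0, octets_t1=32_000_000_000, interval_s=300)
print(f"Port utilisation: {util:.0%}")
if util > ALARM_THRESHOLD:
    print("ALARM: sustained utilisation above 80% -- time for more/faster NICs")
```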

2 Spice ups

Hiya Gary,

  1. Yes true, a good point we’ve been trying to bring up with the client. Hopefully this will get included in their budget.

  2. What would be some good metrics to monitor here, and how would you recommend monitoring it?

  3. How would you recommend monitoring both of these?

Sharing NICs for iSCSI is fine as long as the bandwidth is enough for all of the usage… which is true even if you aren’t sharing them.

1 Spice up

I use Observium as it can monitor all ports on the switches. This is an example of a full backup of my lab taken on Sunday.

2 Spice ups

That EX is actually pretty good on PPS and buffers. That DS3300 is pretty damn old.

ESXi has no problems talking to multiple arrays.

There are a couple of ways to throttle things…

  1. In the network (use CoS tags).

  2. In ESXi (6.5 with SIOC v2 can do QoS; it’s not throughput-based, but it does give you some control; rough sketch after this list).
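
For option 2, here’s a rough idea of what a per-disk IOPS cap looks like through the API. This is a hypothetical pyVmomi sketch (the vCenter address, credentials and VM name are placeholders, and it’s a per-VMDK limit rather than the SPBM-policy route), so treat it as a starting point only:

```python
# Hypothetical sketch: cap IOPS on a VM's first disk via pyVmomi as one way
# to throttle storage traffic from the vSphere side. Hostname, credentials
# and the VM name are placeholders, not anything from this thread.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only; use real certs in prod
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

# Find the (hypothetical) file server VM
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "fileserver01")
view.Destroy()

# Grab its first virtual disk and set an IOPS limit on it
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))
disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(limit=1000)

spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=disk)
])
vm.ReconfigVM_Task(spec=spec)                      # apply the 1000 IOPS cap
Disconnect(si)
```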

When using new NICs, make sure to use iSCSI port binding if they’re on the same L2 network. If you’re using separate subnets, don’t worry about it, and if you’re routing iSCSI (don’t with those switches) you’ll need 6.5 to do port binding (now supported!).
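
To make the same-subnet-or-not decision concrete, here’s a tiny illustrative check using nothing but the Python standard library (all addresses are made up):

```python
# Illustrative only: check whether each iSCSI VMkernel port is in the same
# subnet as the new SAN's portal, to decide if port binding applies.
# All addresses below are made up.
from ipaddress import ip_address, ip_interface

vmk_ports = {
    "vmk1": ip_interface("10.10.10.11/24"),
    "vmk2": ip_interface("10.10.20.11/24"),
}
target = ip_address("10.10.10.50")   # new SAN's iSCSI portal (placeholder)

for name, vmk in vmk_ports.items():
    if target in vmk.network:
        print(f"{name} ({vmk.ip}): same subnet as target -> use port binding")
    else:
        print(f"{name} ({vmk.ip}): different subnet -> skip port binding "
              "(routed iSCSI needs 6.5+, as above)")
```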

10Gbps networking is cheap, and my teammates run it in their home labs, so I’d argue people with that many users should be seriously considering it, if for no other reason than operational simplicity.

2 Spice ups

Well, 10GbE is relatively cheap. :slight_smile: Isn’t the dirt-cheap switching still $100/port?

I agree with StorageNinja, though. If you’re investing at all in modern networking, 10GbE is par for the course.

1 Spice up

Think about pitching them on two new hosts with 2-node vSAN. Your project would expand to the modernization of their vSphere, compute, and storage. Their older hosts could stick around as DR members or backup targets, depending on how old they are.

1 Spice up

If you do QoS and VLANs, there’s no issue at all.

1 Spice up