Hi Guys,
I have a brand-new array in a Synology box (RS2414+), and want some advice regarding the ‘best’ way to achieve my goals.
What I need to know is: “Should I use a single iSCSI target with 4 LUNs, or should I use four iSCSI targets with one LUN each?”
What I want to do with the new array is put four iSCSI ‘shares’ on there so that four stand-alone Hyper-V hosts (no clustering etc.) can each get an additional hard-disk volume. On the new volume I want to store VHDs containing almost-static data which is rarely accessed, so any speed penalty is insignificant.
5 Spice ups
If it’s FOUR Hyper-V hosts and they are NOT clustered, you’ll need FOUR LUNs.
You should have TWO iSCSI targets on the SAN, on different network ports going through dedicated switches, for MPIO.
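On the Windows side, MPIO has to be installed and told to claim iSCSI disks before multipathing does anything. A minimal sketch (the feature install needs a reboot on Server 2012):

```powershell
# Install the MPIO feature (reboot required before it takes effect)
Install-WindowsFeature -Name Multipath-IO

# Have the Microsoft DSM automatically claim iSCSI-attached disks for MPIO
Enable-MSDSMAutomaticClaim -BusType iSCSI
```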
Can I ask why iSCSI and not SMB3?
1 Spice up
iSCSI seems like a lot of complexity for no gain; I agree with Gary, use SMB3.
2 Spice ups
Nothing wrong with using iSCSI over SMB3. I prefer one target per host. If a host requires multiple disks, create a matching number of LUNs on its single target. In your case, create 4 targets, map one to each host, and create one LUN on each target. If a host requires two disks, create two LUNs on that specific target.
This makes each disk easy to identify, and it also helps image/VM backup software like Veeam or Vembu include the important disks in backup/restore jobs.
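Host-side, the one-target-per-host mapping is just a portal registration and a persistent login. A rough sketch, assuming a hypothetical Synology IQN and portal address:

```powershell
# Register the NAS's iSCSI portal (placeholder address)
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.50

# List the targets the portal exposes, then log in to the one meant for this host
Get-IscsiTarget

# -IsPersistent reconnects the target automatically after a reboot
Connect-IscsiTarget -NodeAddress 'iqn.2000-01.com.synology:rs2414.host1' -IsPersistent $true
```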
1 Spice up
True, except that in this case the OP has a device that is better used as a NAS, and most likely doesn’t have a SAN infrastructure to run iSCSI as best practice. Hyper-V works well with SMB3, so why not go that route and simplify?
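For comparison, the SMB3 route is just a UNC path once the share and permissions exist. A sketch, with made-up share and VM names:

```powershell
# Use the share as this host's default VHD location (placeholder path)
Set-VMHost -VirtualHardDiskPath '\\synology\vmstore'

# Create a dynamically expanding VHDX directly on the share and attach it
New-VHD -Path '\\synology\vmstore\static-data.vhdx' -SizeBytes 150GB -Dynamic
Add-VMHardDiskDrive -VMName 'VM1' -Path '\\synology\vmstore\static-data.vhdx'
```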
1 Spice up
Veeam is not going to care where the data is, as long as it can access it.
2 Spice ups
iSCSI and FC should have similar path designs.
In your scenario:
iSCSI initiators are your hosts
iSCSI targets are your SANs

Each SAN will need a minimum of 1 iSCSI target per fabric it connects to.
Each iSCSI target should reside in a different subnet.
Each fabric should be separate.
Once you have your redundant paths from your hosts to your SAN, you then provision your LUNs. As you are not clustering and all hosts are standalone, you will need 1 or more LUNs per host (depending on your requirements). On a side note, I try to keep LUNs to 10TB or smaller.
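Translated to the initiator side, the two-fabric design looks roughly like this (all IQNs, addresses, and subnets are placeholders):

```powershell
$iqn = 'iqn.2000-01.com.synology:rs2414.host1'   # placeholder target IQN

# One portal per fabric, each living in its own subnet
New-IscsiTargetPortal -TargetPortalAddress 10.10.1.50   # fabric A
New-IscsiTargetPortal -TargetPortalAddress 10.10.2.50   # fabric B

# Log in once per path; MPIO aggregates the two sessions into one disk
Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress 10.10.1.50 `
    -InitiatorPortalAddress 10.10.1.11 -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress 10.10.2.50 `
    -InitiatorPortalAddress 10.10.2.11 -IsMultipathEnabled $true -IsPersistent $true
```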

The guys rightly mention SMB3; I would go and do some reading about it before making a decision.
It does support “Multichannel”, but you should still have a fabric design as above.
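If you do try SMB3, it is easy to check whether Multichannel is actually spreading traffic across the interfaces you expect:

```powershell
# Interfaces the SMB client considers usable (speed, RSS/RDMA capability)
Get-SmbClientNetworkInterface

# Active Multichannel connections per server, one row per path in use
Get-SmbMultichannelConnection
```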
2 Spice ups
Because of the hardware he’s using?
In part yes, and in part because of his experience level.
1 Spice up
Thanks for the feedback, guys.
Some good points were raised.
With simple (unclustered) servers, yes, I went for four LUNs, each with a separate target. I was a little taken aback by how easy it was to set up (I did include iSCSI authentication to prevent unwanted access), and it appears to work well: Veeam 9 can back up and restore VHDX ‘drives’ stored on an iSCSI ‘disk’. However, Veeam cannot successfully ‘rescan’ the servers. I have this issue open with Veeam right now; it is to do with the identifier for the SCSI disks being longer than they had allowed for. From what I understand, this will not impact backup/restore. Speed on backup/restore does not seem to be affected by the iSCSI location, despite the additional network traffic, and the restored VHDX (150GB) appears to be perfectly readable.
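For anyone following along, the authentication piece was one-way CHAP on the target login. A sketch with placeholder credentials (the Microsoft initiator wants a 12-to-16-character secret):

```powershell
# Log in with one-way CHAP; username/secret must match what is set on the Synology target
Connect-IscsiTarget -NodeAddress 'iqn.2000-01.com.synology:rs2414.host1' `
    -AuthenticationType ONEWAYCHAP -ChapUsername 'host1' -ChapSecret 'ChangeMe12345678' `
    -IsPersistent $true
```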
I did look at SMB3 (the NAS box setting), but from what I can see the NAS box needs to be ‘adopted’ into the AD, and that, for other reasons, is something I did not want to do.
As an aside, I also did a bit of replumbing of the network while I had the opportunity. The NAS box now has a ‘bonded’ (Synology) or ‘LAG’ (HPE) pair of 1Gb ports, and the Hyper-V hosts (Windows Server 2012) have their management and guest networks on separate NICs. Yes, I can hear people saying “about time too” (and I totally agree). All servers and the NAS box connect to a common HPE 1920G switch; users connect to satellite switches. All copper (and some wireless).
So, many thanks for your input, guys. Maybe this is not the optimal setup, but it is working reliably and at a speed the users do not notice (and that counts for a bit in my book). I need to keep things as vanilla as possible! Oh, and this is my first foray into iSCSI, and the budget etc. precludes a proper SAN!
1 Spice up