Hey Everyone,

We currently have a server setup hosting our internal business virtual servers that is EOL and needs a hardware refresh and updated software. Our current system is a Dell VRTX with two M630 blade servers. The two blades are our Windows Server Hyper-V hosts, and each has its own two drives for local storage. The VRTX has shared attached drives for all the VM storage. The two hosts act as failovers for each other and will move all VMs to the other host if one goes down. The VMs are an internal SMB file share (almost 10TB), VPN, domain controllers, a print server, a SQL server, and a terminal server for RDP, so a mix of workloads to support.

I've been looking into doing a similar setup; unfortunately, the Dell options for a scalable combined unit like this seem to have been discontinued. I figured two physical servers, both connected to a JBOD that acts as the storage for all the VMs, was the next best option.

But I figured I'd ask: is this the best way to go about it? Are there other options I should look into, or better methods of doing something similar? I'd also like to add a 10G connection for users working off the network share. I wouldn't mind moving more hybrid and/or to the cloud, but I've looked into cloud hosting and the preliminary quotes I've been getting are over $20k a year for everything. Two to three years of that would pay for a whole new system and licensing for the next decade.
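
For context, here is the back-of-napkin math I'm working from (a quick PowerShell sketch; the on-prem figure is just a placeholder, not a real quote):

  # Rough break-even: recurring cloud quote vs. a one-time refresh
  $cloudPerYear  = 20000   # preliminary cloud quotes
  $onPremRefresh = 50000   # placeholder for new hosts, storage, and licensing
  "Cloud spend passes an on-prem refresh after about {0:N1} years" -f ($onPremRefresh / $cloudPerYear)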

Any other things to think about, or recommendations for hardware? I was eyeing Thinkmate for the hardware.

Thanks!

8 Spice ups

While there are many other options, I think what you're looking at is a pretty good call for the money involved. The usual players (Dell, HP, Lenovo, etc.) cover the compute portion, and likewise for the storage (Synology and QNAP on top of the previous list).

3 Spice ups

Have you tested this before? We had 6 VRTX units, running both VMware Standard and Server 2016/19 with the Hyper-V role… none actually worked.
Unless you really need, and/or have actually tested, automated failover (FT) to be working?

Nowadays Dell 1U servers have 8 drive slots, and the common SSD sizes are 1.92TB or 3.84TB… so even RAID 6 across 6 SSDs with 2 hot spares gives you around 11.5TB raw with 1.92TB drives, or around 23TB raw with 3.84TB drives.
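
Rough numbers on that, as a sanity check (a quick PowerShell sketch; the drive count and sizes are just the examples above):

  # RAID 6 keeps two drives' worth of parity, so usable = (drives in set - 2) x drive size
  $driveTB   = 1.92                 # swap in 3.84 to compare
  $inRaidSet = 6                    # 8 bays minus 2 hot spares
  $rawTB     = $inRaidSet * $driveTB
  $usableTB  = ($inRaidSet - 2) * $driveTB
  "Raw: {0} TB   Usable after RAID 6: {1} TB" -f $rawTB, $usableTB

That works out to roughly 7.7TB usable on the 1.92TB drives and roughly 15.4TB usable on the 3.84TB drives, so with a nearly 10TB file share plus the other VMs, the larger drives are really the option once parity comes out of the raw number.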

Then maybe leverage a backup solution like Veeam Backup & Replication 12.x to back up, and even replicate, critical VMs to the 2nd host?

The other question is whether you can split your “files” into:

  • user profiles (for Terminal Server)
  • user data (Terminal Server users)
  • file server (critical files, usually only MS Office files & PDFs)
  • non critical files (to be stored on NAS)

$20k a year seems low, especially with a 10TB file server?

Then there are a few things to note, depending on the cloud provider you choose…

  • data egress charges ?
  • Internet connectivity on both ends and VPN on both ends ?
  • Certain routing features or routing requirements (maybe payable)
  • Limitations on OS versions (and probably SQL versions); when the cloud provider declares an OS EOL and/or end of support, it is a hard limit.
  • Transfer of Terminal servers to cloud based VDI or cloud workspace
  • Domain Controllers or Domain Controller services on the cloud
  • Some licenses (OS, SQL, etc.) may need Software Assurance or license mobility for BYOL or use in the cloud
1 Spice up

Thanks @Jay-Updegrove, I appreciate the feedback and second opinions!

@adrian_ych

Yes, we have tested it and it does work. As long as the roles are set up in Failover Cluster Manager, they fail over to the other host when the owner node goes down. You can't create the VMs through Hyper-V Manager first, though; they have to be set up as clustered roles. You can also live migrate the VMs if you need to do maintenance on one of the host machines, though I try not to push my luck and usually shut them down before migrating.
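
For anyone else setting this up, the same checks and moves can be done from PowerShell with the FailoverClusters module (a sketch; the VM and host names are placeholders):

  # List the clustered roles and which node currently owns each one
  Get-ClusterGroup

  # Live migrate a clustered VM role to the other host before maintenance
  Move-ClusterVirtualMachineRole -Name "FS01" -Node "HOST2" -MigrationType Live

  # Or shut the VM down cleanly and move it, closer to what I usually do
  Move-ClusterVirtualMachineRole -Name "FS01" -Node "HOST2" -MigrationType Shutdown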

Yeah, I debated swapping to two hosts with their own storage. I think I'm just spoiled by the VRTX. It makes things pretty nice with the attached storage and the ability to manage all the systems independently through the VRTX manager.

It probably was low, and I was taking those prices with a large grain of salt. I had gotten very preliminary quotes just to see if it was feasible. I was working with reps from the respective cloud hosts, and they were adding startup incentives, so the real ongoing cost may be higher. But it didn't make sense to keep diving in if the TCO was already too high in rough numbers.

But the additional points you brought up that add complexity are another reason I was deciding to stay on-prem, along with cost, as nice as cloud or hybrid sound. We're a small team for all of that. We definitely need to see what we can migrate off our SMB share and just keep in archive storage. Thanks for the input!

1 Spice up

Another point about mass cloud storage: it's mostly meant to hold backups in bulk, so the upload fees are pretty minor. But they get you on the back end with egress fees (if you ever need to pull that data back down to recover a server, for example). Always check the fine print!

1 Spice up

I meant: what if the host or the whole VRTX dies suddenly (RAM fault, motherboard fault, or power fault) and there is not enough time for vMotion?

In the “perfect setup”, if you have a SAN or vSAN and the VMs are stored on that shared storage, then when a host dies another host (using VMware HA/DRS, Veeam Replication, etc.) can register the VM and power it up.

But in your case or the setup I suggested, you may need to add in a vSAN (or convert some local storage into a vSAN).

2 Spice ups

If the host dies before you can move the VMs, you take the snapshots of those VMs that you make daily with whatever backup solution you choose, and you restore them to the new host when you get it, or to whatever other host in your cluster has space… backing up the host doesn't back up the VMs.

1 Spice up

Exactly as Jay mentioned. We have a separate backup appliance that snapshots the systems each hour, so we can export the VMs to a new host or even run a live version on the backup machine in the meantime while recreating host machines if needed. Those are also backed up to offsite cloud storage in case of full site loss. And the VRTX is on an extended warranty with next-business-day support if something goes wrong with it.

1 Spice up

I think you mean backup data sets, not snapshots, as nowadays “snapshots” usually means the rollback function in VMware or Hyper-V (Server 20xx with the Hyper-V role)?

1 Spice up

Because the Dell VRTX has 3 different approaches to storage…

  • cheapest is to have storage on the blades
  • next is to have a JBOD acting as extended storage, connected to the blades via eSATA or eSAS
  • EQL PS4xxx, where one slot is used as a Dell EqualLogic SAN so that the blades can use it like a SAN and all the blades see the “shared” SAN storage. This is where we can use “diskless” VMware ESXi (ESXi boots from RAID 1 SD cards). This was also available in the Dell M1000e blade chassis.

Using a SAN would mean that the storage is shared and only CPU and RAM come from the host. So when that host goes down, the VM gets an abnormal shutdown and starts up on another host (admins will get an unexpected-shutdown notice in the Windows Server OS), or worse, a BSOD…

The alternative, like you mentioned, is to use software-based VM replication (like Veeam Backup & Replication) to copy VMs to another host at a set interval (except for DCs, as they have their own internal replication already).
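
If Veeam isn't in the budget, the built-in Hyper-V Replica does the same interval-based copy and can be scripted; a rough sketch (host and VM names are placeholders, and the second host has to be set to accept replicas first):

  # On the second host: allow it to receive replicas
  Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replicas"

  # On the primary host: replicate a critical VM every 5 minutes and seed the first copy
  Enable-VMReplication -VMName "SQL01" -ReplicaServerName "HOST2" -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 300
  Start-VMInitialReplication -VMName "SQL01"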

1 Spice up

Your dual-server + JBOD approach makes sense since Dell killed off the VRTX line. It’ll get you the failover you need without breaking the bank.

Skip the JBOD and use Storage Spaces Direct instead. Two servers with local drives, Windows handles the replication automatically. No single point of failure like you’d have with shared JBOD storage, and usually cheaper too.
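
If you do go the S2D route, the build is mostly a handful of cmdlets once the NICs and local drives are in place; a minimal sketch (node, cluster, and witness names are placeholders, and a 2-node S2D cluster still needs a witness for quorum):

  # Validate both nodes for S2D, then build the cluster with no shared storage
  Test-Cluster -Node HOST1, HOST2 -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
  New-Cluster -Name HVCLUSTER -Node HOST1, HOST2 -NoStorage

  # A 2-node cluster needs a witness; a file share witness is the simple option
  Set-ClusterQuorum -FileShareWitness "\\WITNESS-SRV\ClusterWitness"

  # Pool the local drives across both nodes and carve out a mirrored CSV for the VMs
  Enable-ClusterStorageSpacesDirect
  New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMStore" -FileSystem CSVFS_ReFS -Size 10TB

One thing to plan for: the storage replication traffic between the two nodes wants 10Gb (ideally RDMA-capable) NICs, which lines up with the 10G upgrade mentioned above.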

The two servers, shared JBOD, and 10Gb networking are probably the best option if you want to stick with something familiar and reliable. HCI is worth a look if you're thinking long-term and want to simplify hardware. Let me know if you want to go over hardware options; I've been looking at Thinkmate and a few others.

1 Spice up

Not since Wasabi shook up that market with no egress fees and others were forced to follow suit.

1 Spice up

I was so happy when a company finally did that…it’s made a major market shift for sure! However, some baseline “tiers” still have egress fees in the fine print. Always read the fine print…then check DNS…