I am looking to possibly build my own server. I am looking at building a SuperMicro platform, and what I cannot figure out is whether they support using the M.2 slots in a RAID configuration to boot the OS, or if that is only supported for storage. I am trying to mimic the BOSS RAID card that Dell servers use.

https://www.supermicro.com/en/products/system/clouddc/2u/sys-621c-tn12r

And what is a VROC HW key? And is that supplied by SuperMicro? Similar to an old-school software license dongle?

  • NVMe; RAID 0/1/5/10 support (VROC HW key required)
7 Spice ups

Yes, Supermicro platforms like the one linked can support booting from M.2 drives in RAID, but only under specific conditions:

  • The system supports Intel VROC (Virtual RAID on CPU) for NVMe RAID.
  • To enable RAID boot support from M.2 NVMe drives, you need:
    • Intel Xeon Scalable CPUs (which this system supports).
    • VROC-compatible M.2 slots (check motherboard specs).
    • A VROC hardware key.

This setup is similar in concept to Dell’s BOSS card, which uses two M.2 SSDs in RAID 1 for OS boot.

VROC = Virtual RAID on CPU.

If you want anything other than RAID 0 (a plain stripe, with no redundancy), you need a HW key, which is a physical piece of kit.

What OS are you planning to use? Is NVMe for boot necessary?

See above for a description, and yes, it can be supplied by SuperMicro, usually at an additional cost. Consider buying used or from a reseller where the key comes included.
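
If the OS ends up being Linux-based (Proxmox, for example), VROC arrays show up as Intel IMSM containers managed by mdadm, so you can check what the platform actually reports before committing to a key. A minimal sketch, assuming a Linux host with mdadm installed; `mdadm --detail-platform` is a real mdadm query, but the string matching below is just illustrative:

```python
#!/usr/bin/env python3
"""Rough check of the Intel VROC/IMSM capabilities a board reports.

A sketch only: assumes a Linux host (e.g. Proxmox) with mdadm
installed, run as root. Output wording varies by mdadm version,
so the string checks below are illustrative, not authoritative.
"""
import subprocess

def vroc_platform_report():
    # 'mdadm --detail-platform' prints the IMSM/VROC capabilities the
    # firmware exposes (supported RAID levels, ports, NVMe support).
    result = subprocess.run(
        ["mdadm", "--detail-platform"],
        capture_output=True, text=True, check=False,
    )
    return result.stdout or result.stderr

if __name__ == "__main__":
    report = vroc_platform_report()
    print(report)
    if "RAID" in report:
        print("Platform reports RAID capability - compare levels above.")
    else:
        print("No RAID capability reported - key or VROC-capable "
              "slots may be missing.")
```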

3 Spice ups

Yes, it’s a Supermicro part.

If you look at the Parts List tab on that page, you’ll find them at the very bottom. There are two options: one supporting RAID 5, and the other not.

2 Spice ups

I plan on using it for vSphere (if I keep it) or Proxmox. I think it is just a cleaner way to set up the server. This way I do not need any drives up front, as everything is on the SAN. Other than that, I do not have any technical reason to do it this way.

1 Spice up

Neither really requires RAID or NVMe as its boot drive, though. ESXi runs directly from RAM, so once booted the only use the drive will get is for logs and crash dumps. Unless this is business production, I doubt RAID would be required, especially if your VMs are elsewhere.

Just have good backups and you’ll be fine; NVMe and SSD have good lifespans, especially with low write usage.
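
If even the log writes bother you, ESXi can ship them to a remote syslog target so the boot device stays almost idle. A rough sketch of the idea below; the esxcli syslog commands are standard, but the target `syslog.example.com` is made up, and the script just wraps the two calls (ESXi’s shell bundles a Python interpreter):

```python
#!/usr/bin/env python3
"""Point an ESXi host's syslog at a remote collector to cut local writes.

A sketch under assumptions: runs in the ESXi shell (which ships
Python), and a syslog collector is reachable at the address below.
"""
import subprocess

# Hypothetical collector address - replace with your own.
LOGHOST = "tcp://syslog.example.com:514"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Set the remote log host, then reload syslog so it takes effect.
    run(["esxcli", "system", "syslog", "config", "set",
         "--loghost=" + LOGHOST])
    run(["esxcli", "system", "syslog", "reload"])
```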

If this is production, I wouldn’t (personally) use Supermicro for a business machine, given their history.

Finally, if everything is going to be on the SAN, why do you need a server with capacity for 12 LFF drives?

In case it helps, I’ve run both ESXi and Proxmox on single SSDs and NVMes at home, with higher IO than most SMBs, for the last 15 years, and I’ve never had any issues with the boot drives.

3 Spice ups

Then I would not bother with NVMe, as ESXi can boot from SD cards… speed is one of the things least needed for both ESXi and Proxmox, or even Server 20xx with the Hyper-V role… unless you intend to reboot the host every few minutes or every few hours? Once the host is booted up and running, it literally does not need to use the storage other than writing logs…

3 Spice ups

I believe SD card boot hasn’t been recommended since 7.0 U2, and for 8.x it certainly isn’t.

Physical media like HDD, SSD or NVMe are recommended, but it doesn’t have to be RAID.

3 Spice ups

I was just spec’ing out a machine. You are correct, I do not need the 12 LFFs up front, but I could not get what I wanted in a 1U, and I had not customized it yet - just looking.

1 Spice up

The RAID is just insurance in case one of the devices fails, and that’s it. All of my servers are set up this way.

1 Spice up

I understand the benefits of RAID; what I’m pointing out is that it’s not essential, especially if your data is elsewhere. Surely you’d also have more than one host, meaning one host down is a simple rebuild and you’re back. Two SSDs may also be cheaper and easier to run than buying another license for NVMe RAID, but the choice is yours. As noted, I wouldn’t use this hardware anyway.

For ESXi specifically, depending on version, you could look at SAN booting.

And what is it you’d want? Many 1U servers support dual CPUs and a ton of RAM. Is it for GPUs?
4U devices have better airflow.

1 Spice up

Request to include one of the following modules:

  • Intel® Virtual RAID on CPU (VROC) - Intel Only Module - RAID 0,1,10,5
  • Intel® Virtual RAID on CPU (VROC) - Standard Module - RAID 0,1,10
  • Intel® Virtual RAID on CPU (VROC) - Premium Module - RAID 0,1,10,5

I’m very happy with my Supermicro server, but I find it best suited to situations where you don’t need NBD support and plan to switch services to a different replica server in the event of an issue, instead of needing to fix that server quickly. Supermicro doesn’t have on-site support where I am, but parts and service are much more available in the USA.

1 Spice up

I know… but one of the main reasons was for the logs… not for speed?

Some people build servers with, say, SSDs for the OS and SATA HDDs for data (a file server or SQL, for example). Their reasoning (focusing only on the OS) is that they do not want to wait 10 minutes for the OS to boot up.
But how often do we reboot the servers? Once every 2 weeks, once a month?

If it were due to budget, I would rather have the OS on SATA HDDs and the data on SSDs (using the example above).

So as for the OP’s requirements, where he is literally looking at a “diskless” hypervisor (a term from VMware 7.x and earlier), he may not need NVMe storage for the hypervisor (host OS) and could maybe just use SSDs, if the server supports that out of the box. Maybe RAID 1 for redundancy, that’s all.

1 Spice up

You are correct, it had nothing to do with speed and everything to do with reliability.

SD cards and USB flash drives don’t have the same write endurance as HDD/SSD/NVMe, and when logs are written to them, those drives tend to fail much quicker.
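
To put rough numbers on that: SSD endurance is quoted as TBW (terabytes written), and a hypervisor’s log stream barely dents it. A back-of-the-envelope sketch; the TBW rating and write rate are illustrative assumptions, not datasheet values:

```python
# Back-of-the-envelope endurance estimate. The figures are assumptions
# for illustration, not vendor specs.
GB_PER_TB = 1000

def years_of_life(tbw_rating_tb, writes_gb_per_day):
    """Years until the rated TBW is exhausted at a steady write rate."""
    return tbw_rating_tb * GB_PER_TB / writes_gb_per_day / 365

# Hypothetical: a small enterprise SSD rated around 300 TBW, seeing
# a couple of GB/day of logs from a hypervisor boot device.
print(f"{years_of_life(300, 2):,.0f} years at 2 GB/day")

# SD cards and USB sticks usually publish no TBW rating at all, which
# is the real problem: there is nothing to budget against.
```

(300 TBW at 2 GB/day works out to roughly 400 years, i.e. something else fails first; SD cards simply give you no such figure.)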

I don’t get why people want fast boot times either, surely speed is for the VMs.

I expect the OP’s reason for preferring NVMe over SSD or HDD is related more to the drives being internal and not visible.

While the OP stated the RAID is just insurance in case a device fails, which makes sense.

If the NVMe drives are the internal M.2 slots, these are NOT hot-swappable, so downtime to replace a failed drive would still be required.

1 Spice up

This is sometimes why I get so perplexed when people buy servers with RAID 1 SSDs (or NVMe) for the OS and then add in SATA HDDs… worse is when they add in something like 8x 1 TB HDDs in RAID 10 for SQL or VM storage.

Cost-wise, maybe SSDs do cost more, but using approx. 250 GB for the OS and 3 TB for data, the RAID 5 SSDs would only cost something like 15% of the server (as compared to 8%-11% with HDDs); rough numbers in the sketch below.
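
Every price here is made up purely for the example, not a quote:

```python
# Illustrative cost-share arithmetic - all prices are assumptions.
server_base = 8000.0      # hypothetical server cost without data drives

hdd_build = 4 * 180.0     # e.g. 4x 1 TB SATA HDD in RAID 10
ssd_build = 4 * 320.0     # e.g. 4x 960 GB SSD in RAID 5

for label, drives in (("HDD", hdd_build), ("SSD", ssd_build)):
    share = drives / (server_base + drives) * 100
    print(f"{label} storage: ${drives:,.0f} = {share:.0f}% of the total")
```

With these made-up prices, the HDD build is about 8% of the total and the SSD build about 14%, which is the kind of gap I mean.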

There are some (but few) servers that do not have options for hot-plug “primary” storage.

1 Spice up

Internal slots, like M.2 are generally not hot-swap.

If the NVMe drives connect to the front bays (U.2, SATA, SAS etc), then they often are hot-swap (in an enterprise server). Internal slots have different use cases, generally security (they can’t simply be pulled out), plus in most cases M.2 drives are screwed in.

As for costs of drives - businesses forget that their data is one of, if not THE, most important parts of the server; without their data, it doesn’t matter how many cores or GBs of RAM it has. No data = no business.

Storage should not be where people cheap out. Fewer cores or less RAM should be considered to make sure the data and the drives used are right, rather than having oodles of spare resources that may never be used.

Also, for clarity, while OEM drives and enterprise drives are generally more expensive, that’s because they have longer warranties, better caching and more robust components.

It’s somewhat akin to tires on a car. All tires serve the same purpose, but a branded, more expensive tire is going to last longer and give better grip than an unbranded, part-worn tire that you get for half price.

2 Spice ups

But is this for a hobby, a lab, or production?

1 Spice up

My guess is production, since all VMs will run on a SAN.

Like the HDD/SSDs mentioned above, I wouldn’t try to save money by building a custom server; it’s not the same as a custom PC, and the issues you face are going to be specific to each piece of it. If the OP buys a better-known brand, then everything is covered under the same warranty.

2 Spice ups

But this is a mixed case of not using the “cheaper” SSDs with the provided RAID adapter (if any), or a RAID adapter from the brand plus SSDs, and instead getting the NVMe BOSS-style card and NVMe drives?

Because almost all of the time, if NVMe is not already in use, the server should have a backplane for HDDs or SSDs with some RAID adapter?

I have never worked with SM servers before… but so many posts in SP etc. make it feel like some sort of consumer-grade server or PC-like server?

1 Spice up

SSD/HDD are not relevant here; the OP wants to use two internal NVMe slots in a RAID 1 setup, which will require an additional license. Personally, apart from being able to boot quickly, I don’t see any benefits here.

But, the OP is free to do as they please.

2 Spice ups

Errrr… using NVMe needs licenses?

1 Spice up