I was not even thinking RAID 60 until I saw that the resulting capacity is about the same. Would there be any reason you would choose RAID 60 over RAID 1+0? The reason I stopped doing RAID 5 in favor of RAID 1+0 once I hit 1.92TB SSDs was rebuild times. In this case, roughly half the contents will be a production file server and half will be the first landing spot for the other host's VM backup files.
1 Spice up
Rod-IT (Rod-IT) | April 12, 2025, 5:57pm | #2
Then you don’t want RAID 60; its rebuild times are not comparable to RAID 10 (RAID 1+0).
RAID 10 only needs to rebuild from the surviving drive in the affected mirror, while RAID 60 has to reconstruct the data from all of the other drives in the set, which is computationally heavy.
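To put rough, illustrative numbers on that (a minimal sketch; the throughput figure and the two-span RAID 60 layout below are assumptions, not measurements):

```python
# Rough rebuild-I/O comparison for one failed 3.84TB SSD.
# Assumes an 8-drive array: RAID 10 as 4 mirror pairs, RAID 60 as
# two 4-drive RAID 6 spans. The throughput figure is a guess.
DRIVE_TB = 3.84
REBUILD_MBPS = 500  # assumed sustained per-drive rebuild rate

def stream_hours(tb, mbps):
    """Hours to stream `tb` decimal terabytes at `mbps` MB/s."""
    return tb * 1_000_000 / mbps / 3600

# RAID 10: copy the surviving mirror to the replacement drive.
raid10 = stream_hours(DRIVE_TB, REBUILD_MBPS)

# RAID 60: read every surviving drive in the affected RAID 6 span
# (3 drives here) and run parity math to regenerate the missing one.
raid60_read_tb = DRIVE_TB * 3

print(f"RAID 10: ~{raid10:.1f} h, touching 1 other drive")
print(f"RAID 60: reads ~{raid60_read_tb:.1f} TB across 3 drives, plus parity math")
```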
It’s not ideal to use the same storage for both production and a backup of something else. Besides, how will you present this, through a VM?
I run 8 SSDs in my lab; I also have NVMe, but for the SSDs it’s RAID 10.
3 Spice ups
I would think that it depends on the use case?
Why would you even need to go past RAID 5 or RAID 6 for SSDs (with options for 1 or 2 hot spares)?
The rebuild time is literally capacity divided by transfer rate for both RAID 5 and RAID 10 on SSDs, which is highly dependent on your RAID adapter and the type of SSD media used.
Based on a Google search for rebuild times using 1.92TB SSDs, RAID 5 may take 24-28 hours while RAID 10 may take 38 hours 15 minutes?
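For what it's worth, the naive capacity-over-transfer-rate arithmetic comes out far lower than those searched figures for drives this size, which suggests the quoted numbers reflect heavy rebuild throttling or HDD-era arrays. A quick sketch (the transfer rate below is an assumption):

```python
# Naive rebuild estimate: capacity divided by transfer rate.
capacity_tb = 3.84     # per-drive capacity from this thread
transfer_mbps = 400    # assumed effective rate; controllers often
                       # throttle rebuilds while serving host I/O

hours = capacity_tb * 1_000_000 / transfer_mbps / 3600
print(f"~{hours:.1f} hours per drive")   # ~2.7 hours at full rate
```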
I would place more importance on hot spares for rebuilding, and on having some sort of remote access like iDRAC or iLO so you can isolate failed media and start rebuilds remotely?
Then maybe a secondary importance on cold spares, if I can replace the faulty media ASAP?
1 Spice up
For backup landing operations specifically, RAID 10’s write performance advantage is particularly valuable, as these operations tend to be write-intensive. The reliability of your production file server data will also benefit from RAID 10’s full redundancy during rebuild operations.
2 Spice ups
There are always 3 sides to the coin?
If you need something like 12TB to 15TB of storage, the difference in cost between RAID 10 and RAID 5 can easily buy higher-performance SSDs (like vSAS or SAS SSDs, and Mixed Use or Write Intensive SSDs over Read Intensive SSDs) or a higher-performance RAID adapter?
It also depends on how you look at it. RAID 10 only allows a single media failure per RAID 1 set, unlike RAID 6, which tolerates 2 failed media.
In extreme cases, or as servers get older (say 3 or 5 years in), the wear & tear on the storage media is much like the tires of a car... when one is worn out, the rest are usually wearing out as well?
That is the reason behind certain systems having the capability of using 2 hot spares.
There are only a handful of systems that actually support 2 hot spares & 1 cold spare within the box, with the capability of powering up the cold spare as well?
1 Spice up
I don’t know what the limit is on hot spares. I have as many as 3 on some arrays.
Rod-IT:
It’s not ideal using the storage for both production and a backup of something else, besides, how will you present this, through a VM?
All of my Veeam servers are virtual servers tucked in on various hosts. I'm trying to get down to just 2 hosts per site, which means it has to live on the secondary production server. Since I have to have a backup proxy anyway for maximum backup speed, I will also make the first landing spot for the opposite host's backups a virtual disk on that backup proxy VM.
The backup copy jobs then run and drop extra copies to an old file server and/or NAS device. It sounds too busy, but it is really fast with 10Gb Ethernet, SSD arrays, and a small footprint to start with.
EDIT: I used to just replicate everything between hosts back in the day, but now I'm down to doing replicas of just 1 or 2 critical machines.
There’s a very important thing to note here for those not aware, and it’s easy to miss: “RAID 10 only allows single media failure per RAID 1 set.” The more RAID 1 sets you have, the safer it gets.
For example, say you had 8 drives in a RAID 10: four mirrored pairs A through D, each with drives 1 and 2 (excuse my hasty draw.io diagram; I didn’t want to just swipe a diagram from some other site).
You can lose A1, B2, C2, and D1 and the array still functions. So while RAID 6 tolerates 2 drive failures and RAID 60 tolerates 4 failures, in this example RAID 10 also tolerates 4 failures. If, however, you lost B1 and B2 (2 failures in the same RAID 1 set), you would lose the array.
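A quick Monte Carlo sketch of that point (the pair count matches this thread's 8-drive case; the trial count is arbitrary):

```python
# Chance that an 8-drive RAID 10 (4 mirror pairs) survives k random
# drive failures: it lives as long as no pair loses both members.
import random

def survival_rate(pairs, k, trials=100_000):
    drives = [(p, side) for p in range(pairs) for side in (0, 1)]
    alive = 0
    for _ in range(trials):
        failed = random.sample(drives, k)
        # Distinct pairs hit == k means no pair lost both drives.
        alive += len({p for p, _ in failed}) == k
    return alive / trials

for k in (1, 2, 3, 4):
    print(f"{k} failed: survives ~{survival_rate(4, k):.1%}")
# Expect ~100%, ~85.7% (1 - 1/7), ~57.1%, ~22.9%: even 4 failures
# can be survivable if each lands in a different mirror pair.
```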
The other thing that’s important is capacity reduction and rebuild time.
If I have 30 drives in a RAID 60, I have 26 drives of capacity. If I have 30 drives in a RAID 10, I have 15 drives of capacity. RAID 60 wins on capacity. However, in your case of an 8-drive array, RAID 10 is 4 drives of capacity, and RAID 60 is also 4 drives of capacity. You might as well go RAID 6 here instead of 60 for this size array and regain 2 drives of capacity.
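The capacity arithmetic, spelled out (decimal TB, ignoring controller overhead; RAID 60 modeled as two equal RAID 6 spans, as in the paragraph above):

```python
DRIVE_TB = 3.84  # per-drive capacity from this thread

def usable(level, n):
    """Usable drive count for an n-drive array (simplified model)."""
    return {"raid10": n // 2,   # half the drives are mirror copies
            "raid6":  n - 2,    # two parity drives per set
            "raid60": n - 4,    # two RAID 6 spans, two parity each
           }[level]

for n in (8, 30):
    for level in ("raid10", "raid6", "raid60"):
        print(f"{n:2} drives, {level:6}: {usable(level, n):2} drives "
              f"= {usable(level, n) * DRIVE_TB:.2f} TB usable")
```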
To rebuild a failed drive in an 8-drive RAID 60 array, the controller has to process all of the remaining drives in the RAID 6 set. The more drives, the longer the rebuild, sometimes days for large RAID 60 arrays. In a RAID 10, it only has to re-mirror 1 drive, regardless of the size of the array.
Some other considerations (rough write-penalty numbers sketched below):
Parity RAID adds more wear and tear to flash media than RAID 10.
In a RAID array with flash media, you run the risk of many drives reaching end of write life at around the same time.
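The wear point is usually argued from the textbook small-random-write penalty model. Full-stripe writes avoid much of this, so treat the numbers as a ceiling, and the IOPS figure below is an arbitrary example:

```python
# Backend drive I/Os generated per logical small random write.
WRITE_PENALTY = {
    "raid10": 2,  # write both sides of the mirror
    "raid5":  4,  # read data + read parity, write data + write parity
    "raid6":  6,  # as RAID 5, with a second parity block
}

host_write_iops = 10_000  # assumed sustained host write load
for level, penalty in WRITE_PENALTY.items():
    print(f"{level:6}: {host_write_iops * penalty:,} backend IOPS")
```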
And the one precautionary warning for those that don’t know:
RAID is not backup. You still have to have backups even if you have RAID. I'm not implying anyone in this thread doesn’t understand that, but we do encounter people who don’t understand this point on a regular basis, so it’s worth mentioning for others who might come across the thread.
4 Spice ups
I currently have these in RAID 60 for testing, but will redo them as RAID 1+0 for production, because the faster rebuild time combined with the chance of less wear wins.
Thank you for repeating the point that RAID is not a backup. I have come from those shops. It is sometimes missed for too long that when you go from no redundancy to some redundancy, you still need a backup. Also sometimes missed: it might be better to build the backup system first, then do a system with more redundant parts after that is well established.
I have had to flat out argue with people on that point, where they insist they don’t need backups because they have RAID. They are so confident in that point as well. The way to win the argument: ransomware encrypted the storage volume. How does RAID help you recover from this?
2 Spice ups
Unless it is a NAS or storage appliance, most servers and/or SANs have, or recommend, at most 2 hot spares. Then there is always the hot spares vs cold spares debate, as some would say that hot spares are still “running”…
Not even sure why people bother with multiple hot spares anymore unless their servers are so physically far from them that it would be impractical to have hands on in a timely fashion (as in same day). Keep extra cold spares on site. Then you replace the failed drive with the cold spare after rebuild and it becomes the new hot spare.
3 Spice ups
Because maybe a hot spare is just a small fraction of the cost of the server (hardware + software + applications)?
Then we can trigger rebuilds remotely?
1 Spice up
Right, have a hot spare and keep the rest cold. Not sure why you would need to keep multiple hot spares taking up storage slots and keeping them powered on. Trigger the rebuild, replace the failed drive with one from the cold spare supply, RMA the failed drive, then return it to the cold spare supply.
2 Spice ups
For me it was the SOP from Dell EQL storage, which uses 2 hot spares + 1 global hot spare (as Dell EQL can “stack” multiple units to form a single SAN storage pool).
So for servers I usually get a max of 2 hot spares and use RAID 6.
We did so many tests regarding the performance of different RAID levels versus the performance of different types of SSDs in RAID 5.
@adrian_ych @PatrickFarrell — Good points from both of you.
Adrian, you’re right. When the storage requirement is in the 12–15TB range, the cost difference between RAID 10 and RAID 5/6 can make it more practical to invest in better-performing SSDs or RAID adapters instead of going with a more redundancy-heavy RAID level.
Patrick, your example on RAID 10 failure tolerance is accurate and often overlooked. As you mentioned, RAID 10 can survive multiple drive failures as long as they don’t happen within the same mirror pair. That’s important when weighing reliability in real-world conditions.
Also agree on rebuild times. RAID 10 has a clear advantage here. Rebuilding a single mirror is much faster and less stressful on the system than recalculating parity across several drives in RAID 5/6.
And yes, regardless of RAID type, it’s not a substitute for proper backups. That still needs to be in place.
1 Spice up
Unfortunately for me, the management team got that message loud and clear. I have a disaster-recovery cloud-connected appliance, and I also maintain Veeam Backup & Replication. So I have a physical appliance with backup chains, then cloud replication. I then have separate onsite backups with backup copies and an air gap. At least I can’t say I don’t have anything to keep me busy.
That’s great—you’re taking the 3-2-1 backup strategy seriously, and having a cloud backup is the cherry on top!
kevinhsieh (kevinmhsieh) | April 16, 2025, 7:02am | #20
I would never do RAID 60 with just 8 drives. 80 drives, sure, but not 8.
So, that leaves RAID 10, 5, and 6 as possibilities. If you don’t need the space and are using weak drives where you absolutely need to minimize drive writes, then RAID 10 is fine. RAID 5 should be good too, with or without a hot spare. If you need the highest uptime, RAID 6 will survive 2 concurrent drive failures, whereas RAID 10 has a 14.3% chance of total data loss when a second drive fails in an 8-drive array (1/7).
2 Spice ups
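The 1/7 figure follows directly from the mirror layout; a one-liner to make the arithmetic explicit:

```python
# 8-drive RAID 10: after one failure, exactly 1 of the 7 survivors
# is the dead drive's mirror partner, so a second random failure is
# fatal with probability 1/7.
print(f"{1 / 7:.1%}")   # 14.3%
```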