I have an EMC VNXe3150 and love it; it performs great! I was just talking to my EMC rep, and he was asking why the heck I use RAID10 on it. They always recommend RAID5 or RAID6 and never RAID10.

It’s 48 x 900GB drives in a RAID10.

Avg IOPS 1000-1250 during the day, with bursts of 3000 (sometimes 4000).

Read and write latencies are:

Read 7-9ms

Write 2-5ms

What do you guys think? Am I silly to have this thing in a RAID10?

5 Spice ups

Short answer - no.

Longer answer - Ask your EMC rep about UREs on RAID5, and ask how many clients have lost data because of them. If he says it’s not an issue because the array has a hot spare, smack him.

RAID6 is fine if you can live with slower storage due to its write penalty.

Right now, you have good performance, enough disk space and are happy. Why change things?

1 Spice up

Spinning Drives and EMC recommended RAID5?

At that number of drives that SPIN, RAID10 is the only reliable option. RAID6 with that many drives is insane.

RAID10 if it spins, no matter what the vendor says.

RAID 10 will give you about 50% better performance than RAID 6, sometimes more, but with RAID 10 you are only guaranteed to survive one drive failure without data loss.

RAID 6 will give you much better capacity (almost double) plus dual-disk protection, but a 50-70% performance drop.

RAID 10 est. IOPS = 4600+

RAID 6 est. IOPS = 2300

IOPS above based on 70% reads and 30% writes.
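Those estimates can be reproduced with the classic RAID write-penalty formula. This is a minimal sketch; the per-spindle figure of 125 IOPS for a 10k drive is an assumption (it happens to make the thread's numbers line up), not a published spec.

```python
# Rough front-end IOPS estimate using the standard RAID write-penalty formula.
# Assumptions (not from the thread): 125 IOPS per 10k spindle, 48 drives,
# 70% reads / 30% writes as stated above.
def raid_iops(drives, iops_per_drive, read_pct, write_penalty):
    raw = drives * iops_per_drive
    # Each front-end write costs `write_penalty` back-end IOs.
    return raw / (read_pct + (1 - read_pct) * write_penalty)

r10 = raid_iops(48, 125, 0.70, 2)   # RAID 10 write penalty = 2
r6  = raid_iops(48, 125, 0.70, 6)   # RAID 6 write penalty = 6
print(round(r10), round(r6))        # ~4615 vs ~2400, close to the 4600+/2300 above
```

The write penalty is the number of back-end disk operations each front-end write triggers: 2 for RAID 10 (mirror pair), 6 for RAID 6 (read data, read both parities, write data, write both parities).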

So with your burst workloads above in the 3000-4000 range, I would probably stick with R10.

The VNXe3150 has 1GB of write cache and 256MB of read cache.

1 Spice up

Your EMC rep is stuck in the past. The days of RAID 5 on spinning disks are long gone. Your RAID 10 is a good configuration.

Suggest to their boss that they either train the rep properly or replace them with someone who actually understands RAID levels and the implications of drive failure.

1 Spice up

I’m going to guess that they recommend RAID5/6 because they assume that, at this level of equipment, you understand backup and recovery policies, and they are trying to maximize your space for you.

RAID5/6 also lends itself to needing more disks for cold spares, increased chances of losing one during a rebuild, and wanting to upgrade to a ‘faster’ array because rebuild/write times are so slow… all things RAID10 has solved for you (at the expense of capacity).

Sounds like a sales rep telling you outdated best practices (willfully or through ignorance) in an effort to sell more, which is their job.

With that said, I don’t see 900GB drives as TOO risky in a RAID5/6, especially if you have dedicated hot spares and proper backups.

So with all that said, it comes down to performance. If you are happy with the capacity, then it’s Hakuna Matata, brother.

DB, you just hate R6, admit it… every post you flush RAID6 down the toilet, and yet you have never experienced a system that can give you 200,000 IOPS in a 2U chassis all on RAID6…

1 Spice up

Hmm, thanks for all the advice! Right now we have a total of 18.4TB, and I am looking at needing to add another 5TB. So when he mentioned the RAID5/6 idea I was like, waaah!

Performance-wise I have never been able to slow this EMC down, which is great! So I am very wary of moving it to another RAID level.

I think from the posts above it’s a smart idea to keep it at RAID10.

Well, there are two major issues with RAID6: first, parity overhead; second, RAID 10 is far better for recoverability.

He’s got approximately 41TB of raw space on this unit. In a parity array (assuming RAID6), sure, he gets protection against two drive failures before a URE becomes fatal.

But during a rebuild that’s 46 drives’ worth of chances to hit another URE and lose the array anyway.

That’s way too big; the odds of exceeding what RAID6 can viably protect only go up as you add more disks to the array.

Yeah, but if performance wasn’t an issue… I would take the URE risk of R6 over R10 any day, as it gives guaranteed second-disk failure protection, and you don’t get that with R10.

Also, it’s not like all the disks would be in one giant RAID group, so there would be at least 2 or 3 dual-disk-protection groups with R6, or the same for R10.

Also, as a side note, please make sure you have 1 or 2 hot spares so that your system can start rebuilding automatically if you do get a drive failure.

@Mike Bailey

Short answer… if it’s not broke, don’t fix it? Nothing wrong with using RAID 10 unless you’re hitting capacity limits.

.

Your IO requirements are low enough to use pretty much any RAID level.

If the bursts are writes they will hit cache, so no problem assuming you have enough cache; this is already happening, as you cannot otherwise sustain 2-5ms write latency on 10k media.

RAID 5/6 does not have much of a read performance hit in comparison to RAID 10.

.

@DustinB3403

URE is also a risk in RAID 10; URE risk is actually lower in RAID 6 in many cases, especially in the enterprise.

Last time I spoke to EMC about UREs, they explained to me that the Storage Groups do not work like conventional RAID and only have to rebuild the data actually in use, meaning on average far fewer bits to read, mitigating URE risk.

EMC generally recommends RAID 5 in 4+1 and RAID 6 in 6+2, in my experience.

These “RAIDs” get added to the Storage Group which then decides where to place the data rather than just striping it as RAID 50/60 would.

I don’t think it’s possible to have a 48-disk RAID 6 on an EMC device.

Perhaps Brian can offer some advice on this, @bhenduemc ?

Responding to my own statement as I know people are going to call me on this…

Currently there are no 15k 900GB drives supported by the VNXe. ( https://store.emc.com/us/Product-Family/VNX-AND-VNXe-PRODUCTS/VNXe3150-Disk-Expansion/p/VNE-3150-DISK-002-1Q13-0002 )

10,000 RPM = 166.6 revolutions per second.

That’s 1 revolution every 6ms; divide this by 2, as on average you will have to wait for half a rotation.

3ms latency (rotational delay).

Now for the seek time. This is harder, as EMC don’t always tell you what drives they are using, so let’s take a look at ( High-Performance SSDs, HDDs, USB Drives, & Memory Cards | Western Digital ).

This is a 900GB 10k drive, with a seek time between 3.8-4.2ms

Seek time is the time it takes the head to move to the correct track on the disk.

So in this example of 10k media you have an average seek+latency time of 6.8-7.2ms.
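The arithmetic above can be sketched in a few lines. Nothing here is new data; it just recomputes the rotational latency from the RPM and adds the cited 3.8-4.2ms seek range:

```python
# Average rotational latency + seek time for a 10k RPM drive,
# matching the arithmetic in the post above.
rpm = 10_000
rev_per_sec = rpm / 60                 # ~166.7 revolutions per second
rev_time_ms = 1000 / rev_per_sec       # ~6 ms per full revolution
rot_latency_ms = rev_time_ms / 2       # wait half a rotation on average: ~3 ms

seek_ms = (3.8, 4.2)                   # WD 900GB 10k spec range cited above
total = tuple(round(s + rot_latency_ms, 1) for s in seek_ms)
print(total)                           # (6.8, 7.2) ms average access time
```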

“talking to EMC rep”

May as well be talking to a cat

Sales reps at the best of times aren’t the people you should be talking to - especially not those trying to sell SAN kit at a massive markup.

2 Spice ups

@tobywells

So just to clarify I didn’t say I spoke to my “EMC rep” about this.

If you wish to understand storage pools in regards to what I was referencing you should read this ( https://vjswami.com/2011/03/05/emc-storage-pool-deep-dive-design-considerations-caveats/ )

In some ways the storage pool design is quite similar to 3PAR chunklets, but with less mobility and more ring-fencing.

I never said you did…

I was replying to the OP, who said “I was just talking to my EMC rep and he was suggesting”.

When I reply to a post, I reply and quote, like I am now ;-)

1 Spice up

Hi Mike! Looks like you’ve received some helpful feedback from your fellow SpiceHeads. Just wanted to let you know that if you ever have any questions, feel free to drop me or @emilyforemc a line! We’re always happy to help 🙂

1 Spice up

lol, my bad. Sorry!

You don’t lose the entire array when you hit a URE, you lose that stripe. Which in the case of 48 x 900GB disks is going to be quite a small amount of data. (Obviously it could be a very important small amount of data…)

I’ve been talking this over with the team here, and we are undecided whether to try a RAID5 vs the RAID10. Looking at my avg IOPS I should fit within a RAID5 no problem, but with my random bursts through the day I am over the array’s max IOPS of 48x125 = 6000 IOPS.

1250 avg = 3125 back-end IOPS with a 50% read / 50% write mix

3000 bursts = 7500 back-end IOPS, again with 50% read and write
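For anyone checking the multiplier: those back-end numbers fall out of a RAID 5 write penalty of 4 at a 50/50 mix. A quick sketch (the 48 x 125 spindle limit is the figure quoted above):

```python
# Back-end IOPS implied by the numbers above: at a 50/50 read/write mix
# on RAID 5 (write penalty 4), every front-end IO costs
# 0.5 * 1 + 0.5 * 4 = 2.5 back-end IOs.
def backend_iops(frontend, read_pct, write_penalty):
    return frontend * (read_pct + (1 - read_pct) * write_penalty)

avg   = backend_iops(1250, 0.5, 4)   # 3125 back-end IOPS
burst = backend_iops(3000, 0.5, 4)   # 7500 back-end IOPS
limit = 48 * 125                     # ~6000 IOPS the spindles can deliver
print(avg, burst, limit)             # bursts exceed spindle capacity on RAID 5
```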

I guess it would help everyone to understand the driver for change?

Have you figured out what is causing the IO bursts? Often this is backups, and a slightly longer window makes little difference…