Hi Guys,
I was just configuring a Dell server to purchase and I'm given the choice of SATA SSD or SAS SSD. What's the difference? Is it just the interface?
I was also looking at PCIe SSDs, but Dell states they require 2 processors, and I'm confused; the Dell site is not very friendly. I was also looking at the PERC 710 controller but have no idea of the maximum number of drives I can configure in an array. Any ideas?
Thanks.
@Dell_Technologies
michaelsc
(Michael.SC)
OEM sites are built to upsell, not be logical. It would be better to get the minimum you require right now and then buy the parts separately. I always endorse whitebox solutions since you have total control over the quality. You may not get the support but I imagine most people would come out ahead in savings.
As far as SATA vs. SAS SSDs: it depends on the controller and the drive. SATA III, which most recent SSDs support, allows up to 600 MB/s, so it's preferable for higher-end drives. The controllers are also cheaper on both ends, so the overall price tends to be lower. SAS drives are typically built for 24/7 environments, so they may have better reliability, but not by much. They also have additional failover options. Currently SAS has a max of 600 MB/s as well, so the performance should be about the same. SAS does support full-duplex connections, which allow it to communicate simultaneously in both directions, but very few devices take advantage of this from my understanding. There are also some other considerations.
The discussion is endless, but I would say that SATA will be fine for a majority of people using SSDs currently. There are benefits to both, as well as disadvantages.
webby
(webby)
It's a tricky call. All things being equal in terms of interface speed, it really depends on your specific needs as to which interface you should pick. A couple of general factors, though:
- As Michael C has said, SAS has always been the realm of Enterprise whereas SATA is a relative newcomer to the Enterprise scene. Theoretically that should mean SAS controllers are more durable but really that’s just speculation, they’re both susceptible to the same electronic failures as each other.
- SAS controllers (on the PC/server side) are compatible with SATA but not vice versa. If you are also going to be deploying spinning disks on this server, or may in the future, it's worth keeping in mind, as SAS HDs greatly outstrip SATA drives in that market.
Ultimately if you need to shave off a few $$ SATA is a perfectly good option.
As for the PERC 710, I can tell you it supports at least 16 disks in RAID 10, since that's what we have. You should be able to get more specifics from one of the Dell reps.
@ivan-dell
storageio
(greg schulz)
@Keg If you are on a tight budget, then go with the 6Gb SATA drives; however, check their specs vs. the SAS drives to see if there is much, if any, difference.
Sometimes the SATA drive will be less expensive and will be a different drive than what is available with a SAS interface.
In some other cases the exact same drive will be offered with different interfaces, for example 6Gb SATA (e.g. SATA III), 6Gb SAS, or in some cases even 12Gb SAS, today's current maximum performance.
However, not all drives may be fast enough to leverage 12Gb SAS; likewise, simply putting a faster SAS or SATA interface on a device does not mean the device itself will be faster.
Some vendors will charge more for the 6Gb SAS version of a drive than the same drive with a SATA interface even if the drive itself is the same, so look for a relatively small price increase. However, some vendors may try to play games with extreme markups on a SAS drive, so do your homework.
Do you have the model numbers of the SAS and SATA SSDs? By looking at the model numbers, we might be able to tell if they are actually different drives, or just the same drive with different interfaces.
As for the processors, the PCIe-based SSD cards need at least one and in some cases two PCIe slots; also pay attention to what type of slot (e.g. x1, x4, x8, x16) as well as what is available in the server. Some PCIe SSD cards also place a workload on the server's CPU (where their driver and other work may be done), so you may encounter other requirements such as a faster processor or a second processor.
Now if you have the need for speed and a budget, yet cannot afford, or do not have support for, the PCIe SSD, then look at 12Gb SAS SSDs attached to a 12Gb SAS HBA that is attached to a PCIe Gen3 port. I have used that combination for various workloads and it is very fast; the caveat, however, is what type of enclosure you use and how it is cabled.
Hello
Yes, it is just a different interface. If all of the specs are the same, then the only difference will be the interface. SAS is a more feature-rich interface that provides better error reporting and path redundancy.
The PCIe SSDs use a switch controller that bypasses the RAID controller. They are routed across the PCIe bus, and the PCIe bus is controlled by the processor. If you do not have a second CPU, then the bus that the PCIe SSDs are routed across will not be operational.
The PERC H710 supports a maximum of 32 physical drives. All 32 of those drives can be in a single array. There are limitations on spanned arrays like RAID 10. The maximum number of spans per array is 8, so if the span length on the RAID 10 is the minimum of 2 then you could have a maximum of 16 drives in the array. If you want a larger array then you would have to increase the span length.
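The limits described above can be sketched as a quick calculation (a hypothetical helper for illustration, not a Dell tool; the 32-drive and 8-span figures are taken from the post above):

```python
# Sketch of the PERC H710 RAID 10 sizing limits described above.
# Assumed limits from the post: 32 physical drives max, 8 spans max per array.
MAX_DRIVES = 32
MAX_SPANS = 8

def max_raid10_drives(span_length: int) -> int:
    """Largest RAID 10 array for a given span length (drives per mirrored span)."""
    if span_length < 2 or span_length % 2 != 0:
        raise ValueError("RAID 10 span length must be an even number >= 2")
    return min(MAX_SPANS * span_length, MAX_DRIVES)

# With the minimum span length of 2, the 8-span limit caps the array at 16 drives;
# increasing the span length to 4 lets the array reach the full 32 drives.
print(max_raid10_drives(2))  # 16
print(max_raid10_drives(4))  # 32
```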
I hope that clears it up,
Thanks
Hi Daniel,
Many thanks. I'm used to working with SANs rather than local storage with RAID controllers. What is the meaning of spans per array?
daniel-dell
(Daniel (Dell))
That is a complex question that requires a lot of information to explain, so I will provide a basic understanding of what array spanning is. If you need more information, then I would advise searching online.
RAID 10, 50, 60, etc. are spanned arrays. Some also refer to them as hybrid arrays. I will give two examples of what exactly the span is.
The first example is a RAID 10:
- RAID 1 consists of 2 disks
- A RAID 10 is formed by connecting two or more RAID 1 spans together with a RAID 0
- The span length is the size of the RAID 1 span
With that being said, the minimum span length is 2 because RAID 1 is 2 drives. Also, because RAID 1 is 2 drives, the span length must be increased in multiples of 2. If you create a span length of 8 for a RAID 10, then the minimum number of drives in the array would be 16, because you need a minimum of 2 spans. If the controller supported array expansion, then you would have to add additional drives in groups of 8 (16, 24, 32, 40, etc.).
I’m going to explain it again as RAID 50 so that you can compare and understand the concepts:
- RAID 5 consists of 3 or more disks
- A RAID 50 is formed by connecting two or more RAID 5 spans together with a RAID 0
- The span length is the size of the RAID 5 span
With that being said, the minimum span length is 3 because RAID 5 requires at least 3 drives. Because RAID 5 can be incremented by 1 (n+1), the span length can also be incremented by 1, so you could have a span of 3, 4, 5, etc. If you create a span length of 7, then the minimum number of drives in the array would be 14, because you need a minimum of 2 spans. If the controller supported array expansion, then you would have to add additional drives in groups of 7 (14, 21, 28, 35, etc.).
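The RAID 10 and RAID 50 arithmetic above can be sketched in a few lines (a hypothetical helper for illustration; "spans" is the number of RAID 1 or RAID 5 groups joined together by the RAID 0):

```python
# Minimum drive counts for spanned arrays, per the RAID 10 / RAID 50 examples above.
def min_drives(span_length: int, spans: int = 2) -> int:
    """A spanned array needs at least two spans; total drives = spans * span_length."""
    if spans < 2:
        raise ValueError("a spanned array needs at least 2 spans")
    return spans * span_length

# RAID 10 with span length 8: minimum 16 drives, growing in groups of 8.
print([min_drives(8, n) for n in range(2, 6)])  # [16, 24, 32, 40]
# RAID 50 with span length 7: minimum 14 drives, growing in groups of 7.
print([min_drives(7, n) for n in range(2, 6)])  # [14, 21, 28, 35]
```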
I hope that explaining it with two different spanned array types allows it to be better understood.
Thanks
veet
(Veet)
Depending on your environment, there may not be any real big advantage of SAS over SATA in terms of performance. In terms of reliability, the main advantage SAS has over SATA is the ability to dual-port.
Check for "read optimized" vs. "write optimized"; that's the main factor. (Write-optimized drives are more expensive because they are heavily over-provisioned; you'll probably see 2TB of raw flash in a unit reported as 800GB.) Back to performance: PCIe → SAS → SATA. If you EOL working units before they actually die or lose much of their performance, SATA is fine.
Got it, understood! Thanks, guys, for your help. Scott's write-up was very informative. I have a question about the server, but it's not relevant to this post, so I'll post it as a new topic.
Thanks again.
The other thing is if it's going with external JBODs. SAS is dual-pathed, and when some idiot pulls out one of your SAS cables and you have 90TB go offline, you'll wish you had used 2-port SAS. (A real problem a friend had last month, as all the other drives were NL-SAS and he was doing CacheCade with SATA drives.)
storageio
(greg schulz)
@John773 SAS can be dual-ported for both internal and external drives, including with a JBOD; however, SAS does not have to be dual-ported. In the case of the OP, it does not sound like 90TB going offline is a primary concern, particularly if considering SATA ;)…