Commentary
5/7/2009 01:59 PM
George Crump

DAS VS. SAN - Capacity And Performance Management

Capacity presents two challenges in the Storage Area Network (SAN) vs. Direct Attached Storage (DAS) debate, and these two challenges are a traditional knock against DAS and a reason many data centers move to a SAN. The first is whether you can get enough capacity; the second is whether you can use that capacity efficiently in a performance-sensitive environment. DAS, however, now has the ability to address both of these issues.

Typically there are three types of storage needs in primary storage. The first is very high performance with relatively small capacity for specific applications; a database application or a messaging system are good examples. The second is high capacity with modest performance for a server with a mixed workload, such as a virtualization host. The third is very high capacity with relatively low performance; it does not need to reach the capacity level of a disk archive or backup tier, but it does need reasonable capacity, and a home directory server is a good example.

The first type, high performance with low capacity, has been the hardest to configure efficiently. The performance is certainly there. I recently sat in on an LSI Corporation demonstration that showed more than 1 million IOPS with 6Gb/s Serial Attached SCSI on an Intel-based server running Windows. This low-cost performance is part of the reason for DAS's resurgence.

However, the challenge, as we touched on in our last entry, is getting this performance while at the same time using the capacity efficiently. These systems will certainly be configured with multiple drives to increase performance and to protect against drive failure; the problem comes in using that space efficiently. While small drives can still be purchased, their price per GB is not as attractive as that of a larger-capacity drive, so you end up wasting capacity, because in a DAS environment that capacity cannot be shared with other servers.
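To make the math concrete, here is a quick, illustrative calculation. The drive counts, sizes and working-set size are our own assumed numbers, not figures from any vendor, but they show how a performance-driven drive count can strand capacity on a DAS server.

```python
# Illustrative sketch only: drive counts, sizes, and the application's
# working-set size are assumed numbers, not figures from the article.

def stranded_capacity_gb(drive_count, drive_size_gb, parity_drives, needed_gb):
    """Usable space left over once the application's needs are met."""
    usable_gb = (drive_count - parity_drives) * drive_size_gb
    return usable_gb - needed_gb

# A database that needs 8 spindles for IOPS but only ~400 GB of space.
# With 300 GB drives in RAID 5 (one parity drive's worth of overhead):
print(stranded_capacity_gb(8, 300, 1, 400))   # 1700 GB stranded
# The same spindle count built from 1 TB drives strands far more:
print(stranded_capacity_gb(8, 1000, 1, 400))  # 6600 GB stranded
```

The point of the sketch is simply that on DAS those leftover gigabytes sit idle; on a shared SAN they could be presented to another server.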

DAS's lack of efficiency is further exposed by the fact that shared storage systems from companies like 3PAR, Xiotech and DataCore are becoming increasingly efficient through the use of thin provisioning or automated provisioning. As we describe in our article Converting from Fat Volumes to Thin Provisioning, these companies are now making the conversion to thin provisioning efficient as well, so existing volumes can start out thin when you upgrade to a new storage platform.
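For readers less familiar with the mechanics, thin provisioning consumes physical space only when a logical block is first written, rather than reserving the full volume size up front. The toy allocator below is a conceptual sketch of that idea; the class and method names are ours, not any vendor's API.

```python
# Conceptual sketch of thin provisioning: physical blocks are consumed
# only when a logical block is first written, not when the volume is
# created. Class and method names are illustrative, not a vendor API.

class ThinVolume:
    def __init__(self, logical_size_blocks):
        self.logical_size = logical_size_blocks
        self.allocated = {}          # logical block -> physical block
        self.next_physical = 0

    def write(self, logical_block, data):
        if logical_block >= self.logical_size:
            raise IndexError("write beyond the volume's logical size")
        if logical_block not in self.allocated:
            # Allocate a physical block lazily, on first write.
            self.allocated[logical_block] = self.next_physical
            self.next_physical += 1
        # (A real array would persist `data` at the mapped physical block.)

    def physical_blocks_used(self):
        return len(self.allocated)

vol = ThinVolume(logical_size_blocks=1_000_000)  # presented as a "fat" volume
for block in range(5_000):                       # but only 5,000 blocks written
    vol.write(block, b"...")
print(vol.physical_blocks_used())                # 5000, not 1,000,000
```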

In addition, SAN-based systems are now virtualizing the spindles behind the array so that all of the available drive mechanisms can be used to get maximum performance from the array, as we detailed in a blog last year entitled Wide Striping. Obviously, SSD systems can be used and shared in this environment as well for the ultimate in performance-focused capacity.
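Wide striping, in essence, spreads a volume's blocks across every available spindle so that no single drive becomes the bottleneck. Here is a bare-bones illustration of that mapping; it is our own simplification, not any particular array's layout.

```python
# Simplified illustration of wide striping: logical blocks are spread
# round-robin across all spindles so every drive shares the workload.
# This is a conceptual mapping, not any particular array's layout.

def stripe_location(logical_block, spindle_count):
    """Map a logical block to (spindle index, offset on that spindle)."""
    return logical_block % spindle_count, logical_block // spindle_count

spindles = 48
for block in (0, 1, 2, 47, 48, 49):
    print(block, "->", stripe_location(block, spindles))
# Blocks 0..47 each land on a different spindle; block 48 wraps back
# to spindle 0, so sequential I/O engages the whole pool of drives.
```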

SSD in the form of PCI-E cards, like those from Texas Memory Systems and Fusion-io, may also be an option for the performance vs. efficiency problem with DAS and may be a viable stopgap on the way to a SAN for many IT departments. These cards bring RAID-like protection with SSD performance at a relatively low cost and, in fact, may be less expensive than a performance-configured RAID set. Factor in the power efficiency of SSD and the case becomes more compelling. If storage I/O performance problems are plaguing just a few of your servers, it may be more cost effective to investigate PCI-E-based SSD, especially if you don't have a SAN already.
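A back-of-the-envelope comparison shows the kind of arithmetic involved. Every price and wattage below is a placeholder assumption we made up for illustration; substitute real quotes and your local power rate before drawing any conclusions.

```python
# Back-of-the-envelope cost comparison. Every figure below is a
# placeholder assumption, not a quoted price from the article;
# substitute real quotes and your local power rate before deciding.

def three_year_cost(hardware_cost, watts, usd_per_kwh=0.10, years=3):
    """Hardware plus power (cooling roughly doubles the power bill)."""
    hours = 24 * 365 * years
    power_cost = (watts / 1000) * hours * usd_per_kwh * 2  # x2 for cooling
    return hardware_cost + power_cost

# 16 x 15K RPM drives in a performance-tuned RAID set vs. one PCI-E SSD card.
raid_set = three_year_cost(hardware_cost=16 * 400, watts=16 * 15)
pcie_ssd = three_year_cost(hardware_cost=7000, watts=25)
print(round(raid_set), round(pcie_ssd))
```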

You can reach new levels of performance with DAS today. The decision to be made is whether that performance is enough, and whether you can either use the capacity of that storage efficiently or justify wasting the extra capacity, keeping in mind that the power and cooling of those drives needs to be factored into the equation.

In our next entry we will look at how DAS can address the challenges of achieving large capacity storage in a single server.

Track us on Twitter: http://twitter.com/storageswiss.

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
