
Commentary
7/9/2009 10:44 AM
George Crump

Where To Start With SSD

Solid State Disk is a mature, stable technology poised for widespread adoption in enterprises of all sizes. It solves performance and power issues that mechanical drives cannot. Most data center managers, large and small, have an eye on this technology but are not exactly sure where to start with SSD.

As we explained in our Visualizing SSD Guide, determining which applications can benefit from the performance boost of SSD is relatively straightforward with today's tools. But it isn't only about performance; good candidates also include applications that are overwhelmed by storage capacity: compared with today's capacities they are not big consumers of space, yet they are often some of the more critical applications.
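If you want a quick, hands-on look at the performance side of that question, the rough Python sketch below samples Linux's /proc/diskstats twice and flags devices that spend most of the window busy servicing I/O. The 60-second window and the 50% "busy" threshold are arbitrary assumptions for illustration; any profiling tool you already trust will surface the same information.

    #!/usr/bin/env python3
    # Rough sketch: flag disks that stay busy servicing I/O (SSD candidates).
    # Assumes a Linux host with /proc/diskstats; window and threshold are arbitrary.
    import time

    def sample():
        stats = {}
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if len(fields) < 14:
                    continue                          # skip short (old-kernel partition) lines
                name = fields[2]
                ios = int(fields[3]) + int(fields[7])  # reads + writes completed
                busy_ms = int(fields[12])              # total time spent doing I/O
                stats[name] = (ios, busy_ms)
        return stats

    INTERVAL = 60                                      # sampling window in seconds
    before = sample()
    time.sleep(INTERVAL)
    after = sample()

    for dev, (ios2, busy2) in after.items():
        ios1, busy1 = before.get(dev, (ios2, busy2))
        util = (busy2 - busy1) / (INTERVAL * 1000.0) * 100   # % of window spent busy
        iops = (ios2 - ios1) / float(INTERVAL)
        if util > 50:                                  # arbitrary "storage starved" threshold
            print("%s: ~%.0f%% busy, ~%.0f IOPS -- SSD candidate?" % (dev, util, iops))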

The data that these capacity-overwhelmed applications create and manage can't be lost, so it is stored on either mirrored or RAID-based storage systems. These applications, especially in the smaller data center, are ideal candidates for a move to an SSD tier even if they can't fully justify the performance increase on their own. In the smaller data center these arrays are often locally attached, meaning that much of the RAID capacity goes to waste.

In these situations it's ideal to plug in PCIe-based SSD cards like those from Texas Memory Systems or Fusion-io. These cost-effective cards, while more expensive than an array, offer reliability, effective capacity utilization and, of course, a huge increase in performance. Most importantly, these solutions are an easy first step: no SAN to install, no cabling, just plug the card in, move your data over to it and you are off to the races.
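The "move your data over" step really can be that simple. As a minimal sketch, assuming the card is already formatted and mounted at a hypothetical /mnt/ssd, the application is stopped, and the paths are placeholders rather than anything prescriptive, the relocation is little more than a copy plus a symlink:

    #!/usr/bin/env python3
    # Minimal sketch of relocating an application's data onto a locally
    # attached PCIe SSD. Paths are hypothetical; stop the application
    # before doing anything like this, and verify before deleting the backup.
    import os
    import shutil

    SRC = "/var/lib/app/data"        # assumed hot data set on spinning disk
    DST = "/mnt/ssd/app-data"        # assumed mount point on the PCIe SSD card

    shutil.copytree(SRC, DST)                  # copy the data onto the SSD
    shutil.move(SRC, SRC + ".hdd-backup")      # keep the original until verified
    os.symlink(DST, SRC)                       # the application keeps its configured path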

In the larger enterprise, you're likely to consider an SSD that is SAN attached and sharable. Workloads that can take advantage of the added performance are almost always present in the larger enterprise. The remaining challenge, the complexity of dissecting which parts of an application should be placed on SSD, is now being addressed.

One approach to deciding what data goes on SSD is sheer size: 2TB-plus systems are now very affordable, which solves the data placement issue by simply putting all of the application's data, and maybe even the application itself, on the SSD. A second method that is gaining traction is added intelligence in the array itself from companies like Compellent or Storspeed. Compellent can migrate data to and from the SSD tier within its own storage system as that data becomes active or goes cold. Storspeed takes this a step further by working with any file-based networked storage array.
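To make the activity-based idea concrete, here is a deliberately naive Python sketch: promote files touched recently to an SSD tier, demote the ones that have gone cold. It is an illustration only; Compellent and Storspeed do this at the block and network-file level inside their products, not with a script, and the mount points and 24-hour window below are assumptions.

    #!/usr/bin/env python3
    # Toy illustration of activity-based tiering: promote recently accessed
    # files to an SSD tier, demote files that have gone cold. Mount points
    # and the 24-hour window are assumptions; atime must be enabled on the
    # mounts for this heuristic to mean anything.
    import os
    import shutil
    import time

    SSD_TIER = "/mnt/ssd/tier"       # assumed SSD mount point
    HDD_TIER = "/mnt/hdd/tier"       # assumed spinning-disk mount point
    HOT_WINDOW = 24 * 3600           # "active" = read within the last day

    def migrate(src_dir, dst_dir, move_hot):
        """Move files whose 'hotness' matches move_hot from src_dir to dst_dir."""
        now = time.time()
        for name in os.listdir(src_dir):
            path = os.path.join(src_dir, name)
            if not os.path.isfile(path):
                continue
            is_hot = (now - os.stat(path).st_atime) < HOT_WINDOW
            if is_hot == move_hot:
                shutil.move(path, os.path.join(dst_dir, name))

    migrate(HDD_TIER, SSD_TIER, move_hot=True)    # promote active files to SSD
    migrate(SSD_TIER, HDD_TIER, move_hot=False)   # demote cold files back to disk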

As we discussed in our InformationWeek Video Series on SSD, in many environments SSDs are cost-justifiable right now, and enterprises of all sizes should do more than look at this technology; they should deploy it. This is one of those cases where seeing is believing.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
