Dark Reading is part of the Informa Tech Division of Informa PLC


News | Commentary
8/10/2009 09:12 AM
George Crump

Maximizing IOPS With SSD

In a recent series of entries I covered several storage technologies that can help a data center maximize its CAPEX. Most of that series focused on cutting costs by using less primary storage, either through archiving or efficiency. Another way to maximize your CAPEX investment is to maximize IOPS with solid state disk (SSD) technology.

How can SSD, one of the most expensive forms of storage, maximize your CAPEX? By increasing the number of IOPS you can deliver per GB of storage, which means less space and power consumed per IO.
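As a rough illustration of the IOPS-per-GB argument, here is a quick comparison. All figures are assumed, era-typical values, not vendor specifications:

```python
# IOPS delivered per GB of capacity: all numbers below are illustrative
# assumptions for 2009-era hardware, not measured or published figures.
hdd = {"iops": 175, "capacity_gb": 450}     # assumed 15K RPM mechanical drive
ssd = {"iops": 10_000, "capacity_gb": 146}  # assumed enterprise SSD

for name, dev in [("HDD", hdd), ("SSD", ssd)]:
    density = dev["iops"] / dev["capacity_gb"]
    print(f"{name}: {density:.1f} IOPS per GB")
```

Even with the SSD holding far less data, it delivers orders of magnitude more IOPS per GB, which is the metric that matters when spindle count is driven by performance rather than capacity.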

As we discussed in our white paper "Visualizing SSD Readiness," many applications today can issue enough IO requests to the storage system to keep drive array groups with very large spindle counts busy. The larger the number of mechanical spindles in an array, the better it typically performs. The problem with a higher drive count is, well, a higher drive count: all of those drives need to be ordered, shelved, racked, managed, and powered, and all of that costs money. Your application may be able to sustain 300 drives, but can your budget?
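To make the spindle-count economics concrete, here is a back-of-the-envelope sizing sketch. The per-device IOPS figures are illustrative assumptions, not numbers from the white paper:

```python
# Rough sizing sketch: how many devices does it take to hit a target IOPS?
# All per-device figures are illustrative assumptions for 2009-era hardware.

def drives_needed(target_iops, iops_per_drive):
    """Smallest whole number of drives that can sustain target_iops."""
    return -(-target_iops // iops_per_drive)  # ceiling division

TARGET_IOPS = 50_000   # assumed application demand
HDD_IOPS = 175         # assumed 15K RPM mechanical drive
SSD_IOPS = 10_000      # assumed enterprise SSD

hdd_count = drives_needed(TARGET_IOPS, HDD_IOPS)
ssd_count = drives_needed(TARGET_IOPS, SSD_IOPS)
print(f"HDDs needed: {hdd_count}")  # 286 mechanical spindles
print(f"SSDs needed: {ssd_count}")  # 5 solid state devices
```

Under these assumed numbers, a workload that would demand nearly 300 mechanical spindles is satisfied by a handful of SSDs, which is exactly the ordering, racking, and powering burden the paragraph above describes.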

There is also a capacity utilization issue. If you have an application that can keep 300 mechanical drives busy, at today's drive capacities that can leave terabytes of unused space. It's true that with a SAN you can put other applications on those spindles, but doing so could affect the performance of the key application, and the other applications more than likely have no need for the high-performance storage you had to buy for the original one.
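The stranded-capacity point can be quantified the same way. The 300-drive figure comes from the text above; the drive capacity and working-set size are illustrative assumptions:

```python
# How much capacity is stranded when spindle count is sized for IOPS,
# not space? Drive capacity and working set are illustrative assumptions.

DRIVES = 300              # spindle count sized for IOPS (from the text)
DRIVE_CAPACITY_GB = 450   # assumed 15K RPM drive capacity
WORKING_SET_GB = 5_000    # assumed space the application actually needs

raw_gb = DRIVES * DRIVE_CAPACITY_GB
unused_gb = raw_gb - WORKING_SET_GB
print(f"Raw capacity:   {raw_gb / 1000:.0f} TB")   # 135 TB
print(f"Stranded space: {unused_gb / 1000:.0f} TB")  # 130 TB
```

Under these assumptions, over 95% of the capacity you paid for sits idle, bought purely to get the spindle count up.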

What makes more sense is targeting the capacity where it's needed: an almost application-specific storage tier. In block IO this is typically done by adding SSD to an existing storage system, or by taking an even more targeted approach and using SSD-specific systems from companies like Texas Memory or Violin Memory. In fact, Fusion-io and Texas Memory can take this application specificity even further by leveraging one of their PCIe-based cards and putting the IOPS right in the application server itself.

One of the challenges is deciding how and when to get the right data sets onto SSD. Often the target is a specific application or data set, like a hot Oracle table, and moving that table to an SSD is a straightforward process. For broader use of SSD, especially in NAS storage, the challenge becomes greater. One solution is to use a storage virtualization appliance from companies like NetApp or DataCore to unify the management of the SSD.

For NAS IO, a viable alternative is beginning to emerge from companies like Storspeed. These companies are creating an evolved form of caching technology that intelligently moves data in and out of tiered storage. This allows the SSD to be used to its maximum capacity while always keeping the most active data set on that high-speed tier, and it gives the user specific control over which applications or data leverage that tier.
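A minimal sketch of the access-frequency-driven promotion such caching tiers perform. The threshold, tier size, and eviction policy here are assumptions for illustration, not Storspeed's (or any vendor's) actual algorithm:

```python
from collections import Counter

class TieringCache:
    """Toy model: promote a block to the SSD tier once it has been read
    often enough, evicting the oldest promotion when the tier is full.
    Thresholds and sizes are illustrative, not any vendor's algorithm."""

    def __init__(self, ssd_slots=2, promote_after=3):
        self.ssd = []              # block IDs currently on the SSD tier
        self.hits = Counter()      # access count per block
        self.ssd_slots = ssd_slots
        self.promote_after = promote_after

    def read(self, block):
        """Serve a read, returning which tier satisfied it."""
        self.hits[block] += 1
        if block in self.ssd:
            return "ssd"
        if self.hits[block] >= self.promote_after:
            if len(self.ssd) >= self.ssd_slots:
                self.ssd.pop(0)    # evict the oldest promotion
            self.ssd.append(block)  # future reads hit the SSD tier
        return "disk"              # this read was still served from disk

cache = TieringCache()
for _ in range(3):
    cache.read("hot-block")        # first reads come from disk
print(cache.read("hot-block"))     # prints "ssd": block was promoted
```

The real systems operate on live access statistics at line rate and add per-application policy on top, but the core idea is the same: watch the access pattern, keep the hot set on the fast tier, and let everything else stay on capacity storage.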

Regardless of the approach, leveraging SSD, especially with its increasing affordability, is an excellent way to reduce capital expenditures, and one that can have an immediate payoff.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
