Commentary
3/30/2010 11:20 AM
George Crump

Get To Know The Storage I/O Chain

Storage performance problems are often circular challenges: you fix one bottleneck and you expose another. You can't really fix storage I/O; all you can do is get it to the point where people stop blaming storage for the performance problems in the data center. Getting there requires knowing the storage I/O chain.

This chain is the sequence of components that starts at the application and works its way down to the physical storage device. It is also the challenge that automated tiering systems (ATS) face. These are vendor solutions that attempt to solve storage I/O performance problems by moving data based on how frequently it is accessed: the more often data is accessed, the faster the tier of storage it is placed on, eventually landing on solid state disk (SSD).

The less frequently data is accessed, the slower the tier of storage it is placed on, eventually landing on high-capacity SATA hard drives. There is little doubt that ATS will play an important role in the evolution of data center storage and the optimization of that resource. It is, however, just one component of the storage strategy, especially when it comes to performance.
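
To make the access-frequency idea concrete, here is a minimal Python sketch of tier placement. The tier names, thresholds, and counters are hypothetical illustrations, not a description of how any particular vendor's ATS actually works.

    from collections import Counter

    # Hypothetical tiers, fastest first; the thresholds are illustrative only.
    TIERS = [
        ("ssd", 100),     # 100+ accesses in the window -> solid state disk
        ("fc_sas", 10),   # 10-99 accesses -> fast spinning disk
        ("sata", 0),      # everything else -> high-capacity SATA
    ]

    access_counts = Counter()  # block id -> accesses in the current window

    def record_access(block_id: str) -> None:
        """Count one access to a block during the measurement window."""
        access_counts[block_id] += 1

    def choose_tier(block_id: str) -> str:
        """Place a block on the fastest tier whose threshold it meets."""
        hits = access_counts[block_id]
        for tier, threshold in TIERS:
            if hits >= threshold:
                return tier
        return TIERS[-1][0]

    # Example: a hot block migrates up, a cold block stays on SATA.
    for _ in range(150):
        record_access("block-42")
    record_access("block-99")
    print(choose_tier("block-42"))  # -> "ssd"
    print(choose_tier("block-99"))  # -> "sata"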

Each component in the I/O chain needs to be measured and monitored to see whether it can justify the investment in ATS and/or SSD. Can the application generate enough simultaneous requests? Can the server process all those requests and get them onto the NIC fast enough? Can that data travel across the connecting infrastructure and through the switches without losing performance until it reaches the controllers in the storage system? Any break along this chain may negate the value of ATS.
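
As a simple way to picture a break in the chain, the sketch below compares hypothetical per-link throughput measurements and reports the weakest link; the end-to-end path can never deliver more than that link, no matter how fast the SSD tier is. The numbers are invented for illustration.

    # Hypothetical per-link throughput in MB/s; the storage I/O chain only
    # moves data as fast as its slowest link.
    chain_throughput_mbps = {
        "application":       900,   # requests the app can generate
        "server/hypervisor": 800,   # what the host can push to the HBA
        "hba":               750,
        "switch fabric":     400,   # an under-provisioned link
        "array controller":  1200,
        "ssd tier":          2000,
    }

    bottleneck = min(chain_throughput_mbps, key=chain_throughput_mbps.get)
    effective = chain_throughput_mbps[bottleneck]

    print(f"Bottleneck: {bottleneck} at {effective} MB/s")
    print(f"End-to-end throughput is capped at {effective} MB/s, even though "
          f"the SSD tier can do {chain_throughput_mbps['ssd tier']} MB/s.")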

Ideally, you want to upgrade just the right components to just the right level of performance to fix those issues. Determining which components should be upgraded, and to what level, requires tools to make those decisions. Interestingly, most performance upgrades in data centers are more of a "cross your fingers and hope this fixes the problem" type of solution. Just throwing hardware at the problem leads to massive under-utilization and wasted resources.

Tools are needed that can monitor storage I/O performance from the application through the server (virtual or physical), through the HBA card, through the storage infrastructure, and on to the storage system. This may even require physically tapping the environment to get the exact performance benchmarks you need to make those decisions. While investments in these sorts of tools mean spending precious IT budget dollars, when made as a first step they can avoid unnecessary upgrades and make sure that the upgrades you do implement perform exactly as expected.
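
Purpose-built monitoring products go far deeper than a single host, but as a minimal sketch of the kind of measurement involved, the Python snippet below samples per-disk throughput using the third-party psutil library (assumed to be installed); real end-to-end tooling would also capture application, HBA, fabric, and array-side counters.

    import time

    import psutil  # third-party library; install with: pip install psutil

    def sample_disk_throughput(interval_s: float = 5.0) -> None:
        """Sample per-disk read/write throughput over one interval on this host."""
        before = psutil.disk_io_counters(perdisk=True)
        time.sleep(interval_s)
        after = psutil.disk_io_counters(perdisk=True)

        for disk, end in after.items():
            start = before.get(disk)
            if start is None:
                continue
            read_mbps = (end.read_bytes - start.read_bytes) / interval_s / 1e6
            write_mbps = (end.write_bytes - start.write_bytes) / interval_s / 1e6
            print(f"{disk}: read {read_mbps:.1f} MB/s, write {write_mbps:.1f} MB/s")

    if __name__ == "__main__":
        sample_disk_throughput()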

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
