Commentary
George Crump
10/7/2009 02:50 PM

Understanding Storage Performance

For most storage managers, improving storage performance is an endless loop of upgrades applied until the problem goes away. Deciding where to look and how to configure the environment is often a series of "best guesses" rather than the product of a thorough understanding of the infrastructure. In today's economy, best guesses are not good enough. Making the right move the first time is critical.

The first step, as always, is to understand the nature of the problem. Do you really have a storage performance issue, or do you have a poorly behaved application? Nearly everything you do to the storage infrastructure to improve performance is going to cost money, in some cases a lot of money, so before you spend it you want to be sure you will see a difference. Unfortunately, confirming that an application really could take advantage of more storage performance is sometimes difficult. You can start with some of the steps we discuss in our "Visual SSD Readiness" guide, where we talk about using utilities like PerfMon to determine whether the application is building up enough outstanding storage I/O requests to justify a higher drive count, or whether the issue is really one of response time that needs to be addressed by adding faster drives or solid state disk technology.
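
To make that distinction concrete, here is a minimal Python sketch (an illustration, not something from the guide) that samples those two PerfMon counters through Windows' typeperf utility. The queue-depth and latency thresholds are assumptions chosen only to show the decision, not recommended values.

    import csv
    import subprocess

    # PerfMon counters for aggregate disk queue depth and per-I/O latency.
    COUNTERS = [
        r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
        r"\PhysicalDisk(_Total)\Avg. Disk sec/Transfer",
    ]

    def sample(samples=30, interval=1):
        # typeperf prints CSV: a header row, then one row per interval with
        # timestamp, queue length, and latency (in seconds).
        out = subprocess.run(
            ["typeperf", "-si", str(interval), "-sc", str(samples)] + COUNTERS,
            capture_output=True, text=True, check=True,
        ).stdout
        queue, latency = [], []
        for row in csv.reader(out.splitlines()):
            if len(row) != 3:
                continue
            try:
                queue.append(float(row[1]))
                latency.append(float(row[2]))
            except ValueError:
                continue  # header row or blank sample
        return queue, latency

    if __name__ == "__main__":
        queue, latency = sample()
        avg_queue = sum(queue) / len(queue)
        avg_latency_ms = 1000 * sum(latency) / len(latency)
        print(f"avg queue length: {avg_queue:.1f}  avg latency: {avg_latency_ms:.1f} ms")
        # Illustrative thresholds only -- tune them for your own environment.
        if avg_queue > 2:
            print("requests are stacking up: a higher drive count may help")
        if avg_latency_ms > 20:
            print("response time is the symptom: consider faster drives or SSD")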

Beyond these basic system utilities, there are tools available from companies like Confio and Tek-Tools that can analyze storage performance from the application itself, through the server and the network, and on to the storage. Building that complete picture has great value when you are trying to prove whether the application truly needs greater storage performance or whether better programming would solve the problem.
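
None of this replaces a full application-to-spindle monitoring tool, but even a rough application-side measurement helps build that picture. The Python sketch below is a hypothetical illustration (the file path, block size, and sample count are assumptions): it times small reads through the normal application I/O path so the result can be compared with what the array or PerfMon reports, and a large gap implicates the server, network, or application rather than the drives.

    import os
    import time

    def app_side_read_latency(path, block_size=8192, samples=1000):
        # Time small reads as the application would see them. Operating system
        # caching can hide the storage entirely, so treat the result as
        # indicative only, not as a precise storage measurement.
        size = os.path.getsize(path)
        latencies = []
        with open(path, "rb", buffering=0) as f:
            for i in range(samples):
                offset = (i * 7919 * block_size) % max(size - block_size, 1)
                f.seek(offset)
                start = time.perf_counter()
                f.read(block_size)
                latencies.append(time.perf_counter() - start)
        latencies.sort()
        return {
            "avg_ms": 1000 * sum(latencies) / len(latencies),
            "p95_ms": 1000 * latencies[int(0.95 * len(latencies)) - 1],
        }

    if __name__ == "__main__":
        # Hypothetical data file; point this at something the application reads.
        print(app_side_read_latency("testfile.dat"))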

Once it is determined that the application could take advantage of improved storage performance, there are several areas to look at: wider storage bandwidth, faster storage controllers and, of course, more and faster drives. While there is no single correct order for these upgrades, in most cases the first move is to add more or higher speed drives, and that can have unpredictable results. Instead, performance has to be examined holistically.
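
As a simple illustration of why drive count and drive speed solve different problems, here is a back-of-the-envelope Python sketch. The per-drive IOPS figures, RAID write penalties, and the example workload are rough assumptions, not measured values or vendor specifications.

    # Rough rules of thumb for small random I/O per spinning drive.
    DRIVE_IOPS = {"7.2K SATA": 80, "10K SAS": 125, "15K FC/SAS": 180}
    RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

    def spindles_needed(read_iops, write_iops, drive, raid):
        # Drives required to absorb the workload's back-end IOPS.
        backend_iops = read_iops + write_iops * RAID_WRITE_PENALTY[raid]
        return -(-backend_iops // DRIVE_IOPS[drive])  # ceiling division

    if __name__ == "__main__":
        # Hypothetical workload: 3,000 reads/s and 1,000 writes/s.
        n = spindles_needed(3000, 1000, "15K FC/SAS", "RAID5")
        print(f"~{n} x 15K drives to satisfy the IOPS demand")
        # More spindles raise the IOPS ceiling, but each I/O still pays the
        # drive's seek and rotational latency (several milliseconds). If
        # response time is the complaint, faster media is the lever, not count.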

In most cases storage bandwidth is not the problem: most SANs today run at 4Gb/s Fibre Channel and will slowly migrate to 8Gb Fibre Channel or 10Gb FCoE over the next few years. As we discuss in "What's Causing the Storage IO Bottleneck?", the storage controllers themselves can also be the performance bottleneck, either because the processors are overloaded managing the array or because of limited bandwidth into and out of the controller head. Finally, of course, there are the drive mechanics themselves; as mentioned earlier, depending on whether queue depth or response time is the symptom, adding more drives or faster drives can solve the problem.
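
A quick bit of arithmetic shows why the link itself is rarely the limit. The workload profile in this Python sketch is hypothetical, and the usable-throughput figures are approximations.

    # Approximate usable throughput per port, in MB/s.
    LINK_MB_S = {"4Gb FC": 400, "8Gb FC": 800, "10Gb FCoE": 1200}

    def workload_mb_s(iops, io_size_kb):
        return iops * io_size_kb / 1024

    if __name__ == "__main__":
        # Hypothetical OLTP-style workload: 10,000 IOPS of 8 KB random I/O.
        demand = workload_mb_s(10_000, 8)
        for link, cap in LINK_MB_S.items():
            print(f"{link}: {demand:.0f} of {cap} MB/s used ({100*demand/cap:.0f}%)")
        # 10,000 random IOPS would need dozens of spinning drives, yet it moves
        # only ~78 MB/s -- a fraction of a single 4Gb FC port. The drives and
        # the controller hit their limits long before the wire does.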

Over the next few entries we will take a deeper dive into each of these issues. For now, know that understanding storage performance is about more than throwing drives at the problem, and that evaluating the whole storage infrastructure is required to address performance without breaking the budget.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
