News
10/7/2009 02:50 PM
George Crump
Commentary

Understanding Storage Performance

For most storage managers, improving storage performance is an endless loop of upgrades applied until the problem goes away. Decisions about where to look and how to configure the environment are often a series of "best guesses" rather than the product of a thorough understanding of the infrastructure. In today's economy, best guesses are not good enough. Making the right move, the first time, is critical.

The first step, as always, is to understand the nature of the problem. Do you really have a storage performance issue, or do you have a badly behaved application? Nearly everything you do to the storage infrastructure to improve performance is going to cost money, in some cases a lot of money, so before you spend it you want to be sure you will see a difference. Unfortunately, confirming that an application really could take advantage of more storage performance can be difficult. You can start with some of the steps we discuss in our "Visual SSD Readiness" guide, where we cover using utilities like PerfMon to determine whether the application is building up enough queued storage I/O requests to justify a higher drive count, or whether the problem is really one of response time that is better addressed by faster drives or solid state disk technology.
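The queue-depth-versus-response-time distinction above can be reduced to a simple decision rule. The sketch below is illustrative only: the function name, the two-outstanding-requests-per-spindle rule of thumb, and the 20 ms latency threshold are all assumptions for the example, not guidance from any vendor or from the "Visual SSD Readiness" guide.

```python
# Illustrative sketch: classifying a storage bottleneck from two PerfMon-style
# counters. Thresholds here are hypothetical examples, not vendor guidance.

def classify_bottleneck(avg_queue_depth, avg_latency_ms, spindle_count,
                        queue_per_spindle=2.0, latency_threshold_ms=20.0):
    """Return a rough diagnosis from sampled counters.

    avg_queue_depth  -- e.g. PerfMon "Avg. Disk Queue Length"
    avg_latency_ms   -- e.g. PerfMon "Avg. Disk sec/Transfer" in milliseconds
    spindle_count    -- number of drives backing the volume
    """
    deep_queue = avg_queue_depth > queue_per_spindle * spindle_count
    slow_response = avg_latency_ms > latency_threshold_ms

    if deep_queue and not slow_response:
        return "queue-bound: more spindles may help"
    if slow_response and not deep_queue:
        return "latency-bound: faster drives or SSD may help"
    if deep_queue and slow_response:
        return "both: examine controller and fabric as well"
    return "no clear storage bottleneck: profile the application"

# A queue of 40 requests across 8 drives, but only 8 ms average response time:
print(classify_bottleneck(40, 8, 8))  # queue-bound: more spindles may help
```

In practice you would sample these counters over a representative workload window rather than a single reading, since momentary spikes in queue depth are normal.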

Beyond these basic system utilities, there are tools available from companies like Confio and Tek-Tools that can analyze storage performance from the application itself, through the server, through the network, and on to the storage. Creating a complete picture has great value when trying to prove whether the application truly needs greater storage performance or whether better programming should be applied instead.

Once it is determined that the application could take advantage of improved storage performance, there are several areas to look at: wider storage bandwidth, faster storage controllers, and of course more and faster drives. While there is no single correct order for these upgrades, in most cases the first attempt is to add more or higher speed drives. This can have unpredictable results; instead, performance has to be examined holistically.

In most cases storage bandwidth is not the problem: most SANs today run 4Gb Fibre Channel and will slowly migrate to 8Gb FC or 10Gb FCoE over the next few years. As we discuss in "What's Causing the Storage IO Bottleneck?", the storage controllers themselves can also be the performance bottleneck, either through too much processor load managing the array or through limited bandwidth into and out of the controller head. Finally, of course, there are the drive mechanics themselves. As mentioned earlier, depending on whether the symptom is queue depth or response time, adding more drives or faster drives can solve the problem.
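Why bandwidth is usually not the bottleneck is easy to see with back-of-the-envelope arithmetic. The sketch below is purely illustrative: the assumed 80 MB/s sustained throughput per drive and the ~80% usable-rate figure (from 8b/10b encoding on 4Gb/8Gb FC) are example assumptions, not measurements from any particular array.

```python
# Back-of-the-envelope sketch: how many sequentially streaming drives it takes
# to fill one Fibre Channel link. All figures are illustrative assumptions.

def drives_to_saturate_link(link_gbps, mb_per_s_per_drive, usable_fraction=0.8):
    """Estimate the drive count that saturates one FC link.

    link_gbps          -- nominal link rate in gigabits per second
    mb_per_s_per_drive -- assumed sustained throughput of one drive
    usable_fraction    -- 8b/10b encoding leaves roughly 80% usable on 4Gb/8Gb FC
    """
    usable_mb_per_s = link_gbps * 1000 / 8 * usable_fraction  # bits -> megabytes
    return usable_mb_per_s / mb_per_s_per_drive

# Assuming ~80 MB/s sustained per drive, a 4Gb link tops out around 5 drives:
print(round(drives_to_saturate_link(4, 80)))  # -> 5
```

Even so, most real workloads are random rather than sequential, in which case per-drive throughput drops dramatically and the link has even more headroom. That is why the controller and the drives, not the fabric, are usually where the bottleneck lives.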

Over the next few entries we will take a deeper dive into each of these issues. For now, know that understanding storage performance is more than just throwing drives at the problem, and that evaluating the whole storage infrastructure is required to address storage performance without breaking the budget.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
