
News

2/22/2011
12:25 PM
George Crump
Commentary

Solving Scale Out Storage's Dark Side

In a recent entry we discussed a concern with scale-out storage: making sure the utilization of processing, power, and resources remains efficient. The last thing you want is a storage system that, while it can scale to limitless capacity, also requires limitless power and data center floor space. The good news is that some vendors are aware of these concerns and have solutions for you to consider.

Scale-out storage gets its name because, typically, as you add another node to the cluster, processing performance and capacity scale in unison. The advantage is that you don't need to buy all the horsepower upfront in anticipation of future storage I/O or capacity needs.

The problem is that as the node count continues to grow, one of those variables often gets out of balance. Most environments have either a heavier need for capacity or a heavier need for performance, not both equally. Typically, as the node count increases, the average utilization of the cluster's processing power declines. The problem will only get worse as these nodes, which are typically Intel servers with internal storage, gain processing performance through standard technology upgrades. The result is that an increasing amount of these systems' resources may go unused.
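To make the imbalance concrete, here is a minimal back-of-envelope model in Python. The per-node capacity and I/O figures are purely hypothetical assumptions, not numbers from any vendor; the point is only the shape of the curve when a cluster is sized by capacity while the I/O workload stays flat.

```python
# Back-of-envelope model of the scale-out imbalance described above.
# All figures are hypothetical assumptions, not vendor numbers.

NODE_CAPACITY_TB = 24   # usable capacity each node adds (assumed)
NODE_IOPS = 50_000      # I/O the node's processors could service (assumed)

def cluster_utilization(capacity_needed_tb, workload_iops):
    """Size the cluster by capacity, then see how busy its CPUs end up."""
    nodes = -(-capacity_needed_tb // NODE_CAPACITY_TB)  # ceiling division
    return nodes, workload_iops / (nodes * NODE_IOPS)

# Capacity demand quadruples while the I/O workload stays flat.
for capacity_tb in (100, 200, 400):
    nodes, util = cluster_utilization(capacity_tb, workload_iops=150_000)
    print(f"{capacity_tb} TB -> {nodes} nodes, CPUs ~{util:.0%} busy")
```

Run as-is, the model shows average CPU utilization sliding from roughly 60% to under 20% as capacity demand quadruples, which is exactly the pattern described above.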

For many storage managers, though, the trade-off in efficiency is worth the gain in cost-effective scalability. After all, processing performance is relatively cheap today, and power is a budgetable item that many can choose to live with. However, as scale-out storage systems grow in size, or as they try to appeal to small and mid-range data centers, suppliers need to address the challenge of poor resource utilization. The goal should be to increase the efficiency of each node in terms of processing resources, power, and footprint.

There are ways to make scale-out storage systems more resource, space, and power efficient. First, if the scale-out storage system is going to be focused on non-performance-sensitive environments, like most file sharing use cases, then using a lower-power processor might be a good alternative. For example, we were recently briefed by a supplier that used Intel's Atom processors. Using this processor increases the utilization per node while decreasing power consumption. Smart packaging will also allow these cooler (from a temperature perspective) processors to run in tighter spaces, which leads to better floor space efficiency as well.
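A rough sketch of the power argument, using assumed wattage figures rather than vendor specs, for a file-serving cluster whose nodes are I/O-bound rather than CPU-bound:

```python
# Rough power comparison for an I/O-bound file-serving cluster.
# Wattage figures are assumptions for illustration, not vendor specs.

SERVER_CLASS_WATTS = 350   # assumed draw per conventional server node
LOW_POWER_WATTS = 80       # assumed draw per Atom-class node

NODES = 12
for label, watts in (("server-class", SERVER_CLASS_WATTS),
                     ("Atom-class", LOW_POWER_WATTS)):
    kwh_per_year = watts * NODES * 24 * 365 / 1000
    print(f"{label}: {watts * NODES} W total, ~{kwh_per_year:,.0f} kWh/year")
```

Under these assumptions the low-power cluster draws under a quarter of the energy for the same node count, before counting the cooling savings that the tighter packaging depends on.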

Another option is, instead of making all the CPU resources of a physical server a single node, to subdivide that server into multiple nodes. For example, making each processor core its own node is gaining popularity, as we discuss in a recent briefing report. That means for the same investment in server hardware you could end up with four or more nodes, all working much harder and using their resources more efficiently. We were also recently briefed on another way to accomplish this: run the clustered storage software as a virtual machine and allocate one virtual machine on each physical host to assemble your scale-out storage. This would mean zero additional footprint.
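A conceptual sketch of the core-per-node idea follows; this is not any vendor's actual software, just an illustration of one physical server being carved into several logical nodes, each tied to a core with its own slice of the internal drives.

```python
# Conceptual sketch (not any vendor's implementation) of treating each
# core of a physical server as an independent storage node, so one box
# contributes several busy cluster members instead of one idle one.

import os
from dataclasses import dataclass

@dataclass
class StorageNode:
    node_id: str
    core: int      # core the node's service process would be pinned to
    drives: list   # internal drives assigned to this node

def carve_server_into_nodes(hostname, drives):
    """Split one physical server into one logical node per core."""
    cores = os.cpu_count() or 1
    per_node = max(1, len(drives) // cores)
    return [
        StorageNode(f"{hostname}-n{core}", core,
                    drives[core * per_node:(core + 1) * per_node])
        for core in range(cores)
    ]

for node in carve_server_into_nodes("srv01",
                                    [f"/dev/sd{c}" for c in "abcdefgh"]):
    print(node.node_id, "core", node.core, "drives", node.drives)
```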

A final option is simply to raise the expected workload of each node. Typically, when you need more performance out of a scale-out cluster, the answer is to add more nodes so that more drives are available to respond to I/O requests. The individual nodes are not working harder; there are just more nodes to process the I/O. Mechanical storage inside a node will struggle to keep that node's processor busy; there is too much latency. By comparison, a very low-latency technology like solid state drives can keep the node's processors very busy. The result is fewer nodes working harder, delivering the same or better I/O rates.
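The node-count arithmetic behind this is straightforward. Using assumed per-drive random I/O rates (a mechanical drive on the order of a couple hundred IOPS, a solid state drive in the tens of thousands), a quick calculation shows the difference:

```python
# Back-of-envelope node-count math. Per-drive I/O rates are assumed
# ballpark figures for random I/O, not benchmarks of specific products.

import math

TARGET_IOPS = 200_000
DRIVES_PER_NODE = 12
HDD_IOPS, SSD_IOPS = 150, 30_000   # assumed per-drive random I/O rates

for label, drive_iops in (("HDD nodes", HDD_IOPS), ("SSD nodes", SSD_IOPS)):
    nodes = math.ceil(TARGET_IOPS / (drive_iops * DRIVES_PER_NODE))
    print(f"{label}: {nodes} node(s) to reach {TARGET_IOPS:,} IOPS")
```

With these assumptions the disk-based design needs over a hundred nodes to hit the target, while the solid state design gets there with a single node working near its limit.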

The other option is to provide better scaling within traditional non-scale-out (often called scale-up) architectures. We will cover those options in an upcoming entry.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
