2010 Storage Trends: Scale Out Storage
This time of year I am always asked which storage trends will take off during the next year. I often resist answering because it is very hard to get right. What I try to do instead is identify what is likely to gain traction in the coming year. Over the next few entries we will explore some of the 2010 storage trends that you ought to be paying attention to. One of those is scale out storage.
Scale out storage solutions have become a popular way to balance the need for cost effective storage with the need for storage that can scale to meet future performance demands. Scale out solutions, also called clustered or grid storage systems, give users the ability to independently scale storage processing, capacity and bandwidth.
Scale out storage can be built on specialized technology, as with 3PAR, or on a standard Intel architecture, as with EMC's VMAX, IBM's XIV and HP's LeftHand Networks. On the NAS side, companies like Isilon Systems and Symantec have similar architectures, and for backup and archive there are storage companies like Permabit, ExaGrid and Sepaton. What you'll notice from this list is that scale out storage, once the domain of start-ups, now has the attention of the major storage manufacturers.
The grid or clustering capabilities of these systems should allow all of the processing and bandwidth in the cluster to be brought to bear across the entire storage platform. This means that as you scale these systems you do not end up with a loose collection of independent storage systems tied together by a central management interface; in a grid or clustered storage system you are dealing with one entity.
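To make that distinction concrete, here is a minimal sketch in Python. The node names and the capacity and bandwidth figures are made up for illustration and do not reflect any vendor's actual design; the point is simply that in a grid every node's resources roll up into one aggregate pool rather than another storage island.

```python
# Minimal illustrative sketch of a clustered/grid storage system.
# All names and numbers are hypothetical, not any vendor's actual design.

class StorageNode:
    """One node contributing capacity (TB) and bandwidth (MB/s) to the grid."""
    def __init__(self, name, capacity_tb, bandwidth_mbps):
        self.name = name
        self.capacity_tb = capacity_tb
        self.bandwidth_mbps = bandwidth_mbps

class StorageGrid:
    """The cluster is managed as one entity: adding a node grows the
    aggregate pool instead of creating a new, separately managed system."""
    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        self.nodes.append(node)

    @property
    def capacity_tb(self):
        return sum(n.capacity_tb for n in self.nodes)

    @property
    def bandwidth_mbps(self):
        return sum(n.bandwidth_mbps for n in self.nodes)

grid = StorageGrid()
grid.add_node(StorageNode("node-1", capacity_tb=24, bandwidth_mbps=800))
grid.add_node(StorageNode("node-2", capacity_tb=24, bandwidth_mbps=800))
# Scaling out: one more node raises capacity AND bandwidth for the whole pool.
grid.add_node(StorageNode("node-3", capacity_tb=24, bandwidth_mbps=800))
print(grid.capacity_tb, "TB,", grid.bandwidth_mbps, "MB/s")  # 72 TB, 2400 MB/s
```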
What makes these systems so appealing is that you can buy just the capacity and performance you need now, then add performance, capacity or bandwidth when you need it. This compares favorably to the traditional dual controller or single head environment. Once those systems reach their performance maximum they require the purchase of another system or an upgrade to a faster one. In most cases you are forced to over-buy capacity and performance up front and hope that you don't outgrow the system too quickly. Either way you may end up paying extra for capabilities that you won't need for a few more years, and if you could have waited those few years to buy that capacity or performance, it would have been less expensive.
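As a back-of-the-envelope illustration of that last point, assume storage prices fall roughly 25% per year (both the prices and the decline rate here are hypothetical, chosen only to show the shape of the math):

```python
# Hypothetical numbers only: illustrates why deferring capacity purchases
# can be cheaper when the price per TB declines over time.

price_per_tb = 1000.0     # assumed price today, $/TB
annual_decline = 0.25     # assumed yearly price drop (25%)

# Scale-up: forced to over-buy 100 TB on day one.
upfront_cost = 100 * price_per_tb

# Scale-out: buy 50 TB now, add 50 TB in year 2 at the then-current price.
price_in_2_years = price_per_tb * (1 - annual_decline) ** 2
incremental_cost = 50 * price_per_tb + 50 * price_in_2_years

print(f"Buy all now:     ${upfront_cost:,.0f}")      # $100,000
print(f"Buy as you grow: ${incremental_cost:,.0f}")  # $78,125
```

Under these assumptions the incremental buyer spends about 22% less for the same 100 TB, before even counting the deferred performance spend.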
This is not to say that the classic storage architecture is dead and should be abandoned. If a single enclosure system can provide all the performance and capacity you can foresee ever needing, then these systems are fine and should be less expensive initially. The range that classic storage architectures can cover continues to increase as the processing power and capacity per box ride the technology wave.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.