Dark Reading is part of the Informa Tech Division of Informa PLC



08:45 AM
George Crump

Optimize Cloud Storage, Flash Storage And Deduplication

In our last entry we discussed the growing importance of efficiency. Tools and better storage systems can help make IT Administrators more efficient. The other option is to keep throwing new technology at the problem. Cloud Storage, Flash Storage and Deduplication are great examples.

While all these technologies can claim efficiency on the CapEx side of the equation, without proper tools and procedures they can't claim much from an OpEx standpoint. Cloud storage can significantly reduce internal storage management costs, and as we discuss in our recent article on The Evolution of Data Archiving, for some customers it is an ideal target for archive storage. Without tools to identify and move data to that archive, however, manually guessing which data can be archived is too time-consuming.

Storage systems play an enormous role in increasing storage efficiency. Companies like 3PAR, HDS, NetApp and DataCore provide virtualization that removes the need to plan and manage LUNs. Storage is grouped into a pool and used as needed by the servers attaching to it. Thin provisioning reduces CapEx by deferring storage allocation and saves the time otherwise spent planning exact LUN sizes. Multi-protocol support provides the flexibility to connect servers to storage by whatever means are appropriate and affordable.
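The economics of thin provisioning are easy to see in a toy model. This is a minimal sketch, not any vendor's API: volumes advertise a logical size up front, but physical pool capacity is consumed only as data is actually written, so volumes can be oversubscribed against the pool.

```python
# Illustrative sketch of thin provisioning; class and method names are
# invented for this example, not taken from any product.

class ThinPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.used_gb = 0
        self.volumes = {}

    def create_volume(self, name, logical_gb):
        # Creation is free: no physical capacity is reserved up front.
        self.volumes[name] = {"logical_gb": logical_gb, "written_gb": 0}

    def write(self, name, gb):
        vol = self.volumes[name]
        if vol["written_gb"] + gb > vol["logical_gb"]:
            raise ValueError("write exceeds the volume's logical size")
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted: add physical capacity")
        # Physical capacity is consumed only when data is written.
        vol["written_gb"] += gb
        self.used_gb += gb

pool = ThinPool(physical_gb=100)
pool.create_volume("db", logical_gb=500)  # oversubscribed, and that's fine
pool.write("db", 40)
print(pool.used_gb)  # only the 40 GB actually written consumes the pool
```

The catch, of course, is the `RuntimeError` branch: oversubscription only works when someone is monitoring pool consumption and adding capacity before writes start failing, which is exactly the tools-and-procedures point above.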

Cloud storage providers need this operational flexibility to meet the unpredictable growth requirements they may face. Users of cloud storage need equal flexibility from their storage solutions so they can migrate data from primary systems to secondary systems. Tools like those from Tek-Tools, APTARE and others focus on capacity management and can identify data that is a valid candidate for migration to the cloud.
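At its simplest, identifying archive candidates means finding data nobody has touched in a long time. The sketch below illustrates the idea with a last-access scan; the real capacity-management tools mentioned above use much richer metadata, and the root path and 365-day cutoff here are assumptions for the example.

```python
# Hedged sketch: flag files as cloud/archive migration candidates when
# they have been neither accessed nor modified within the cutoff window.
import os
import time

def archive_candidates(root, days_idle=365):
    cutoff = time.time() - days_idle * 86400
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if st.st_atime < cutoff and st.st_mtime < cutoff:
                yield path, st.st_size

# Example usage (path is hypothetical):
# reclaimable = sum(size for _, size in archive_candidates("/data"))
```

Note that this depends on access times being recorded at all; filesystems mounted with `noatime` would need a modification-time-only policy instead.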

Performance-centric tools like those from Virtual Instruments and NetApp's SANscreen determine when you are ready for flash storage and which workloads should be moved to it. With continuous monitoring, they can also tell you when to move a workload OFF of SSD, once the performance demand no longer justifies it. Because of the cost delta, unused SSD capacity is unwelcome, as are applications that can't take advantage of the device's performance. Monitoring allows you to keep the SSD full of application data that can.
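The promote/demote decision those monitoring tools drive can be sketched as a simple policy with two thresholds and a hysteresis band in between, so workloads aren't bounced between tiers on every fluctuation. The threshold values and tier names here are invented for illustration, not drawn from any product.

```python
# Illustrative tiering policy: keep a workload on SSD only while its
# measured demand justifies the cost. Thresholds are assumptions.

SSD_PROMOTE_IOPS = 5000  # promote when sustained demand exceeds this
SSD_DEMOTE_IOPS = 1000   # demote when demand falls below this

def placement(current_tier, measured_iops):
    if current_tier == "hdd" and measured_iops > SSD_PROMOTE_IOPS:
        return "ssd"
    if current_tier == "ssd" and measured_iops < SSD_DEMOTE_IOPS:
        return "hdd"  # free SSD capacity for workloads that need it
    return current_tier  # inside the hysteresis band: leave it alone

print(placement("hdd", 8000))  # busy workload earns SSD
print(placement("ssd", 400))   # idle workload gives it back
print(placement("ssd", 3000))  # middling demand: no churn
```

The gap between the two thresholds is the design choice that matters: a single threshold would thrash a workload whose demand hovers near it.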

The same goes for deduplication; it should not be applied universally, especially on primary storage. It should be applied where there is enough redundant data to justify any performance impact that may occur. An exception here is real-time data compression like that offered by Storwize, which can be applied universally with no performance impact, decreasing CapEx without adversely affecting OpEx.
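Whether a data set has "enough redundant data" is measurable before you commit. A rough sketch of the idea: chunk the data, fingerprint each chunk, and compare total chunks to unique chunks. This uses fixed-size chunking for simplicity; real products typically use variable-size chunking, but the ratio concept is the same.

```python
# Sketch: estimate a deduplication ratio by fingerprinting fixed-size
# chunks. A ratio near 1.0 means dedupe won't pay for its overhead.
import hashlib

def dedupe_ratio(data, chunk_size=4096):
    seen = set()
    total = unique = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        total += 1
        digest = hashlib.sha256(chunk).digest()
        if digest not in seen:
            seen.add(digest)
            unique += 1
    return total / unique if unique else 1.0

# Highly redundant data (ten identical 4 KB chunks) dedupes 10:1:
print(dedupe_ratio(b"A" * 4096 * 10))  # 10.0
```

Backup streams full of repeated weekly fulls score high on a measure like this; encrypted or already-compressed primary data scores near 1.0, which is exactly where the performance impact isn't justified.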

The net here, as stated in my last post, is that staff efficiency through tools and systems has to come first or in parallel with any CapEx-reducing initiatives, and that both initiatives are critical for 2009.

Track us on Twitter: http://twitter.com/storageswiss.

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
