
News

6/25/2010
02:42 PM
George Crump
Commentary

The Types Of SSD Cache

In our last entry we discussed the value of using solid state disk (SSD) as a cache, which provides a simpler on-ramp to the accelerated world of SSD. With an SSD cache, few or no changes are needed to applications, and using SSD as a cache does not require a large capacity investment in the more premium-priced technology.

SSD as cache can benefit businesses of all sizes, whether you are a small business trying to get better performance out of your Exchange environment or a large one trying to optimize your Oracle environment. It can also be an effective money saver, delivering improved performance without having to add more or faster drives. Suppliers have been quick to jump on this market opportunity, and there are now several cache solutions for direct attached storage, network attached storage (NAS) and storage area networks (SAN).

The simplest case is to use an internal SSD, either as a drive or a PCIe card, inside a server. Several RAID controller manufacturers have added the ability for an SSD drive to act as a cache in front of the drives attached to the RAID controller. For the cost of a single SSD drive, or two for redundancy, we've seen dramatic improvements in performance.

The SAN can also benefit from cache, and the solution does not necessarily have to come from your storage vendor. Several storage virtualization products have the capability to set aside SSD drives as a caching area for reads and, in some cases, even writes to existing storage arrays. There are also a few products that are essentially caching switches and don't require a storage virtualization engine. As with the server example, this can greatly improve performance.
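
To make the mechanism concrete, here is a minimal sketch, in Python and purely for illustration, of the read-caching behavior described above. Real controller and virtualization-engine caches do this at the block level in firmware or kernel code; the names and capacities here are assumptions. The logic is simple: check the SSD first, fall back to the mechanical drives on a miss, and evict the least recently used block when the SSD fills.

    # Illustrative sketch only -- not any vendor's implementation.
    from collections import OrderedDict

    class SSDReadCache:
        def __init__(self, backing_store, capacity_blocks):
            self.backing = backing_store        # slower mechanical storage (dict-like)
            self.capacity = capacity_blocks     # how many blocks fit on the SSD
            self.ssd = OrderedDict()            # block number -> data, kept in LRU order

        def read(self, block):
            if block in self.ssd:               # cache hit: serve from the SSD
                self.ssd.move_to_end(block)
                return self.ssd[block]
            data = self.backing[block]          # cache miss: go to the disks
            self.ssd[block] = data              # populate the cache for next time
            if len(self.ssd) > self.capacity:   # evict the least recently used block
                self.ssd.popitem(last=False)
            return data

    # Usage: a 1,000-block backing store fronted by a 100-block SSD cache.
    disks = {n: f"data-{n}" for n in range(1000)}
    cache = SSDReadCache(disks, capacity_blocks=100)
    print(cache.read(42))   # miss: read from disk, copied to SSD
    print(cache.read(42))   # hit: served from SSD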

The above two examples cover the block world, but NAS systems do not need to be left out. There are several devices that can be placed in front of NAS systems to cache CIFS and NFS traffic as it goes across the network. While many assume that the network is the bottleneck in high-performance NAS environments, the thrashing of disk I/O is sometimes the root of the problem; caching should help alleviate that.
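
Some back-of-the-envelope numbers show why the disks, rather than the wire, are often the limit for small random I/O. Every figure below (link speed, drive count, per-drive IOPS, I/O size) is an assumed ballpark value, not a measurement:

    # Assumed ballpark figures -- substitute your own environment's numbers.
    nic_throughput_mb_s = 10_000 / 8 * 0.9   # 10GbE link at ~90% efficiency -> ~1,125 MB/s
    drives = 24                              # assumed spindle count behind the NAS head
    iops_per_drive = 150                     # assumed random-read IOPS for a 7,200 RPM drive
    io_size_kb = 8                           # small random reads

    disk_random_mb_s = drives * iops_per_drive * io_size_kb / 1024
    print(f"Network ceiling  : ~{nic_throughput_mb_s:,.0f} MB/s")
    print(f"Disk random reads: ~{disk_random_mb_s:,.0f} MB/s")   # ~28 MB/s, far below the wire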

Most cache systems are temporal in nature, meaning they hold a unique copy of data only for a very short period of time, typically measured in seconds. In fact, read-only caches never hold a unique copy of data. We are seeing caching devices that are less temporal in nature, meaning they will store a unique copy of data for a longer period of time, maybe minutes or even hours. This is a very interesting development in storage and, as we discuss in our article, Architecting Storage Networks for Data Delivery vs. Data Services, it has the potential to relegate the current name-brand NAS players to deliverers of software components, further commoditizing mechanical storage systems.
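
A simplified sketch of that distinction, with assumed names and no error handling: a write-through (or read-only) cache never holds the only copy of a block, while a write-back cache holds "dirty" blocks as the sole current copy until they are de-staged to the backing disks, which is what makes it less temporal.

    # Illustrative model of the temporal distinction described above.
    class WriteThroughCache:
        """Never holds a unique copy: every write also goes straight to the disks."""
        def __init__(self, backing):
            self.backing, self.ssd = backing, {}

        def write(self, block, data):
            self.ssd[block] = data        # keep a copy on the SSD for fast reads...
            self.backing[block] = data    # ...but the disks are updated immediately

    class WriteBackCache:
        """Holds unique ("dirty") copies until flushed, seconds to hours later."""
        def __init__(self, backing):
            self.backing, self.ssd, self.dirty = backing, {}, set()

        def write(self, block, data):
            self.ssd[block] = data        # the SSD now has the only current copy
            self.dirty.add(block)

        def flush(self):
            for block in self.dirty:      # de-stage dirty blocks to the disks
                self.backing[block] = self.ssd[block]
            self.dirty.clear()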

Probably the more obvious payoff of using SSD as a large cache is fixing a performance problem more cost effectively than adding more mechanical drives or replacing those drives with faster ones. Less obvious is using these solutions on an initial purchase: they could very easily allow you to buy a mid-range SAN or NAS instead of a more expensive high-end system.
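
A rough, hypothetical comparison makes the point. Every price and IOPS figure below is an assumption for illustration, not a quote, and the math only works if the active working set actually fits in the cache:

    # Hypothetical numbers only -- adjust for your own quotes and workloads.
    target_iops = 20_000
    iops_per_15k_drive = 180          # assumed 15K RPM drive
    cost_per_15k_drive = 400          # assumed street price (USD)
    ssd_cache_cost = 1_500            # assumed enterprise SSD used as cache

    drives_needed = -(-target_iops // iops_per_15k_drive)   # ceiling division -> 112 drives
    print(f"Spindles : {drives_needed} drives, ~${drives_needed * cost_per_15k_drive:,}")
    print(f"SSD cache: 1 drive (2 for redundancy), ~${ssd_cache_cost:,}-${2 * ssd_cache_cost:,}")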

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
