News

10/29/2013
01:39 PM
George Crump
Commentary

Quick Guide To Flash Storage Latency Wars

Because latency is the key performance differentiator in server-side flash, SSD, PCIe and memory bus flash storage vendors are competing on speed.

Server-side flash is all about performance, and the key performance differentiator is latency. Latency is the time required to complete a transaction between a host and a storage system, such as a read or write operation. As a result, a "latency war" has broken out in the flash community over the best way to drive it down.
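For readers who want to put a number on that, here is a minimal sketch of how one might time a single read at the device level. It assumes a Linux host and a device node such as /dev/sdb (hypothetical; substitute your own), and it uses O_DIRECT so the measurement reflects the device and driver stack rather than the page cache.

/* Minimal latency probe: time one 4 KB direct read from a block device.
 * Assumptions (not from the article): Linux, a readable /dev/sdb. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const size_t blk = 4096;              /* one 4 KB read per sample      */
    void *buf;
    if (posix_memalign(&buf, blk, blk))   /* O_DIRECT requires aligned I/O */
        return 1;

    int fd = open("/dev/sdb", O_RDONLY | O_DIRECT);   /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    if (pread(fd, buf, blk, 0) != (ssize_t)blk) { perror("pread"); return 1; }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("one 4 KB read took %.1f microseconds\n", us);

    close(fd);
    free(buf);
    return 0;
}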

Latency actually has several levels. The first level is the time it takes the media to get into position and be ready to respond to an I/O request. With a hard drive, this latency was the end of the discussion: the milliseconds it took for the drive to get into position overshadowed any other latency in the storage communication chain.
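Where do those milliseconds come from? A back-of-the-envelope sketch, using an assumed 7,200 RPM spindle speed (the figure is illustrative, not from the article):

/* Average rotational latency is half a revolution. A 7,200 RPM drive
 * turns 120 times per second, so half a turn is roughly 4.2 ms --
 * before any seek time is added on top. */
#include <stdio.h>

int main(void)
{
    double rpm = 7200.0;                      /* assumed spindle speed    */
    double rev_per_sec = rpm / 60.0;          /* 120 revolutions/second   */
    double half_rev_ms = 1000.0 / rev_per_sec / 2.0;
    printf("avg rotational latency: %.1f ms\n", half_rev_ms);  /* ~4.2 ms */
    return 0;
}

Add a few more milliseconds of seek time on top of that, and everything downstream of the drive is lost in the noise.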

Solid state drives (SSDs) -- flash packaged in a hard drive form factor -- eliminated that device latency. SSDs can respond instantly because they have no moving parts and no platters to rotate. But that exposed other sources of latency in the storage protocol stack. For example, the time it takes an I/O to work its way through the overhead of SCSI became noticeable.

This led to the introduction and rapid adoption of PCIe-based flash. Most of these cards eliminated the storage protocol stack altogether. The communication was direct to the application or operating system over the PCIe bus. Eliminating the storage protocol stack meant that special drivers had to be created for the various operating systems that a data center might have.

Vendors then introduced API sets that let applications write directly to the PCIe flash card for a further reduction in latency, avoiding not only the storage I/O stack but also the operating system itself.
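The vendor APIs themselves are proprietary and differ from card to card, so the sketch below only illustrates the general pattern they follow: expose the flash to the application as a memory-mappable device (here a hypothetical /dev/flash0) so that reads and writes become plain loads and stores rather than trips through the kernel block layer.

/* Sketch of the "skip the kernel block stack" pattern. The device node
 * /dev/flash0 is hypothetical; real vendor SDKs differ. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 1UL << 20;                      /* map 1 MB of the card  */
    int fd = open("/dev/flash0", O_RDWR);        /* hypothetical node     */
    if (fd < 0) { perror("open"); return 1; }

    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(p, "hello, flash", 13);    /* a store, not a write() syscall   */
    msync(p, len, MS_SYNC);           /* ask for persistence              */

    munmap(p, len);
    close(fd);
    return 0;
}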

What PCIe began to show, though, was that there is yet another level of latency: the PCIe bus itself. The PCIe bus routes through a motherboard-based fabric that allows multiple cards to share PCIe bandwidth. Some higher-end servers have multiple PCIe hubs to route data more efficiently. But even a high-end server, when burdened with heavy I/O, can become bottlenecked on the PCIe bus, which adds latency. (Remember that PCIe supports more than storage I/O.)

The next tier in latency elimination is to remove the PCIe bus altogether. Several vendors are introducing memory-bus-based flash storage. These flash devices come in a memory DIMM form factor and can act as storage with a device driver, similar to a PCIe SSD. Even more interesting, with a tweak to the server's BIOS, they can act as main memory for the server. Imagine 400 GB of "RAM" via flash on a single DIMM. Using the memory bus provides even more I/O channels and greater bandwidth; it was, after all, designed to support DRAM.
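From the application's point of view the two modes look quite different, as the sketch below suggests. The device name is hypothetical, and the details depend entirely on the vendor's driver and the BIOS configuration.

/* Sketch of the two modes for memory-bus flash.
 * Storage mode: a driver exposes the DIMM as a block device, opened and
 * read like any other disk (the node /dev/flashdimm0 is hypothetical).
 * Memory mode: once the BIOS presents the DIMM as system RAM, no special
 * code is needed -- an ordinary allocation may simply land on flash. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Storage mode: read one sector through the vendor's block driver. */
    int fd = open("/dev/flashdimm0", O_RDONLY);
    if (fd >= 0) {
        char sector[512];
        if (pread(fd, sector, sizeof sector, 0) == (ssize_t)sizeof sector)
            printf("read one sector from the flash DIMM block device\n");
        close(fd);
    }

    /* Memory mode: the capacity just enlarges the pool malloc draws from. */
    size_t gigabyte = 1UL << 30;
    char *p = malloc(gigabyte);
    if (p) {
        p[0] = 1;               /* touch it; it behaves like ordinary RAM */
        free(p);
    }
    return 0;
}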

In both implementation modes, the use cases are very interesting. The ability to create very dense servers with terabytes (if not petabytes) of flash capacity in a 1U system changes the data center design game quite a bit.
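A quick bit of arithmetic with the 400 GB-per-DIMM figure above shows where the "terabytes in a 1U box" claim comes from; the slot count is an assumption for illustration, since real servers vary.

/* Rough density arithmetic: 400 GB per DIMM (from the article) times an
 * assumed 16 DIMM slots. */
#include <stdio.h>

int main(void)
{
    double gb_per_dimm = 400.0;   /* figure from the article              */
    int dimm_slots = 16;          /* assumed slot count                   */
    double total_tb = gb_per_dimm * dimm_slots / 1000.0;
    printf("%d slots x %.0f GB = %.1f TB of flash in one server\n",
           dimm_slots, gb_per_dimm, total_tb);   /* 6.4 TB                */
    return 0;
}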

There is no single perfect solution. SSD, PCIe flash, and memory-bus flash all have their ideal use cases, and many data centers may end up with a mixture of all three. Designing applications for extremely high performance with near-zero latency is now a reality, but these technologies are not limited to the performance fringe. For example, imagine using them to design a single physical server that supports 10,000-plus desktops. We have long had the processing power; now the latency of storage is no longer the roadblock it once was.

 

Comments
beachscape,
User Rank: Apprentice
10/31/2013 | 2:25:02 AM
re: Quick Guide To Flash Storage Latency Wars
Great summary describing latency in storage. It would be good to have a table comparing latencies of HDDs, SSDs, PCIe and flash memory DIMMs.
Joe Stanganelli,
User Rank: Ninja
10/31/2013 | 3:58:11 AM
re: Quick Guide To Flash Storage Latency Wars
Of course, this all stands to change over the next several years as SSDs gradually become obsolete (except, possibly, in hybrid setups) under Moore's Law. http://www.enterpriseefficienc...
D. Henschen,
User Rank: Apprentice
10/31/2013 | 10:49:46 AM
re: Quick Guide To Flash Storage Latency Wars
I heard an enlightening cost comparison the other day from Ari Zilka, CTO at Hortonworks: 1 TB of RAM = $70,000; 1 TB of flash = $8,000 to $20,000, depending on quality/discounts; 1 TB of hard drive = $60 to $100, depending on quality/performance. Maybe somebody would quibble with the exact figures, but it's clear there is a cost to real-time performance.
samicksha,
User Rank: Apprentice
10/31/2013 | 11:25:23 AM
re: Quick Guide To Flash Storage Latency Wars
Any review on tape storage? I don't think it's going away any time soon, as it is still the most cost-effective and simplest way to archive huge amounts of data.