
Cloud

4/24/2012
01:03 AM

Insecure API Implementations Threaten Cloud

Web and cloud services allow third-party access by exposing application programming interfaces, but many developers and customers do not adequately secure the keys to the cloud and their data, experts say

Attackers over the past three years have begun to actively target the digital keys used to secure the Internet infrastructure. Stuxnet's creators stole code-signing keys and used them to help the malware more easily evade host-based security. An alleged Iranian hacker broke into a partner of certificate authority Comodo and obtained fraudulent Secure Sockets Layer (SSL) certificates for major domains in order to eavesdrop on activists. And unknown attackers stole important information about RSA's SecurID token, a device that generates one-time codes to strengthen online security.

The unique codes that applications in the cloud use to identify one another could be next, security experts say.

So-called API keys are used by Web and cloud services to identify third-party applications using the services. If service providers are not careful, an attacker with access to the key can cause a denial-of-service or rack up fees on behalf of the victim.
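
To make the exposure concrete, the sketch below shows how an API key typically travels with a request. The endpoint, header name, and key value are hypothetical, and real services differ in the details, but the pattern is the same: whoever holds the key can make calls that the provider attributes -- and bills -- to the key's owner.

```python
# A minimal sketch of how a typical API key travels with a request.
# The endpoint, header name, and key value are hypothetical; real services
# vary (some use a query parameter or a custom header instead).
import requests

API_KEY = "d41d8cd98f00b204e9800998ecf8427e"  # identifies the calling application

try:
    resp = requests.get(
        "https://api.example-cloud.com/v1/reports",  # hypothetical endpoint
        headers={"X-Api-Key": API_KEY},              # the key is the only credential sent
        timeout=10,
    )
    print(resp.status_code)
except requests.RequestException as exc:
    print(f"request failed: {exc}")

# Anyone who obtains API_KEY can replay this call: the provider cannot tell
# the legitimate application from an attacker, so a stolen key translates
# directly into fraudulent usage billed to the victim or exhaustion of its
# rate limits -- a denial-of-service against the legitimate application.
```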

"It was created as a fairly nonauthoritative identifier -- it was only there to identify applications or the application's use of an API," says K. Scott Morrison, chief technology officer of Layer7 Technologies, a provider of Web security and governance products. "The problem is that developers have started using API keys for stuff that matters."

The problem is not any inherent weakness in the keys, but that developers use them for security when they ought not, he says. In many implementations, the keys are used to identify users, even though the technology was not meant as a way to authorize access to data. And after expanding the power of the keys, developers do not treat them as critical assets. Instead, companies fail to keep track of the keys, e-mailing them around and storing them on desktop hard drives.

"They shouldn't be used for anything that matters, but people do. And when they do, they don't take it as far as they need to," Morrison says. "It's kind of the worst of both worlds."

During a presentation at the RSA Conference earlier this year, Morrison stressed the danger in the misuse and mishandling of API keys. The warning was repeated at the recent SOURCE Boston conference by application gateway maker Vordel. An implementation that grants access to an API based solely on a secret key gives attackers unfettered access if that key can be sniffed or stolen from an authorized user's computer, said Jeremy Westerman, Vordel's director of product management, at the conference.

"There is a need to protect these cloud API keys," Westerman said. "There is a lot of awareness in the industry about protecting, say, SSL keys ... Unfortunately, protecting API keys has not reached that level of awareness."

Cloud and Web service developers must first follow best practices in opening up their APIs to third parties. In turn, third-party developers need to handle the keys securely and not, for example, hard-code an unobfuscated key into an application.
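
A minimal sketch of that advice, with illustrative names only: keep the key out of the shipped code and read it from the environment or a secrets store at runtime.

```python
# Hedged sketch: variable and environment names here are illustrative.
import os

# Risky: a plaintext key embedded in the source ships with every copy of the
# application and can be recovered by anyone who inspects the binary or repo.
# API_KEY = "d41d8cd98f00b204e9800998ecf8427e"

# Safer: keep the key out of the code and out of version control entirely.
API_KEY = os.environ.get("EXAMPLE_CLOUD_API_KEY")
if API_KEY is None:
    raise RuntimeError("EXAMPLE_CLOUD_API_KEY is not set; refusing to start")
```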

[Microsoft Research report shows how risky single sign-on can be without solid integration and better support from Web service providers like Google and Facebook. See Web Services Single Sign-On Contain Big Flaws.]

Communicating best practices can go a long way to fixing the issues, says Mark O'Neill, Vordel's chief technology officer.

"The SaaS [software-as-a-service] providers expect you to protect these keys, but they don't tell you how to protect the keys," O'Neill says.

Companies that have API keys should treat them as valued assets, he says. The keys should be handled in much the same way as code-signing keys and other encryption material.

API keys were first used by Google, Yahoo, and other early pioneers of Web services. However, as the model moved from standalone sites to Web 2.0 mashups and the companies exposed their services for use by other websites, the weaknesses of API keys quickly became evident. Companies began to implement different schemes for application and user authentication, including OAuth, the Security Assertion Markup Language (SAML), and hash-based message authentication codes (HMACs).

The stronger authentication methods should be used for securing sensitive data, and each token should have a reasonable expiration time. In addition, because secret keys are occasionally exchanged, communications should always be over SSL, says Gregory Brail, vice president of technology for Web technology and services firm Apigee.
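
The sketch below illustrates the kind of HMAC-based request signing and expiration that Brail describes. The header names and signing scheme are illustrative rather than any particular provider's API; the point is that the shared secret itself never travels with the request, and a timestamp bounds how long a captured request remains useful.

```python
# Illustrative HMAC request signing, assuming a secret provisioned out of band.
# Header names and the signing format are hypothetical, not a real provider's API.
import hashlib
import hmac
import time

SECRET = b"shared-secret-provisioned-out-of-band"

def sign_request(method: str, path: str, secret: bytes = SECRET) -> dict:
    """Client side: return headers carrying a timestamp and an HMAC-SHA256 signature."""
    timestamp = str(int(time.time()))
    message = f"{method}\n{path}\n{timestamp}".encode()
    signature = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": signature}

def verify_request(method: str, path: str, headers: dict,
                   secret: bytes = SECRET, max_age: int = 300) -> bool:
    """Server side: recompute the signature and enforce a five-minute lifetime."""
    if abs(time.time() - int(headers["X-Timestamp"])) > max_age:
        return False  # expired; limits the value of a captured request
    message = f"{method}\n{path}\n{headers['X-Timestamp']}".encode()
    expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])

# Even a signed request should still travel over SSL/TLS, since the payload
# itself remains readable to an eavesdropper.
hdrs = sign_request("GET", "/v1/reports")
assert verify_request("GET", "/v1/reports", hdrs)
```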

"The developer needs to understand the limitations and understand the best practices around implementing API keys," he says.

Developers should still use API keys, Brail says. They should just use them for their proper function and use other tools as the situation demands.

"I'm not saying that there is nothing that can go wrong here; I'm saying that this is not a reason to throw away your API keys," Brail says. "They are an important part of your whole security system."
