Commentary
4/16/2014 02:00 PM
Jeff Williams

The Real Wakeup Call From Heartbleed

There's nothing special about Heartbleed. It's another flaw in a popular library that exposed a lot of servers to attack. The danger lies in the way software libraries are built and whether they can be trusted.

In case you live under a rock, a serious security flaw was disclosed last week in the widely used OpenSSL library. On a threat scale of 1 to 10, well-known security expert Bruce Schneier rated it an 11. Essentially, an attacker can send a "heartbeat" request that tricks the server into sending random memory contents back to the attacker. If the attacker gets lucky, that memory contains interesting secrets like passwords, session IDs, Social Security numbers, or even the server's private SSL key.
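A minimal sketch, in C, of the class of mistake involved may make this concrete. This is not the actual OpenSSL code; the function and parameter names are hypothetical and the record layout is simplified.

    #include <stdlib.h>
    #include <string.h>

    /* Simplified sketch of the bug class -- NOT the actual OpenSSL source.
     * 'request' points at the received heartbeat record and 'received' is
     * the number of bytes that actually arrived. */
    unsigned char *build_heartbeat_reply(const unsigned char *request,
                                         size_t received, size_t *reply_len)
    {
        /* First two bytes carry the payload length the client *claims* to have sent. */
        size_t claimed = ((size_t)request[0] << 8) | request[1];

        (void)received;  /* the bug: the real size is never consulted */

        unsigned char *reply = malloc(claimed);
        if (reply == NULL)
            return NULL;

        /* BUG: copies 'claimed' bytes without checking them against 'received',
         * so whatever sits next to the request in memory (keys, passwords,
         * session data) is echoed back to the attacker.
         * The fix is to reject the record when claimed + 2 > received. */
        memcpy(reply, request + 2, claimed);

        *reply_len = claimed;
        return reply;
    }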

Peeking under the covers of the software world
This is not yet another article about how to check yourself for Heartbleed, how to remediate it, or how this could possibly have happened. I'm more interested in whether we learn anything from this wakeup call, or whether we just hit the snooze button again.

There was nothing particularly special about Heartbleed. It’s just another flaw in a popular library that exposed a lot of servers to attack. Let’s take a look at how these libraries are built and ask whether they can be trusted with our finances, businesses, healthcare, defense, government, energy, travel, relationships, and even happiness.

Libraries are eating the world
In 2011, Marc Andreessen, who co-founded Netscape, wrote an excellent essay, "Software Is Eating the World," in which he describes how whole industries, like photography, film, marketing, and telecom, are being devoured by software-based companies. I credit the widespread availability of powerful libraries with enabling developers to create incredible software much more quickly than they could on their own. In fact, new tools that provide what is known as "automated dependency resolution" allow libraries to build on other libraries, magnifying the "standing on the shoulders of giants" effect.

Today, there are 648,740 different libraries in the Central Repository, a sort of open-source clearinghouse where developers can download software components for use in their applications. A typical web application or web service uses between a few dozen and a few hundred of these components. Remember, all of these components have the ability to do anything that the application can do. A component that is supposed to draw buttons is capable of accessing the database. A math library is capable of reading and writing files on the server. So a vulnerability in any of these libraries can expose the entire enterprise.
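As a contrived, hypothetical illustration (the "button" helper and the file path below are invented, not taken from any real component), nothing in the platform stops a UI component from also reading server files, because it runs inside the application process with the application's full privileges:

    #include <stdio.h>

    /* Hypothetical "UI helper" from a third-party library. It runs in the
     * application process, so it has every privilege the application has. */
    void draw_button(const char *label)
    {
        printf("[ %s ]\n", label);

        /* Nothing prevents a "button" component from also doing this: */
        FILE *f = fopen("/etc/passwd", "r");
        if (f != NULL) {
            /* ...read, log, or transmit server data the caller never intended... */
            fclose(f);
        }
    }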

The zero-assurance software supply chain
You can think of all this code as a sort of supply chain for software. Modern applications are assembled from components, custom business logic, and a lot of "glue" code. In the real world, supply chain management is used to ensure that the components used in making products actually meet certain standards. They come with material safety data sheets, test results, and other ratings. This whole process is managed to ensure that the final product will work as expected and be safe to use.

But there is no assurance in today’s software supply chain. There are plenty of security features, but that’s not assurance. Assurance evidence comes from activities that tell you if the defenses are any good. Direct evidence is derived from verification or testing of the application itself. Indirect evidence tells you about the people, process, and technology that created the code. Wouldn’t it be nice if it were possible to choose components based on whether the project takes security seriously and can prove it? Today, that’s impossible. There simply is no framework for capturing and communicating assurance.
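To make "direct evidence" concrete, here is a minimal sketch of the kind of reproducible security test a library could ship. It assumes the hypothetical, patched heartbeat handler from the earlier sketch, and that a correct implementation returns NULL when the claimed payload length exceeds what was actually received.

    #include <assert.h>
    #include <stddef.h>

    /* Provided by the (hypothetical) library under test. */
    unsigned char *build_heartbeat_reply(const unsigned char *request,
                                         size_t received, size_t *reply_len);

    int main(void)
    {
        /* The record claims 0x4000 payload bytes but only one byte follows. */
        unsigned char malicious[3] = { 0x40, 0x00, 'A' };
        size_t reply_len = 0;

        unsigned char *reply =
            build_heartbeat_reply(malicious, sizeof(malicious), &reply_len);

        /* A patched implementation must refuse the request outright. */
        assert(reply == NULL);
        return 0;
    }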

Don’t hate the playa – hate the game
It’s tempting to think that Heartbleed is an isolated incident created by a single developer mistake. In fact, Theo de Raadt, the founder of OpenBSD, writes that wrongheaded attempts to improve performance prevented standard security protections from working, and concludes that “OpenSSL is not developed by a responsible team.”

I don’t believe in blaming a team of volunteer developers who build software and give it away for free. Actually, I’d like to take this opportunity to thank the OpenSSL team for its hard work and offer my support. Our challenge is to help all software projects be more like OpenBSD, whose security page provides considerably more evidence than most projects do.

It’s time to admit it – we have a library security problem
Please don’t misunderstand. This isn’t about open or closed source. I am a huge supporter of open-source. I’ve written it, donated my work, and run a large international open-source foundation for years. Open-source has the opportunity for a better assurance case, but it’s just not good enough to say that something is secure solely because the source is available. The fact that a bug like #heartbleed can exist for years without being discovered is all the proof you should need.

There are three serious kinds of problems with libraries that everyone should be concerned about:

  • Known vulnerabilities. These are problems discovered by researchers and disclosed to the public. All you have to do is monitor your libraries and keep up with their latest versions (a minimal version check appears in the sketch after this list).
  • Unknown vulnerabilities. These are the latent problems that have not yet been discovered or disclosed publicly. For these, you should select libraries written by teams with the best assurance case, including evidence about design, implementation, process, tools, people, and testing. Would you trust your business to them?
  • Hazards. These are powerful library features that have a legitimate use, but can expose your enterprise if used incorrectly. For these, developers need guidance on using the library safely; look for projects that provide it.
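For the known-vulnerability case, a crude but useful sketch: check at build time and at run time that the OpenSSL actually in use is at least 1.0.1g, the first Heartbleed-patched release. This assumes an OpenSSL 1.0.x toolchain, where the header version is OPENSSL_VERSION_NUMBER and SSLeay() reports the version of the linked library; the check is deliberately simplified (the older 0.9.8 and 1.0.0 branches did not have the heartbeat bug at all).

    #include <stdio.h>
    #include <openssl/opensslv.h>
    #include <openssl/crypto.h>

    #if OPENSSL_VERSION_NUMBER < 0x1000107fL   /* 1.0.1g */
    #error "OpenSSL headers predate 1.0.1g -- update before building"
    #endif

    int main(void)
    {
        /* The library loaded at run time can lag behind the build-time headers. */
        if (SSLeay() < 0x1000107fL) {
            fprintf(stderr, "Linked OpenSSL predates 1.0.1g: %s\n",
                    SSLeay_version(SSLEAY_VERSION));
            return 1;
        }
        printf("OpenSSL runtime looks current: %s\n",
               SSLeay_version(SSLEAY_VERSION));
        return 0;
    }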

Unfortunately, the information required to address the library security challenge isn’t widely available. That means architects and developers can’t make informed decisions about what components to include in applications. I think we all need to do a better job of asking software projects to provide the assurance evidence we need.

As software continues to eat the world and becomes even more critical in everyone’s lives, we will either figure out a way to generate assurance and communicate it to those who need it, or we’ll keep making bad choices and experiencing increasingly damaging breaches. The FDA "Nutrition Facts" label was initially scoffed at and took decades to become popular. What do you think? Would a “Software Facts” label catch on? 
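Purely as a hypothetical sketch of the idea (the field names below are mine, not from any existing standard, and the example values are invented), a machine-readable label might carry information like this:

    #include <stdio.h>

    /* Hypothetical "Software Facts" fields -- an illustration, not a standard. */
    struct software_facts {
        const char *name;
        const char *version;
        int         open_known_cves;       /* unpatched publicly disclosed flaws */
        int         shipped_test_cases;    /* reproducible tests in the release */
        int         static_analysis_done;  /* nonzero if results are published */
        const char *last_external_review;  /* e.g. "2013-09", or "never" */
    };

    static void print_label(const struct software_facts *f)
    {
        printf("Software Facts: %s %s\n", f->name, f->version);
        printf("  Open known CVEs ........ %d\n", f->open_known_cves);
        printf("  Shipped test cases ..... %d\n", f->shipped_test_cases);
        printf("  Static analysis ........ %s\n",
               f->static_analysis_done ? "results published" : "none");
        printf("  Last external review ... %s\n", f->last_external_review);
    }

    int main(void)
    {
        /* Invented example values for an imaginary library. */
        struct software_facts example = { "examplelib", "2.1.0", 0, 1432, 1, "2013-09" };
        print_label(&example);
        return 0;
    }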

A pioneer in application security, Jeff Williams has more than 20 years of experience in software development and security. Jeff co-founded and is the CTO of Aspect Security, an application security consulting firm that provides verification, programmatic and training ...

Comments
planetlevel
User Rank: Author
4/20/2014 | 12:38:43 PM
Re: But assurance equals accountability equals liability.
I think you're confusing insurance with assurance. If I provide some kind of warranty, guarantee, or underwriting of a piece of software, then sure, I'm on the hook.  But I disagree with your assertion that assurance necessitates this type of liability.

I don't view assurance as a black-and-white situation.  And neither did the authors of the Orange Book.  All the problems you mention could be easily verified by a set of automated tests delivered with the software.  You don't have to get all the way to formal methods and proof-carrying code to gain assurance.

Give me some software with some reproducible test cases and the results of some testing tools, let me know a little bit about who wrote it, what tools they used, and their process, and I'm probably in a lot better situation than if I just download some code off the Internet.
Interesting
User Rank: Apprentice
4/18/2014 | 7:15:30 PM
Re: But assurance equals accountability equals liability.
As an IT Auditor who worked with one of the big five (before they became the big four) global audit firms, I can only suggest that our definitions of assurance are vastly different. Arthur Andersen certainly found it out the hard way. Would you bet all your assets on making an assurance claim? That's assurance in writing, which is what my post refers to. Legal liability for making such an assurance. Any other form of assurance is just hot air if not in writing, IMHO.

As before, testing may give the tester some assurance, but others just have to trust the tester's results. Considering that compilers can optimize security code out of the app, thereby leaving the app open to compromise, reviews of the code are not 100% guaranteed to find bugs (since compiler optimizations may remove that code altogether at compile time, so it never makes it into the binary). Thus assurance relies on library versions, compiler versions, compiler flags (very important), the platform it's built on, how the libraries were built, library compiler flags, ... It's non-trivial to provide assurance that is actually worth anything without providing a signature on a legal document to that effect.

Would you really risk all your personal assets to assure some large Government that your software project has no bugs? Finding a new bug after you've given assurance then makes your assurance worthless. Yet we keep finding new bugs. It is difficult and expensive (if not impossible with a large enough infrastructure) to prove the non-existence of any bugs. Languages like Ada try to help in this regard, but writing something non-trivial in Ada is beyond most coders' tolerance levels.

I feel that being able to sue for providing false or misleading assurance (once a new bug has been found in the assured codebase) will not help open code development in any useful way. It is likely to have the opposite effect. And assurance without any ability to sue is meaningless, as there is zero risk to the assurer in that case. Thus every two-bit developer will provide that sort of assurance. Individuals and companies are welcome to obtain levels of assurance commensurate with their risk appetite and budget. But requiring some other entity to provide assurance is simply buck passing. Due diligence is the responsibility of the user, not the provider. I agree that it is a truly desirable feature, and in the Utopian ideal world we would have assurance for every line of code. We're not in Utopia yet though...far from it.
planetlevel
User Rank: Author
4/18/2014 | 6:19:49 PM
Re: Assurance Evidence
I wrote a nice tool to generate these labels automatically.  If anyone is interested in making an open-source tool out of it, I'd be happy to share.
Marilyn Cohodas
User Rank: Strategist
4/18/2014 | 5:10:06 PM
Re: But assurance equals accountability equals liability.
I really like your analogy comparing software assurance to a supply chain for software. I don't think it's too much to ask that we hold software to the same standard: that the components that go into developing business logic, code, etc. adhere to some industry-agreed-upon baseline.
Marilyn Cohodas
User Rank: Strategist
4/18/2014 | 3:58:16 PM
Re: Assurance Evidence
Love the label!

planetlevel
User Rank: Author
4/18/2014 | 3:36:01 PM
Re: Assurance Evidence
I'm glad you asked.  Here's a presentation I did a few years ago proposing just such a thing.

https://www.owasp.org/images/1/17/2010-11_OWASP_Software_Labels.pptx

What's fascinating to me is that the actual label doesn't seem to matter much.  Even if software *users* don't care about the label... the fact that there is a label means that companies *producing* software will do a better job so that it doesn't look awful on the label.
planetlevel
User Rank: Author
4/18/2014 | 3:33:31 PM
Re: But assurance equals accountability equals liability.
I agree with you that liability is not helpful for application security, although there are many who keep bringing up the idea.   However, I don't think that assurance necessarily implies accountability or liability.

I worked on the OWASP Enterprise Security API project for several years, and we provided a large test suite (thousands of tests), ran static analysis tools on the code, had the code manually reviewed by several large companies, and talked the NSA into doing their own review (they made no code changes).  You can choose to accept or reject any or all of that evidence, but I believe that it adds assurance.  And none of it was particularly difficult to do.  The test cases in particular *saved* us considerable time.

You don't have to jump to formal methods to get assurance.  In most cases, a modicum of evidence that basic security tests were performed would be a radical improvement.

However, I don't agree that assurance work necessarily increases anyone's liability. The license can still be a standard free and open software license. You don't have to warrant that your code is fit for any particular use. The point of the article is just that unless the market starts demanding better assurance, Heartbleed-style vulnerabilities will be with us forever.
Interesting
User Rank: Apprentice
4/17/2014 | 7:12:39 PM
But assurance equals accountability equals liability.
Agree that assurance is a desirable feature. But assurance implies accountability, which implies liability. If the software is developed for free and given away for free, and the license clearly warns you that it is provided "as is," etc., then it is up to you, the user of the software, to determine its suitability for your purpose. That is, if you want assurance on a free product, then you are free to obtain it, using whatever funds you have available to employ appropriate developers and testers. Or do you wish to make the developers liable for damages as a result of your use of their free library (which as above already contains the "Use at own risk, supplied as is, where is, ..." disclaimer to prevent that exact scenario)?

If giving away free software suddenly came with a legal liability (the assurance), then it would almost certainly cease all open source development efforts immediately, since few individuals have the necessary funds to fend off a liability claim of the magnitude of this recent bug. The other ways of gaining assurance involve very expensive code audits and mountains of testing (neither of which actually prove the non-existence of bugs; they only prove that the tests and audits found no bugs. Two very different things.) If you truly want assurance then it can be achieved only via program proving, but going down this path could be frightfully expensive.

No software vendors have ever, in my experience, provided an assurance that the software works correctly in all scenarios. Specifically, they say the exact opposite in almost every software license I've ever read. Most if not all contain the words: "This software is provided as is. Use at own risk. No guarantees on fitness for purpose." This includes Microsoft, IBM, Oracle, Amazon.... If none of the big vendors will/can do it... If they do, then there is always an escape clause that allows them the opportunity to fix it without being liable. So there goes any assurance provided by that agreement, as the only assurance provided is that bugs will be fixed in a timely fashion, not that bugs do not already exist in the code.
Marilyn Cohodas
User Rank: Strategist
4/17/2014 | 9:09:49 AM
Assurance Evidence
Curious, Jeff. What do you think a "Software Facts" label should include?
rjthomas01
User Rank: Apprentice
4/17/2014 | 3:47:42 AM
Good balanced view of software
Nice article; maybe, just maybe, Heartbleed will wake people up to the fact that no form of IT is a magic bullet impervious to faults. We're lucky this time in that we're primarily a Windows shop and have therefore bypassed OpenSSL, but that's really just luck. Next time it could be a fault with SChannel, or something else. The potential for faults is limitless. The keys are to patch (preferably automatically), have multiple security gateways, and try to embed security practices throughout an organization.