Operations
7/2/2014
Jeff Williams
Commentary

Why Your Application Security Program May Backfire

You have to consider the human factor when you're designing security interventions, because the best intentions can have completely opposite consequences.

In security we have a saying: “Why do cars have brakes? So they can stop? No, so they can go fast!” Practiced badly, security can bring successful software projects to a screeching halt. Security gates, compliance reviews, and reports full of phantom “false alarm” risks can kill a healthy relationship between security and development teams. But security doesn’t have to be about hindering business. Done right, application security programs are designed to get people working together in a way that is compatible with software development. The goal is to find solutions that allow the business to go fast and be secure.

You know how a squirrel tries to get out of the way of a car? Jump left. Zip right. Run directly under the tire. That’s how squirrels attempt to deal with a risk introduced by new technology (the car) using the predator-avoidance mechanism designed by nature. It doesn’t work well, because squirrels are only marginally worse than humans at judging technology risk. Noted security expert Bruce Schneier argues that humans are fundamentally “bad at accurately assessing modern risk. We’re designed to exaggerate spectacular, strange, and rare events, and downplay ordinary, familiar, and common ones.”

But, if you really think about it, Mother Nature is just slow. We’re bad at IT risk because our defenses mostly evolved 100,000 years ago in small family groups in Africa. Given enough time, evolution adapts to new risks just fine. Species create remarkable defenses that are compatible with each organism. The question is: Can we speed up this process so that we can adapt more quickly to a rapidly changing technology risk environment?

Security and business evolve together
Usually, the software world just adopts new technology without thinking much about security. I’ve done hundreds of security reviews of products that were purchased without any security analysis -- and it’s usually not pretty. It’s not just products, either. Organizations adopt new application frameworks, libraries, languages, and more without doing any security analysis. Even huge new features, like HTML5, show up in our browsers before any serious security work is done. This is a painful path forward, but eventually we hack and patch our way to a “just barely good enough” level of security.

In my youth, I thought that enough design, architecture, and formal modeling could secure anything. But I’ve come to understand that the only way to security is through an evolutionary process with “builders” and “breakers” constantly challenging the status quo. Compliance efforts don’t engender this evolution, which is one reason they are often reviled. What’s not yet clear is exactly what does lead one organization to a great security culture, while another with the same practices struggles and makes little headway.

Safety and security make us take more risks
We put a lot of work into safety and security. But how do we know any “best practices” are actually making us more secure? Before you answer, consider that a number of studies have shown that drivers of vehicles with ABS tend to drive faster, follow more closely, and brake later, which accounts for the failure of ABS to produce any measurable improvement in road safety. We get places faster without getting any safer.

This counter-intuitive outcome is called “risk compensation” or “risk homeostasis.” It turns out that people seem to be inherently wired with a certain level of risk tolerance. Study after study supports this idea. Bike helmets have been shown to make cyclists ride faster, even increasing the fatality rate in some studies. Seat belts make you drive less carefully. Ski helmets result in more aggressive skiing. Safer skydiving equipment, children’s toys, even condoms -- all show an almost intentional drive to bring accident and fatality rates back to their original levels.

People’s perception is obviously critically important to changing behavior. What do you notice about the pedestrians in the picture below?

Did you notice that those people are walking around without any traffic signs, warnings, road markings, curbs, or crosswalks? Security madness! If this were an audit, their area would be shut down and sent for remediation. However, this “shared space” was consciously designed to increase the level of uncertainty for drivers and other road users. Amazingly, this approach has been found to result in lower vehicle speeds and fewer road casualties.

The perception of protection
Wait… What!? Protection technology makes people take more risks? Removing safety markings makes roads safer? Yep. You have to consider the human factor when you’re designing security interventions. Your best intentions could have completely opposite and unintended consequences. For example, maybe your new web application firewall gives developers a false sense of security and they stop doing input validation. Or maybe you rely on automated security testing that has so many false alarms that people start to ignore the results.
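To make the first example concrete, here is a minimal sketch (in Python, using a hypothetical user-registration form) of the kind of positive, allow-list input validation that should stay in the application itself whether or not a WAF sits in front of it. The field names and patterns are illustrative assumptions, not a prescribed rule set.

    import re

    # Illustrative allow-list patterns -- tighten or adjust to your own rules.
    USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")
    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$")

    def validate_new_user(form: dict) -> dict:
        """Allow-list validation done in the application.

        A WAF in front of the app may block some attacks, but it does not
        make this check unnecessary; dropping it because "the WAF will
        catch it" is exactly the risk-compensation trap described above.
        """
        errors = {}
        if not USERNAME_RE.fullmatch(form.get("username", "")):
            errors["username"] = "3-32 characters: letters, digits, _ . - only"
        if not EMAIL_RE.fullmatch(form.get("email", "")):
            errors["email"] = "must look like an email address"
        return errors

The allow-list shape is the point: the application states what good input looks like instead of trusting an upstream box to enumerate all the bad input.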

These problems start when the perception of security isn’t in balance with the reality of security protections. When perception is greater than the protection, people have a false sense of security and take unnecessary risks. On the other hand, when the protection exceeds the perception, the business will shy away from profitable activities.

Creating a culture of accelerated security evolution with transparency
Ultimately, we all want to achieve that elusive culture that makes security a part of information technology without everybody ending up mad. I think we can all agree that building stuff and trying to make it secure later isn’t the right approach. But neither is blindly following a process model that just aggregates a bunch of guesses about what might work.

The path to security that works is rapid evolution – the “builders” and “breakers” working to push security forward. The key to speeding up this process is to make security transparent, so that the perception of security matches up with the reality. The keys to transparency are:

  • Starting with a model. You have to start with an “expected model” of what you think your defenses should be. It doesn’t have to be perfect, as your model will evolve over time.
  • Getting coverage. Verify your expected model across all your applications to an appropriate level of rigor. Establish a security sensor network that can monitor your entire portfolio (a minimal sketch of one such sensor follows this list).
  • Getting continuous. Security visibility has a terrifically short half-life. Today’s highly accelerated software processes demand real-time feedback to developers and other stakeholders.
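To make the “sensor” idea concrete, here is a minimal sketch (assuming Python and a hypothetical portfolio of application URLs) of a check that compares a tiny “expected model” -- a handful of required security response headers -- against every application, and that can be run from a scheduler or CI job to keep the feedback continuous. The URLs, header list, and function names are illustrative assumptions, not a prescribed implementation.

    import urllib.request

    # A deliberately small "expected model": headers we expect every app in
    # the portfolio to send. Both the portfolio and the required headers
    # below are hypothetical placeholders -- substitute your own model.
    EXPECTED_HEADERS = {
        "Content-Security-Policy",
        "X-Content-Type-Options",
        "Strict-Transport-Security",
    }

    PORTFOLIO = [
        "https://app1.example.com",
        "https://app2.example.com",
    ]

    def check_app(url: str) -> set:
        """Return the expected headers this application is missing."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            # Header lookups on the response are case-insensitive.
            return {h for h in EXPECTED_HEADERS if h not in resp.headers}

    if __name__ == "__main__":
        # Run from cron or a CI pipeline to make the check continuous.
        for app in PORTFOLIO:
            missing = check_app(app)
            status = "OK" if not missing else "MISSING: " + ", ".join(sorted(missing))
            print(f"{app}: {status}")

A check this small obviously isn’t a full verification program; the point is the shape: an explicit expected model, applied across the whole portfolio, on a schedule rather than once a year.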

If you have processes that aren’t demonstrably effective within this framework, cut them. You’ll get a more effective security program and probably save a lot of time and money. What are your techniques for creating a strong security culture? Let me know how you know it works in the comments! Good luck.

A pioneer in application security, Jeff Williams has more than 20 years of experience in software development and security. Jeff co-founded and is the CTO of Aspect Security, an application security consulting firm that provides verification, programmatic and training ...
Comments
planetlevel (Author) | 7/16/2014 6:08 PM
Re: "shared space" philosophy versus reality
True.  But the point is that even though we aren't sure they help, we do all these security activities in the name of secure code.  I'm suggesting that we need to study the effectiveness of these activities because they *could* easily be undermining overall security.  It's not science to just "do what everyone else is doing" -- that ends up with expensive programs that everyone hates (sound familiar?)
planetlevel (Author) | 7/16/2014 6:05 PM
Re: Why Your Application Security Program May Backfire
Absolutely agree that taking a positive approach to security is easier, faster, and more accurate.  It's the key to getting continuous.
JasonSachowski (Author) | 7/11/2014 9:51 AM
Re: Why Your Application Security Program May Backfire
Nice read @JeffWilliams.  I tend to agree that the "know-do" gap between what users know and what they do largely contributes to what you're discussing.  This raises the question: do we need to improve our security awareness programs in an effort to reduce this gap?

Well if you look at bike helmets or vehicle ABS in the context of managing security risk, perhaps this stems from how we continue to use traditional security approaches in a very different threat landscape.

Going to your comment about "Removing safety markings makes roads safer" brings me back to the idea that the use of positive security is much more effective in reducing risks in today's technology world.
SSCHWELGIEN441 (Apprentice) | 7/6/2014 12:07 PM
"shared space" philosophy versus reality
There is growing evidence that initial claims of reduced traffic speeds and greater pedestrian safety in "shared spaces" are overstated.

See: Shared space - research, policy and problems - http://eprints.uwe.ac.uk/17937/8/tran1200047h.pdf

There is evidence that pedestrians and bicyclists "feel" less safe and adopt avoidance behaviors, sometimes avoiding "shared spaces" altogether.  Many advocates for the blind and physically impaired raise strong voices against "shared spaces" as extremely dangerous places to navigate, again urging those on behalf of whom they advocate to avoid such spaces. 

Seems that "sharing" favors drivers more than pedestrians and other users.