Dark Reading is part of the Informa Tech Division of Informa PLC


Application Security

Caleb Sima

Taming the Chaos of Application Security: 'We Built an App for That'

Want to improve the state of secure software coding? Hide the complexity from developers.

We have decades of secure code development training behind us, the refinement of secure coding practices, and new application security testing and development tools coming to market annually. Yet one of the biggest and oldest barriers to developing secure software remains. What is it?

It's the complexity of these tools, how they are managed, and how they integrate that creates unnecessary drudgery for development teams. There is so much that we force developers to slog through that it slows them down from being able to do their job, which is to develop great software that people want to use.

This is what usually happens: As developers busily do their thing and build new applications and features, someone from security comes in the room and explains that a software assessment tool will be inserted in their pipeline. The tool throws off all types of false positives and irrelevant results, which, from the perspective of developers, only get in the way and slow them down. If this process were ever scalable, it's certainly not scalable in modern continuous delivery environments.

To succeed, security teams must take radically different — and more effective — approaches to help developers build more secure applications. This was what we did at Capital One, with considerable success.

Streamlining Application Security Processes
We had more than 3,500 different application teams, within seven separate lines of business, developing roughly 3,500 applications within their own continuous delivery pipelines. Each of these teams had significant flexibility regarding the tools that they chose to use and what languages they used for development. While productive, it was a form of managed chaos, with seven fully independent lines of business each essentially doing their own thing.

Our application security team, on the other hand, essentially consisted of a small pool of consultants. We were spending an inordinate amount of time just trying to get the Web application security tools up and running with each of the 3,500 teams. In addition to getting the software security assessment tools in place, the consultants would reach out to each application team to provide consulting and training. With so many development teams and different tools and languages in place, it just wasn't scalable.

Another significant challenge for the application security team was the lack of a stick they could use to enforce good security development hygiene. Our security consultants would reach out to each development team and attempt to engage them for training, consult on effective development security practices, and try to convince them about the need to change practices. There was no way to force the development teams to actually engage and make these efforts work.

Fortunately, every developer and team at Capital One truly wanted to develop secure code. Unfortunately, the processes we had in place were too slow to be reasonably effective and timely. It would take three months from the initial contact with a development team to actually install and train the team on how to use the security assessment software. Not good enough.

The Solution? We Had to Transform the System
So, we did what developers do: we built an app for that. Then we provided teams an option that removed the burden of having to run their own application assessment tools. It was software security assessment-as-a-service for these internal teams, and all developers needed to do was sign up.

To secure their code, development teams would log into the system, send a single command through the API, and have themselves and their app registered for assessment. The system would then automatically pull the compiled code artifacts needed for the assessment, identify requirements and policy, and then orchestrate and manage the third-party scanning tools on the back end. The results from the assessment were then fed to the developer's dashboard or pushed to or pulled from an API. The good news for developers was that, aside from registering the app, they didn't have to do anything to get high-quality assessment results.
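The register-then-orchestrate flow described above can be sketched roughly as follows. This is a minimal illustration, not Capital One's actual system: the function names, scanner names, and artifact scheme are all hypothetical, since the article does not disclose the real tools or API.

```python
# Hypothetical sketch of the self-service assessment flow: a developer
# registers an app, and the service pulls artifacts, resolves policy,
# and runs third-party scanners on the back end.

def pull_artifacts(app: str) -> list[str]:
    # In a real system this would fetch compiled artifacts from the build pipeline.
    return [f"{app}.jar"]

def select_tools(app: str) -> list[str]:
    # Policy lookup: which third-party scanning tools apply to this app.
    return ["static-analyzer", "dependency-checker"]

def run_scanner(tool: str, artifact: str) -> list[dict]:
    # Placeholder for invoking one third-party scanning tool.
    return [{"tool": tool, "artifact": artifact, "finding": "example"}]

def assess(app: str) -> list[dict]:
    """Orchestrate an assessment end to end: pull artifacts, pick tools, run them."""
    results = []
    for artifact in pull_artifacts(app):
        for tool in select_tools(app):
            results.extend(run_scanner(tool, artifact))
    return results

findings = assess("checkout-api")
print(len(findings))  # 2: one result per (artifact, tool) pair
```

The point of the sketch is the division of labor: the developer supplies only the app name at registration, and everything after that is the service's responsibility.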

The application security team also used their expertise to filter assessment results so developers weren't burdened with irrelevant outputs and false positives. Developers knew that any result they received was a legitimate security concern that needed to be resolved.
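That triage step can be expressed as a suppression filter applied before findings reach a developer's dashboard. The rule format below is invented for illustration; real tools use their own suppression mechanisms.

```python
# Hedged sketch: drop findings matching known false-positive patterns,
# so developers only see results the security team considers actionable.
FALSE_POSITIVE_RULES = [
    {"rule_id": "XSS-001", "path_prefix": "test/"},  # this rule fires on test code
    {"rule_id": "CRYPTO-004", "path_prefix": ""},    # rule suppressed everywhere
]

def is_false_positive(finding: dict) -> bool:
    return any(
        finding["rule_id"] == rule["rule_id"]
        and finding["path"].startswith(rule["path_prefix"])
        for rule in FALSE_POSITIVE_RULES
    )

def triage(findings: list[dict]) -> list[dict]:
    """Return only the findings worth a developer's attention."""
    return [f for f in findings if not is_false_positive(f)]

raw = [
    {"rule_id": "XSS-001", "path": "test/fixtures.py"},   # suppressed: test code
    {"rule_id": "XSS-001", "path": "src/views.py"},       # kept: real finding
    {"rule_id": "CRYPTO-004", "path": "src/auth.py"},     # suppressed: global rule
]
print(len(triage(raw)))  # 1: only the finding in src/views.py survives
```

Centralizing rules like these in the service, rather than in each team's pipeline, is what let one security team's tuning benefit every development team at once.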

Of course, we didn't roll this out all at once. Initially, we worked with one of the two largest teams at Capital One. Our goal in that initial pilot was to learn how to make a service that developers would really want to use. We took a novel approach and actually asked the development teams what they would want — including everything from the user interface to presentation of the results to the level of automation. Not so surprisingly, developers appreciated the fact that the system could be fully automated.

What was amazing was that developers loved the system. In fact, they loved it so much that they started telling other development teams.

We began the pilot at the beginning of 2017, and the initial two teams that used the system began telling other teams. As a result, we witnessed registrations accelerate through word of mouth. By the middle of 2017, the system went from two applications registered to 780 applications registered. All the while, we kept improving the service, and developers continued to self-register.

We added additional software security assessment tools. Each new tool excelled at a different type of assessment, so they all complemented each other. What was most exciting was that we could provide enhanced application security assessments and code review without developers having to change their day-to-day routines, even as we added more assessment and code composition tools.

Results: A Single Pipeline for Code Analysis Tools
The results speak for themselves: the application teams became our customers. And when all was said and done, by the end of 2017, we had 2,600 application development teams enrolled in the system. In contrast, in the year before the system was implemented, the company processed about 12,000 software security assessments. In the year we introduced this system, which we named "security code orchestration," the company ramped up to run that same number of assessments per day, totaling nearly 400,000 software security assessments for the year.

For application security teams, this is a clear win. First, because all the software security assessment tool sets were abstracted away from the engineering teams, we could add or replace software security tools on the back end of the system with ease. This meant we could instantly improve the quality of assessments by improving the quality of our tools across the entire enterprise. The development teams wouldn't even know a change was made. As we evolved the system, it became the single pipeline for code analysis tools, including non-security-related tasks such as code quality and license compliance. As such, we ended up going from secure code orchestration to code quality orchestration.
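The swap-tools-without-anyone-noticing property comes from hiding every scanner behind one stable interface. A minimal sketch of that abstraction, with entirely hypothetical tool names:

```python
# Sketch: a registry maps each logical capability (SAST, license checking, ...)
# to whichever concrete tool currently backs it. Replacing an entry upgrades
# every enrolled team at once; callers of scan() never change.
from typing import Callable

SCANNERS: dict[str, Callable[[str], list[str]]] = {}

def register_scanner(capability: str, runner: Callable[[str], list[str]]) -> None:
    SCANNERS[capability] = runner

def scan(artifact: str) -> list[str]:
    """Run every registered capability against one artifact."""
    findings: list[str] = []
    for runner in SCANNERS.values():
        findings.extend(runner(artifact))
    return findings

# Day 1: one tool per capability.
register_scanner("sast", lambda a: [f"sast-finding in {a}"])
register_scanner("license", lambda a: [f"license issue in {a}"])

# Later: swap the SAST vendor behind the same capability key.
register_scanner("sast", lambda a: [f"better-sast finding in {a}"])

print(scan("app.jar")[0])  # better-sast finding in app.jar
```

This is also why extending the pipeline to non-security tasks like license compliance was cheap: a new capability is just one more registry entry.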

Most importantly, the entire organization was able to move from an ad hoc and low level of software assessment coverage to more than 80% coverage. And, as it turned out, the software security teams didn't need any kind of stick — developers wanted a secure code pipeline because it was easy to use and provided them value.

Caleb Sima has been engaged in the Internet security arena since 1994 and has become widely recognized as a leading expert in web security, penetration testing, and the identification of emerging security threats. His pioneering efforts and expertise have helped define the web ...
