Application security should be guided by its responsibility to maintain the confidentiality, integrity, and availability of systems and data. But often, compliance clouds the picture.

Joe Ward, Senior Security Analyst, Bishop Fox

December 6, 2018

5 Min Read

Developers and those responsible for information security inside the enterprise have different goals and motivations, and inevitably those contrasting priorities clash. Developers bear the pressure of meeting deadlines for new applications and feature sets. Security's intervention in this process includes the introduction of code-scanning tools to ferret out vulnerabilities and bugs before new apps or updates hit production. Security should be guided by its responsibility to maintain the confidentiality, integrity, and availability of systems and data. But often, compliance clouds the picture, and earning a Payment Card Industry Data Security Standard (PCI-DSS) checkmark from a qualified security assessor (QSA) trumps all.

There needs to be a middle ground where a well-structured and mature application security program has at its foundation developers who design and build secure systems that meet the organization's needs. By design, application developers should build and program in such a manner that all the artifacts required by a QSA are front and center as minimum standards, part and parcel of every new build or update.

One need look no further than the Heartland Payment Systems breach of 2008. The details are gory, and at the time it was the largest breach ever. The personal and payment card data of more than 100 million people were stolen, more than 600 companies were affected, and losses totaled hundreds of millions of dollars. And Heartland was compliant with PCI-DSS, having passed its audit two weeks prior to the discovery of the breach.

Attackers did not penetrate Heartland with a zero-day exploit or sophisticated hack. The breach began with a SQL injection attack against the corporate website, a Heartland web property that did not handle payment card data and was not in scope for the PCI QSA's audit. Yet that site was used as a pivot point to attack the payment processing systems.

Let that sink in: An application security vulnerability in a non-PCI controlled system was the launching pad for the largest breach at that time. Heartland tried to stand on the merit that it was PCI compliant, but this remains a case study for why PCI-compliant systems and organizations cannot claim that they have sufficient security controls in place.

Compliance programs are not the same as security programs and should not govern the operation of a security program. The requirements of PCI, HIPAA, or any other industry standard should serve as a minimum set of security guidelines for an organization, never the high end. Compliance teams meet that minimum set of standards, defined by a third party with no context for an organization's unique needs, in order to continue a certain business function, not necessarily to keep the organization secure.

So, how do application security and secure development fit in? Each development life cycle must be tailored to business needs, development cadence, and the technology stacks in use. These five threads can help any organization establish a strong application security program.

Thread 1: Scanning is not enough; you have to track and treat.
Success is defined by how much the security of application development improves. For starters, scanning for and reporting on vulnerabilities is not enough; vulnerabilities must be managed. The tools developers use for this purpose must manage the entire vulnerability life cycle and report on:

  • Current statistics about open vulnerabilities

  • Aging or time to remediate statistics

  • Historical trending of vulnerabilities

  • Security defect density statistics and trends

This tool (which can be any tool that meets the needs of developers) can ideally also integrate your infrastructure vulnerability management reports. This gives upper management a clear understanding of overall risk and shows which areas to target.
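The metrics above can be computed from any scanner's findings export. Here is a minimal sketch in Python; the record fields (`id`, `severity`, `opened`, `closed`) are illustrative assumptions, not any specific tool's schema.

```python
from datetime import date

# Hypothetical findings export; field names are assumptions for illustration,
# not a real scanner's output format.
findings = [
    {"id": "V-101", "severity": "high", "opened": date(2018, 9, 3),   "closed": date(2018, 10, 1)},
    {"id": "V-102", "severity": "high", "opened": date(2018, 10, 15), "closed": None},
    {"id": "V-103", "severity": "low",  "opened": date(2018, 8, 20),  "closed": date(2018, 9, 5)},
]

def open_counts(findings):
    """Current statistics about open vulnerabilities, grouped by severity."""
    counts = {}
    for f in findings:
        if f["closed"] is None:
            counts[f["severity"]] = counts.get(f["severity"], 0) + 1
    return counts

def mean_days_to_remediate(findings):
    """Aging metric: average days from open to close for resolved findings."""
    ages = [(f["closed"] - f["opened"]).days for f in findings if f["closed"]]
    return sum(ages) / len(ages) if ages else None

print(open_counts(findings))             # {'high': 1}
print(mean_days_to_remediate(findings))  # 22.0
```

Running the same calculations over weekly snapshots yields the historical trending and defect-density numbers management needs.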

Thread 2: Standardize your SDLC.
I'm not just talking about agile vs. waterfall vs. DevOps here. More important to application security is to standardize the security processes that are applied at each phase of development.

When you create a functional software development life cycle (SDLC), it doesn't matter if a team follows a waterfall process with one big-bang deployment per year, or if they run a true continuous integration/continuous deployment pipeline with multiple production pushes per day. Everyone knows that static analysis runs at build time and that identified vulnerabilities must be remediated as a security gate before unit testing.
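A build-time security gate can be as simple as failing the pipeline when the static analyzer reports unremediated findings above a severity threshold. This is a sketch under assumptions: the finding fields and severity levels are hypothetical, and you would feed in your own tool's output.

```python
def gate(findings, fail_on=("critical", "high")):
    """Security gate: block promotion to unit testing while any finding at or
    above the configured severity remains unremediated."""
    blocking = [f for f in findings if f.get("severity") in fail_on]
    for f in blocking:
        print(f"BLOCKED: {f['severity']} issue in {f.get('file', '?')}")
    return len(blocking) == 0

# Illustrative findings, as a static analyzer might report them (field names
# are assumptions, not any specific tool's schema).
findings = [
    {"severity": "high",   "file": "auth/login.py"},
    {"severity": "medium", "file": "util/format.py"},
]

# In CI, a nonzero exit code at this point stops the pipeline before unit tests.
if not gate(findings):
    print("build halted pending remediation")
```

Because the gate is the same function regardless of cadence, it works identically for a yearly big-bang release and a many-times-a-day CI/CD pipeline.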

Your SDLC must also include an operational security maintenance process. Too often, project development is abandoned once pushed to production. The developers are off working on the next release and the production code is frozen. What happens when a new vulnerability is identified in production source code? There must be a process to remediate security issues through security maintenance releases.

Thread 3: Use a few approved technology stacks.
Standardizing on a few well-known and mature stacks allows your team to focus and specialize. Analysts become familiar with language syntax and can assist developers. Automation tools can be tailored to the specific languages and versions in use. Managing and cataloging third-party libraries and frameworks enables quicker remediation and eliminates the use of rogue libraries that could be exploited when a major vulnerability is disclosed.
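Cataloging pays off because an audit against the approved list becomes mechanical. A minimal sketch, with an entirely hypothetical catalog and manifest (the library names and versions are made up for illustration):

```python
# Illustrative approved-library catalog; entries map a library to the set of
# versions the security team has vetted. All names/versions are hypothetical.
APPROVED = {
    "requests": {"2.20.1"},
    "flask": {"1.0.2"},
}

def audit_dependencies(manifest):
    """Flag third-party libraries that are absent from the approved catalog
    or pinned to an unapproved version."""
    rogue = []
    for name, version in manifest.items():
        if name not in APPROVED:
            rogue.append((name, version, "not in catalog"))
        elif version not in APPROVED[name]:
            rogue.append((name, version, "unapproved version"))
    return rogue

project = {"requests": "2.20.1", "leftpad": "0.1.0", "flask": "0.12.0"}
for name, version, reason in audit_dependencies(project):
    print(f"{name}=={version}: {reason}")
```

When a major vulnerability is disclosed in a library, the same catalog tells you instantly which projects are exposed.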

Thread 4: Get top-down support.
The best application security program in the world will go nowhere if it isn't supported by upper management. Such endorsement signals that the program is relevant and important. Security must clearly communicate the benefits of secure development to executives and stakeholders, and demonstrate value early with metrics that show improvement. When this happens, compliance with industry standards becomes a natural outgrowth of the program.

Thread 5: Educate your developers.
The best way to mitigate vulnerabilities in source code is to reduce the number introduced in the first place. Unfortunately, most computer science majors are never exposed to even the most basic security practices or principles. It falls on secure development programs to institute continuing education for development communities. This training should be foundational to security, not the click-through training familiar from compliance programs. Educate your developers on how to break their applications by holding lunch-and-learns or hosting a capture-the-flag event. You would be surprised how many developers' jaws drop when they see what can be done by decoding or injecting values into cookies.
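The cookie demonstration is easy to stage. The sketch below shows the classic case: a session value that is merely base64-encoded, so anyone can decode it, tamper with it, and re-encode it. The cookie contents are hypothetical.

```python
import base64

# Hypothetical session cookie: state stored as a base64-encoded value,
# neither encrypted nor signed.
cookie = base64.b64encode(b"user=jdoe;role=user").decode()

# Anyone can decode it...
decoded = base64.b64decode(cookie).decode()
print(decoded)  # user=jdoe;role=user

# ...modify it, and re-encode it. Without a server-side integrity check
# (e.g. an HMAC over the value), the application trusts the forged cookie.
forged = base64.b64encode(decoded.replace("role=user", "role=admin").encode()).decode()
print(base64.b64decode(forged).decode())  # user=jdoe;role=admin
```

Walking through this live, then showing the HMAC-signed fix, turns an abstract rule ("never trust client input") into something developers remember.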


About the Author(s)

Joe Ward

Senior Security Analyst, Bishop Fox

Joe Ward (OSCP) is a Senior Security Analyst at Bishop Fox, where he focuses on red teaming and network penetration testing, as well as security architecture and vulnerability management. He is an active member of Arizona InfraGard, a government alliance that is dedicated to infrastructure protection and mitigating physical and cyber threats. He has given lectures on red teaming and blue teaming for military and defense personnel. Joe's 20 years of IT experience as a network and systems engineer includes extensive knowledge of a range of systems.
