As the chief architect and co-founder of an application security company, I find that an important part of the startup journey is reflecting on how both my own practice and the industry at large have evolved. In 2008, when PCI-DSS became an integral business requirement for enterprises in Israel, I took part in many initial certifications and witnessed the struggle companies had with becoming compliant.
My main role involved the parts of PCI that related to applications, including penetration testing. As I sat in the offices of one of Israel's largest healthcare providers one afternoon, I recall reviewing a tedious and seemingly never-ending list of questions, which left our team increasingly puzzled. Rather than gaining insight and information, I left the office with more questions than answers.
One thing became very clear to me from this experience: The criteria that security standards expected businesses to uphold were not feasible. They were not realistic in 2008, and they are not realistic today. The PCI-DSS standard required comprehensive evidence collection and excessive, multidisciplinary assessment of every asset in scope. It took the industry years to adopt the proper strategy for handling PCI-DSS requirements: isolate the PCI environment and reduce it to a minimum.
A Buzzword Is Born
It is no coincidence that around the same time, the secure software development life cycle (SSDLC) became the new industry buzzword. Its premise made sense: attach security to every stage of development, involving it from the get-go to minimize mistakes and secure the process as a whole. SSDLC has become the go-to for many, driven by increased cloud migration and the wide-scale adoption of mobile applications. It even survived DevOps unleashing the power of, well, DevOps and metamorphosing SSDLC into the infinite delivery-and-operation loop we use today.
When privacy initiatives hit full throttle, we all witnessed again how difficult it was (and still is) for companies to master regulations like GDPR. This time, however, minimizing the scope was not an option. Suddenly, many teams experienced firsthand the impact of poor design and security debt as they struggled to meet the new regulatory requirements.
For those of us who have been tickling and poking applications over the last two decades, this is hardly a surprise. Despite the best intentions, it is impossible to instrument security in all stages of all projects, and it is increasingly difficult to create applications that are resilient to advanced hackers (and penetration testers). Honestly, it is challenging enough to build applications without bugs or downtime, so expectations of regulators and the industry at large must be adjusted.
Simply put, there are not enough human resources to cope, and developers are too busy to be bothered. Surprising or not, this gap embodies the dysfunctional relationship between security and the developer.
Getting Inside Your Head
Elite hackers, such as those behind the SolarWinds attacks, get into the developer's mind and craft a sophisticated play that exploits weak spots in software manufacturing. Hacking applications is the hunt for mistakes that developers are bound to make.
Ideally, with proper training, effective internal communication, security-minded design, and rigorous testing processes, these mistakes can be limited and their impact contained. But, as we all know, real life doesn't always align with even the most stringent defenses.
Great developers don't think about security. Developers think in features, deadlines, scaling, and velocity. Developers think in terms of production incidents and downtime. But above all, developers are makers, and it takes experience, real intention, and active decision-making to allow security to enter your creative zone.
Security professionals often underestimate the amount of effort developers invest in protecting their applications. Developers constantly implement different layers of resilience to error and failure and are constantly barraged by countless requirements from product, customer success, marketing, and all other stakeholders in the organization.
One of my projects entailed building a scalable program for a company with over a thousand developers. After a taxing process and hours of discussions, we finally agreed on a security service-level agreement. A head engineer smiled and said in my direction, "You see, security likes to have center stage!" I quickly understood that security will always be cast as the boy who cried wolf, whether intended or not.
On the other hand, when you show developers a zero-day vulnerability, everyone will cry "Fire!" and make sure to put it out. Despite what developers may believe, some vulnerabilities do present a real and potentially debilitating threat to business continuity. That's why they need security, whether they like to admit it or not.
So here's the bottom line: The security life cycle isn't a developer's life cycle. Activities such as threat modeling and penetration testing are crucial to the level of security but demand resources that are hard to scale, and when you operate on an SSDLC model, the management overhead slows you down even more. On the other hand, while simple automation with application security testing is an effective strategy for collecting quantitative feedback about your software, it is unlikely to be adopted by the organization without strict governance or a comprehensive security culture.
Security operates in alignment with the software development life cycle but also stretches beyond it. This remains true as the AppSec team keeps challenging all applications, including legacy apps. AppSec goes beyond the code itself to assess software composition, CI pipelines, and runtime. It assesses the changes developers make, but it also continuously tracks new attacks and vulnerabilities, regardless of whether development happens in-house.
AppSec has its own life cycle. And it is different from a developer's.