Increasingly, front-end code relies on third-party code components provided by vendors and open source projects. This code does not sit in the application. Rather, it is fetched and loaded dynamically by the browser. Using third-party code makes sense in a lot of ways. It allows developers to leverage existing code that is useful, saving them development time.
Unfortunately, some of the most serious cybersecurity attacks target front-end code. Magecart attacks, for example, are executed when malicious hackers inject unauthorized front-end code into an application or modify the application's existing front-end code. The injected or modified code, called a skimmer, collects users' financial information when they try to make a purchase. Magecart attackers exploit weaknesses, in either the third-party code an application uses or the first-party code written by its own developers, that allow code modifications and the insertion of skimmers. These attacks often involve only minor code changes, which makes them hard to spot with basic code reviews.
How DevSecOps Can Protect Front-end Code
Understanding the intent of code you do not control and cannot see is challenging. This is the primary problem of policing third-party libraries for security risks. To tackle it, you need to adopt a two-pronged approach: enable early detection of risks and vulnerabilities at integration time, and maintain the safety net of a runtime detection engine that monitors live scripts as they run in actual users' browsers and detects unauthorized changes.
To allow developers to test these libraries far earlier, we need to add tools to CI/CD pipelines that can peer inside the library code and validate the code has not been altered to carry malicious payloads like Magecart. At the same time, we need to run user-testing and client-testing of applications as they are being built. More granular testing of how applications should behave in the browser earlier in the process will allow developers to establish a baseline of "known and acceptable app behaviors."
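One way to validate in the pipeline that a third-party script has not been altered is to pin an integrity hash when the dependency is first vetted and recompute it on every build, in the spirit of Subresource Integrity. The sketch below is illustrative only; the `PINNED` table and file names are hypothetical, and a real pipeline would read the pinned digests from a lockfile or manifest.

```python
import base64
import hashlib

# Hypothetical pinned integrity values, recorded when each third-party
# script was first reviewed (Subresource Integrity style: sha384, base64).
PINNED = {
    "vendor-widget.js": "sha384-" + base64.b64encode(
        hashlib.sha384(b"console.log('hello');\n").digest()
    ).decode(),
}

def integrity_of(content: bytes) -> str:
    """Compute an SRI-style sha384 digest for a script's bytes."""
    return "sha384-" + base64.b64encode(hashlib.sha384(content).digest()).decode()

def check(name: str, content: bytes) -> bool:
    """Return True only if the fetched script matches its pinned digest."""
    return PINNED.get(name) == integrity_of(content)
```

A CI job would fail the build whenever `check` returns False for any dependency, forcing a human review before an altered library can reach production.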
This baseline can then be compared against how code and applications behave in production. Security teams can run automated systems that perform deep analysis of thousands of factors to identify deviations from the baseline. Deviations might be caused by new libraries or code that were not added through the standard CI/CD and automated code review process, or by behavior changes in third-party vendors and libraries. Some changes in third-party code are planned by the vendor as updates and enhancements; others, however, may be the work of an attacker who has managed to inject malicious code.
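At its core, this kind of deviation check reduces to a set comparison: record which network destinations scripts contacted during pre-release testing, then flag anything new seen in production. The sketch below is a minimal illustration under that assumption; the domain names are invented, and real systems compare many more behavioral factors than destinations alone.

```python
# Baseline recorded during staging runs: the destinations that scripts
# legitimately contact. These domains are illustrative, not real.
BASELINE = {"cdn.example.com", "api.example.com", "analytics.example.com"}

def deviations(observed: set[str]) -> set[str]:
    """Return destinations seen in production but absent from the baseline."""
    return observed - BASELINE

# Simulated production observation, including a skimmer exfiltration host.
live = {"cdn.example.com", "api.example.com", "skimmer-drop.example.net"}
suspicious = deviations(live)  # {'skimmer-drop.example.net'}
```

In practice the deviation set would feed an alerting pipeline so that a vendor's planned change can be approved while an unexplained destination triggers an investigation.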
Lastly, deviations from the baseline might result from your own code being modified without authorization, whether by hackers, disgruntled employees, or well-meaning employees who bypass the code review and security processes to speed up their work.
This type of baseline testing would identify Magecart attacks quickly, before they do significant damage. The testing will only work well, however, if developers make recording application behavior prior to deployment a regular part of their work practice. That's really the key: making sure that, for developers, front-end security is not tedious and adds minimal work.
How This Might Work in Practice
The good news? Adding security checks and validations to CI/CD pipeline tools and platforms is straightforward, more like adding a new type of quality assurance to the existing suite of tests. Doing this does not take the developers out of their existing workflows, so it has little impact on their productivity. It also does not require additional hours or workers on the information security team.
Here's how the process might flow.
- A developer checks their latest code into their version control system (like GitHub). Before the new code is integrated into the main branch of the application code, the new code will be automatically built and tested with a set of security checks.
- If a security flaw is detected in the code, or in any of the libraries or third-party modules the code uses, the version control system automatically flags the specific lines of code in question and generates a ticket for the developer, calling for additional quality assurance along with suggested remediation steps.
- The developer sees the ticket in their normal ticket queue and makes the fixes.
- The code is then run through the test suites again. If all tests pass, it is merged into the main branch and then deployed into production.
- The updated application runs in a staging environment for analysis to create a baseline of accepted behaviors that can be checked in an automated fashion against actual live behaviors.
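The gating step in the flow above can be sketched as a small decision function. This is a stub for demonstration only: `audit_dependencies` stands in for a real scanner (a dependency audit, an integrity check, or both), and the advisory entry is a hypothetical vulnerable pin, not a real advisory.

```python
# Illustrative CI gate wiring the steps above together. The advisory
# list is a stand-in for a real vulnerability database feed.

def audit_dependencies(manifest: dict[str, str]) -> list[str]:
    """Flag dependencies pinned to known-bad versions (stubbed advisories)."""
    advisories = {("left-pad", "0.0.9")}  # hypothetical vulnerable pin
    return [name for name, ver in manifest.items() if (name, ver) in advisories]

def gate(manifest: dict[str, str]) -> str:
    """Return 'merge' when all checks pass, else 'ticket' to the developer."""
    findings = audit_dependencies(manifest)
    return "ticket" if findings else "merge"

print(gate({"left-pad": "0.0.9"}))  # ticket
print(gate({"left-pad": "1.3.0"}))  # merge
```

The point of the stub is the shape of the workflow: a failing check never blocks silently; it produces a ticket in the developer's normal queue, keeping the fix inside the existing workflow.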
Making every front-end developer a DevSecOps expert creates a far more holistic approach to web application and native application security. This approach is more proactive and preventative - and a lot less expensive and time-consuming over the long haul. Adopting a DevSecOps approach is only one part of a broad and inevitable transition for all developers toward assuming more responsibility for application security - and creating a world where security starts with the code.