Top 5 'Need to Know' Coding Defects for DevSecOps
Integrating static analysis into the development cycle can prevent coding defects and deliver secure software faster.
Security practitioners are accustomed to intervening at the end of the software development process to identify security vulnerabilities, many of which could have been prevented with earlier intervention. To address this problem, developers who are already under pressure to deliver increasingly complex software faster and less expensively are being recruited to implement security earlier in the development cycle under the "shift-left" movement.
To understand the obstacles facing developers in meeting new security requirements, consider the five most common coding defects and how to address them.
1. Memory Errors
Errors in reading memory can leak sensitive information, compromising confidentiality and integrity, while errors in writing memory can subvert the flow of execution, affecting all three components of the security triad: confidentiality, integrity, and availability. Common examples include buffer overrun/underrun and use-after-free (UAF) errors. Even the most skilled programmers can inadvertently introduce these flaws, which are difficult to detect and appear even in well-tested, safety-certified code. Coding standards are often employed to reduce memory errors, but they are not sufficient: catching memory errors early in the development cycle requires deep static analysis techniques such as data flow analysis and symbolic execution.
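As a minimal sketch of the overrun pattern, consider a fixed-size name buffer. The function name `set_name` and the 16-byte size are illustrative, not from the original article; the unsafe variant is shown only in a comment because running it with a long input is undefined behavior.

```c
#include <stddef.h>
#include <string.h>

/* Unsafe pattern: strcpy() performs no bounds check, so a long src
 * overruns dst -- the classic buffer overrun:
 *   void set_name_unsafe(char dst[16], const char *src) { strcpy(dst, src); }
 */

/* Bounded copy: truncate to the destination size and always terminate. */
void set_name(char *dst, size_t dst_size, const char *src) {
    if (dst_size == 0) return;
    strncpy(dst, src, dst_size - 1);
    dst[dst_size - 1] = '\0';
}
```

A static analyzer that tracks buffer sizes along data flows can flag the commented-out variant even when no test input is long enough to trigger it.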
2. Programming Errors
This class of errors is primarily caused by incorrect use of C/C++: uninitialized variables, double-freed pointers, and implicit conversions between signed and unsigned values, for example. Programming errors, some of which are exploitable, may not manifest during functional and regression testing even when they corrupt program state, yet they can cause serious failures in deployed systems. Static analysis can identify coding errors that stem from misunderstandings of programming-language semantics.
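The signed/unsigned conversion trap can be sketched as a length-checked copy. The names `copy_msg` and `MSG_MAX` are hypothetical; the bug appears only in the commented pattern, where a negative length passes the upper-bound check and is then silently converted to a huge unsigned value.

```c
#include <stddef.h>
#include <string.h>

#define MSG_MAX 16

/* Buggy pattern: a negative len slips past the upper-bound check, then
 * is implicitly converted to an enormous size_t inside memcpy():
 *   if (len > MSG_MAX) return -1;
 *   memcpy(dst, src, len);   // len = -1 becomes SIZE_MAX
 */

/* Fixed: reject negative lengths before any conversion takes place. */
int copy_msg(char *dst, const char *src, int len) {
    if (len < 0 || len > MSG_MAX) return -1;
    memcpy(dst, src, (size_t)len);
    return 0;
}
```

Functional tests rarely feed in a negative length, which is why this class of error tends to survive until an attacker supplies one.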
3. Dangerous Function Calls
Certain API functions are considered potentially harmful and insecure. The C library function gets() is a classic example: it has no way to limit the length of its input, so it readily produces destination buffer overflows. Other functions have implementation-specific behaviors that make them dangerous. Dangerous function calls are easily identified by static analysis tools that match calls against a list of known-risky functions.
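A common replacement for gets() is a bounded read built on fgets(), which was the standard remedy when gets() was removed in C11. The helper name `read_line` is illustrative; stripping the trailing newline mimics what gets() returned.

```c
#include <stdio.h>
#include <string.h>

/* gets() cannot limit how much it reads and was removed in C11.
 * fgets() bounds the read to the buffer size; strip the newline so
 * callers see the line content only. */
char *read_line(char *buf, size_t size, FILE *in) {
    if (fgets(buf, (int)size, in) == NULL) return NULL;
    buf[strcspn(buf, "\n")] = '\0';  /* remove trailing newline, if any */
    return buf;
}
```

A SAST tool with a banned-function list will flag any remaining gets() call site mechanically, making this one of the cheapest defect classes to eliminate.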
4. Misuse of Cryptography
Cryptographic functions are an important part of keeping data confidential, whether in motion or at rest. However, few developers are experts in cryptography, and misusing C library cryptographic functions can create security issues: choosing weak algorithms such as the Data Encryption Standard (DES) or MD5, or hardcoding keys or hash salts in the source. Misuse of cryptography can impact both confidentiality and integrity. Fortunately, these issues are easy to identify using static analysis.
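The hardcoded-key defect in particular can be sketched without any crypto library. The environment-variable name `MYAPP_KEY_HEX` and the function `load_key_material` are hypothetical; in production the key would come from a secrets manager or hardware key store rather than a plain environment variable, but the point is that it must not live in the binary.

```c
#include <stdlib.h>

/* Anti-pattern: a key compiled into the binary is identical for every
 * install, visible to anyone who has the executable, and impossible to
 * rotate without shipping a new build:
 *   static const unsigned char KEY[16] = { 0xde, 0xad, ... };
 */

/* Better: obtain key material at runtime from configuration the
 * operator controls. Returns NULL when no key has been provisioned. */
const char *load_key_material(void) {
    return getenv("MYAPP_KEY_HEX");
}
```

Static analyzers flag the anti-pattern by recognizing constant byte arrays flowing into key parameters of crypto APIs, alongside simpler checks for calls that select DES or MD5.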
5. Tainted Data Issues
Tainted data presents one of the most challenging issues for developers, and it, too, can impact integrity and confidentiality. At its core, tainted data is input that flows into a system without being validated to remove malicious elements and confirm it falls within the expected range of values. Data-injection vulnerabilities are very hard to find through manual inspection after the fact.
To detect tainted data issues, data that flows into the system through any form of input (e.g., users, devices, or sockets) must be traced from its source (where it enters the software) to its sink (where it is used). Before this data is used in API calls, to access data structures, or in any other program logic, it must be validated; otherwise, it can enable injection exploits such as format string injection, Lightweight Directory Access Protocol (LDAP) injection, or SQL injection. Static analysis can trace these flows and provide easy-to-understand warnings that help programmers prevent dangerous situations. To do this well, the analysis must perform data flow analysis and abstract execution to evaluate which execution paths are possible.
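The source-to-sink idea can be sketched with untrusted text selecting an entry from a fixed table. The names `pick_action` and `ACTIONS` are illustrative; the tainted string stands in for input arriving from a user, socket, or device, and the array index is the sink.

```c
#include <errno.h>
#include <stdlib.h>

/* Tainted data travels from a source (user, socket, file) to a sink
 * (array index, query string, format string). Validate before the sink. */
static const char *const ACTIONS[3] = { "status", "restart", "shutdown" };

const char *pick_action(const char *tainted) {
    char *end;
    errno = 0;
    long idx = strtol(tainted, &end, 10);
    /* Validate: the whole string parsed, no overflow, expected range. */
    if (end == tainted || *end != '\0' || errno == ERANGE) return NULL;
    if (idx < 0 || idx >= 3) return NULL;
    return ACTIONS[idx];  /* sink: safe only because of the checks above */
}
```

A taint-tracking analyzer warns when any path exists from the source to the index expression that skips these checks, which is exactly the flow a human reviewer struggles to spot across translation units.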
Static Analysis for Detecting Vulnerabilities
Static analysis, also known as static application security testing (SAST), inspects an application's source and binary code to detect possible security vulnerabilities, including the top five coding errors above. Since SAST can run within developers' continuous integration/continuous delivery (CI/CD) workflows, it supports rather than slows down agile development processes. In fact, it can accelerate software development and reduce costs by discovering flaws while a developer is writing code, so they can be fixed before testing and well before an application goes into production. As such, SAST serves a critical function in improving code security and should be part of any "shift left" DevSecOps effort.