Improving on Layered Cyber Defense
At what point does a layered defense begin to hinder security effectiveness as it increases network complexity?
For years the security industry has recommended a layered defense to protect against cyber threats: the practice of deploying multiple security products at various control points throughout a network. The intention is to broaden threat detection and prevention capabilities, increasing the chance of catching an attack while gathering detailed information on system and network activity for ongoing threat analysis. In practice, this means multiple security products deployed on each system as well as throughout a network and its segments. The downside is that a layered defense tends to include overlapping features and duplicate capabilities, and the added complexity can tax processing resources, network bandwidth and management overhead. Each additional system also increases the volume of traffic generated by security applications and edge devices such as firewalls, intrusion detection systems and SIEMs. Together, these systems produce volumes of application, system, log, network and other data that erode staff responsiveness and the time available to evaluate network activity.
The question isn't whether well-executed layering offers value. At issue is the point at which a layered defense begins to hinder security effectiveness as it increases network complexity. Layering can burden a network with multiple systems that generate volumes of conflicting and duplicate data, inflating the network's workload and straining performance and resource provisioning. Because many security systems don't share data, they act as information silos, which can restrict access to important information. This limits staff in their capacity to evaluate network or security operations, uncover threats or pinpoint areas of vulnerability. Silos can obscure vital information, impeding analysis and hampering staff's ability to assemble the variety of information needed for comprehensive investigation, mitigation and response.
Traditional security utilizes an 'outside-in' methodology
Many security products tend to use "outside-in" oriented protection schemes. That is, they monitor application or system processing results in order to identify a threat or suspicious activity, typically by reviewing and comparing application output against a defined baseline. But a sophisticated attack can occur during application processing, making the application itself captive to the attacker's direction. And a competent attacker knows how to disguise their actions, modify an application's output, and remove or mask evidence or remnants of their activity so that the application and its output appear normal.
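To make the limitation concrete, here is a minimal, hypothetical sketch of an outside-in check that only ever sees an application's output and compares it to a baseline. The field names and thresholds are illustrative inventions, not taken from any specific product.

```python
# Hypothetical "outside-in" monitor: it inspects only output records and
# compares them to a predefined baseline. Field names are illustrative.

BASELINE = {
    "status": {"ok", "retry"},        # expected status values
    "max_payload_bytes": 4096,        # expected upper bound on response size
}

def looks_suspicious(output_record: dict) -> bool:
    """Flag a record only if its *output* deviates from the baseline."""
    if output_record.get("status") not in BASELINE["status"]:
        return True
    if output_record.get("payload_bytes", 0) > BASELINE["max_payload_bytes"]:
        return True
    return False

# An attacker who tampers with processing but emits baseline-conforming
# output is invisible to this kind of check:
tampered = {"status": "ok", "payload_bytes": 1200}
print(looks_suspicious(tampered))     # False -> the tampering is missed
```

Because the monitor never sees what happened inside the application, any attack that keeps the output looking normal passes unnoticed.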
There are tens of thousands of security products from thousands of vendors, each offering multiple products with unique menus and operational capabilities. The products differ in their ability to gather, identify, analyze, detect, prevent and share threat information, and they show wide variation in threat efficacy even when working from the same information. All of this variation strains organizations' ability to select, configure and deploy security solutions optimally. The mix of security and network systems deployed together means every company builds a custom network environment, further complicated by local and remote devices, owned and hosted data centers, and cloud computing. The data that applications produce and process is accessed and stored across multiple local and remote locations. And every network is subject to continual modification: new systems are added, existing systems are patched, updated or upgraded, and changes are made to address new or emerging threats.
Adding more layers of security to large, growing and complex networks can erode staff capacity to maintain a secure environment. The traffic generated by each new layer requires significant staff time to parse and correlate into digestible amounts for analysis and investigation. A layered defense makes sense as a concept; however, because of the issues it can create, simply adding another layer in every situation is a problematic methodology for keeping a growing network secure.
The intelligent application defense
A core requirement of security is to protect key applications and their data processing activities. Many industries identify their own key application processes and develop methodologies to protect them. One such methodology is instrumenting critical systems with embedded local sensors as part of their core functionality. This is done in many areas, for instance in aircraft flight and power systems, automobile drive and safety systems, factory environments and other complex systems. An instrumented system has both physical and software sensors integrated into its normal operation; the sensors monitor and gather operational detail, feeding the data to an instrumentation management system for analysis and response. A software application can employ the same methodology, with multiple software sensors embedded into the application to provide dynamic process monitoring and data gathering, and with correlation and analysis handled by a management system. This provides active threat review and response during application processing rather than after threat actions have already completed. Because proactive sensors can detect and prevent threat activity in process, they can interrupt an attack and stop it from damaging or compromising an application.
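The sketch below illustrates the general idea in simplified form: a software sensor is embedded into an application function and consults a management component while the operation is in flight, so a decision can be made before the work completes. The SensorManager class, the blocking rule and the function names are all hypothetical, invented for illustration.

```python
# Minimal sketch of application instrumentation with an embedded software
# sensor reporting to a (hypothetical) management component that can veto
# an operation before it runs.

import functools
import time

class SensorManager:
    """Stand-in for an instrumentation management system."""
    def __init__(self):
        self.events = []

    def review(self, event: dict) -> bool:
        """Record the event and return True to allow it, False to block it."""
        self.events.append(event)
        # Illustrative rule only: block queries carrying an inline SQL comment.
        return "--" not in str(event.get("args", ""))

manager = SensorManager()

def sensor(operation_name):
    """Embed a sensor into an application function via a decorator."""
    def wrap(func):
        @functools.wraps(func)
        def monitored(*args, **kwargs):
            event = {"op": operation_name, "args": args, "ts": time.time()}
            if not manager.review(event):      # decision made *during* processing
                raise PermissionError(f"{operation_name} blocked by sensor")
            return func(*args, **kwargs)
        return monitored
    return wrap

@sensor("db.query")
def run_query(sql):
    return f"executed: {sql}"

print(run_query("SELECT * FROM users WHERE id = 7"))
# run_query("SELECT * FROM users WHERE id = 7 --' OR 1=1")  # would be blocked in flight
```

The point of the sketch is the placement of the check: the sensor observes the operation as part of the application's own execution, not by inspecting its output afterward.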
Effective protection against today's sophisticated threats requires intimate access to an application's processing attributes. According to Jeff Williams, co-founder and CTO of Contrast Security, "The best way to secure an application or API is by instrumenting it with sensors that can directly monitor application execution and report behavior that reveals vulnerabilities and exploit attempts. Traditional approaches like scanning running applications and analyzing source code simply don't have the full context of the running application. This lack of context leads to overwhelming errors to deal with, both false positives and false negatives. The instrumentation approach relies on embedding sensors in an application that load when the application code loads and operate as an intimate part of the application itself. The security sensors are infused into an application's processing activity so that the instrumentation is used to monitor and analyze every transaction, track all data flows, and verify each path of executed code. This approach provides instant and accurate feedback to developers so that they can fix vulnerabilities in stride and check in clean code. Because it's fully distributed, this approach also scales effectively, running in parallel across any number of applications."
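One element Williams mentions, tracking data flows, can be illustrated with a simple taint-tracking sketch: data from an untrusted source is marked, the mark follows it through the application, and an embedded check fires if it reaches a sensitive operation unsanitized. This is a deliberately simplified stand-in, not Contrast Security's agent or API; the Tainted class and the function names are hypothetical.

```python
# Simplified taint-tracking sketch: follow untrusted data from source to sink.

class Tainted(str):
    """String subclass that marks data as untrusted and keeps the mark
    when the string is concatenated with other strings."""
    def __add__(self, other):
        return Tainted(str(self) + str(other))
    def __radd__(self, other):
        return Tainted(str(other) + str(self))

def from_request(value: str) -> Tainted:
    """Source: data arriving from the outside world is marked as tainted."""
    return Tainted(value)

def sanitize(value: str) -> str:
    """Sanitizer: returns an untainted copy after (illustrative) cleaning."""
    return str(value.replace("'", "''"))

def execute_sql(query: str):
    """Sink: an embedded check fires if tainted data arrives unsanitized."""
    if isinstance(query, Tainted):
        raise RuntimeError("tainted data reached a SQL sink without sanitization")
    print("running:", query)

user_id = from_request("7' OR '1'='1")

unsafe = "SELECT * FROM users WHERE id = '" + user_id + "'"            # stays tainted
safe   = "SELECT * FROM users WHERE id = '" + sanitize(user_id) + "'"  # taint removed

execute_sql(safe)       # allowed
# execute_sql(unsafe)   # would raise: tainted data reached the sink
```

Production instrumentation engines do this by hooking the runtime rather than subclassing strings, but the flow, from marking a source to verifying a sink during execution, is the same idea in miniature.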
Reduced complexity and increased protection
Most layered security today is based on after-the-fact detection technology. Bringing layering into the twenty-first century requires monitoring and response capabilities built into the application itself. Embedded sensors and application instrumentation generate the intimate knowledge needed for earlier, more proactive threat identification and protection. They let staff deploy defense layers where they matter most, delivering protection at the point where data is processed, without increasing complexity. The next phase in layered defense is deeper engagement within every application, removing the risk involved in processing an organization's vital information.
— Dan Reis, SecurityNow Expert