Dark Reading is part of the Informa Tech Division of Informa PLC


Application Security

Dan Reis

Improving on Layered Cyber Defense

At what point does a layered defense begin to hinder security effectiveness by increasing network complexity?

For years, the security industry has recommended a layered defense against cyber threats: the practice of deploying multiple security products at various control points throughout a network. The intention is to broaden threat detection and prevention capabilities, increasing the chance of detection while gathering detailed information on system and network activity for ongoing threat analysis. In practice, this means multiple security products deployed on each system as well as throughout a network and its segments.

The downside is that a layered defense tends to have overlapping features, duplicated capabilities, and greater complexity, which can strain processing resources, network bandwidth, and management overhead. Each additional system also increases the volume of traffic generated by security applications and edge devices such as firewalls, intrusion detection systems, and SIEMs. Together, these systems can produce volumes of application, system, log, network, and other data that erode staff responsiveness and the time available to evaluate network activity.

The question isn't whether well-executed layering offers value; it's the point at which a layered defense begins to hinder security effectiveness as it increases network complexity. Multiple systems generate volumes of conflicting and duplicate data that can balloon a network's workload, degrading performance and complicating resource provisioning. Because many security systems don't share data, they act as information silos, keeping important information out of reach. This limits staff capacity to evaluate network or security operations, uncover threats, or pinpoint areas of vulnerability. Silos can obscure vital information, impeding analysis and hampering staff's ability to assemble the variety of information needed for comprehensive investigation, mitigation, and response.

Traditional security utilizes an 'outside-in' methodology
Many security products use "outside-in" oriented protection schemes: they monitor application or system processing results in order to identify a threat or suspicious activity. These solutions review and compare application output against a defined baseline to detect threat activity. But a sophisticated attack can happen during application processing, making the application captive to attacker direction. A competent attacker knows how to disguise their actions and modify an application's output, removing or masking evidence of their activity so that the application and its output appear normal.
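The weakness of output-baseline checking can be made concrete with a minimal sketch. The baseline values and function names below are hypothetical, invented for illustration; the point is that the monitor sees only the finished output, so an attacker who restores the expected values evades it entirely.

```python
# Illustrative "outside-in" monitor: compare finished application output
# against a recorded baseline. (Hypothetical names and values.)

BASELINE = {"status": "ok", "records_written": 100}

def outside_in_check(observed: dict) -> list:
    """Flag any output field that deviates from the expected baseline."""
    alerts = []
    for key, expected in BASELINE.items():
        if observed.get(key) != expected:
            alerts.append(f"anomaly in {key!r}: {observed.get(key)!r} != {expected!r}")
    return alerts

# A clumsy tampered run is caught...
print(outside_in_check({"status": "ok", "records_written": 73}))
# ...but an attacker who masks the output to match the baseline is invisible.
print(outside_in_check({"status": "ok", "records_written": 100}))
```

The monitor never observes what happened *during* processing, which is exactly the gap the instrumentation approach discussed below the next section aims to close.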

There are tens of thousands of security products from thousands of vendors. Each vendor offers multiple products with unique menus and operational capabilities. The products all differ in their ability to gather, identify, analyze, detect, prevent, and share threat information, and they show wide variation in threat efficacy even when working from the same information. These variations strain organizations' ability to select, configure, and deploy security solutions optimally. And because each organization deploys its own mix of security and network systems, every company effectively builds a custom network environment. The mix grows more complicated still with local and remote devices, owned and hosted data centers, and cloud computing, with application data accessed and stored across multiple local and remote locations. Every network is also subject to continual modification: new systems are added; existing ones are patched, updated, or upgraded; and changes are made to address new or emerging threats.

Adding more layers of security to large, growing, and complex networks can undermine staff capacity to maintain a secure environment. The traffic generated by each new layer requires significant staff time to parse and correlate into digestible amounts for analysis and investigation. A layered defense makes sense as a concept; however, because of the issues it can create, simply adding another layer in every situation is a problematic methodology for keeping a growing network secure.

The intelligent application defense
A core requirement of security is to protect key applications and their data-processing activities. Many industries identify their own key application processes and develop methodologies to protect them. One such methodology is instrumentation: building embedded local sensors into critical systems as part of their core functionality. This is done in aircraft flight and power systems, automobile drive and safety systems, factory environments, and other complex systems. An instrumented system has both physical and software sensors integrated into its normal operation; the sensors monitor and gather operational detail, feeding the data to an instrumentation management system for analysis and response. A software application can employ the same methodology using embedded software sensors tied into an instrument management system: multiple sensors embedded in the application provide dynamic process monitoring and data gathering, with correlation and analysis handled by the management system. This enables active threat review and response during application processing instead of after threat actions have completed. Because proactive sensors can detect and prevent threat activity in process, they can stop an attack before it damages or compromises the application.
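As a rough sketch of the idea (my own illustration, with hypothetical names, not a vendor's implementation), a software sensor can be a wrapper around a critical function that reports to a central instrument manager and can block a suspicious call while it is still in flight:

```python
# Hypothetical embedded software sensor: a decorator observes a critical
# function's inputs during processing and reports to a central manager,
# which collects events from every sensor for correlation and analysis.

from functools import wraps

class InstrumentManager:
    """Collects events from all embedded sensors."""
    def __init__(self):
        self.events = []
    def report(self, sensor_name, event, detail):
        self.events.append((sensor_name, event, detail))

manager = InstrumentManager()

def sensor(name, mgr):
    def decorate(fn):
        @wraps(fn)
        def wrapped(*args, **kwargs):
            mgr.report(name, "enter", {"args": args})
            # In-process check: the call can be blocked before damage occurs.
            if any("DROP TABLE" in str(a).upper() for a in args):
                mgr.report(name, "blocked", {"reason": "suspicious input"})
                raise PermissionError("blocked by embedded sensor")
            result = fn(*args, **kwargs)
            mgr.report(name, "exit", {"result": result})
            return result
        return wrapped
    return decorate

@sensor("query-sensor", manager)
def run_query(sql):
    return f"executed: {sql}"
```

Unlike the outside-in approach, the sensor participates in the call itself: it sees the inputs before the function runs, so a masked or normal-looking output cannot hide the attempt from it.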

Effective protection against today's sophisticated threats requires intimate access to an application's processing attributes. According to Jeff Williams, co-founder and CTO of Contrast Security, "The best way to secure an application or API is by instrumenting it with sensors that can directly monitor application execution and report behavior that reveals vulnerabilities and exploit attempts. Traditional approaches like scanning running applications and analyzing source code simply don't have the full context of the running application. This lack of context leads to overwhelming errors to deal with, both false positives and false negatives. The instrumentation approach relies on embedding sensors in an application that load when the application code loads and operate as an intimate part of the application itself. The security sensors are infused into an application's processing activity so that the instrumentation is used to monitor and analyze every transaction, track all data flows, and verify each path of executed code. This approach provides instant and accurate feedback to developers so that they can fix vulnerabilities in stride and check in clean code. Because it's fully distributed, this approach also scales effectively, running in parallel across any number of applications."
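The "track all data flows" part of that description can be illustrated with a toy taint-tracking sketch. This is my own simplified illustration of the general technique, not Contrast Security's implementation; real instrumentation also propagates the taint tag through string operations, which this toy omits.

```python
# Toy data-flow tracking: untrusted input is tagged at the source, and the
# sink refuses tagged data that has not passed through sanitization.

class Tainted(str):
    """A string tagged as untrusted at its entry point."""

def from_request(value):
    # Source: tag data as it enters the application (hypothetical entry point).
    return Tainted(value)

def sanitize_id(value):
    # Validation clears the taint: the int round-trip returns a plain str.
    if not str(value).isdigit():
        raise ValueError("invalid id")
    return str(int(value))

def build_query(user_id):
    # Sink check: block unsanitized untrusted data before it reaches SQL text.
    if isinstance(user_id, Tainted):
        raise ValueError("tainted data reached SQL sink")
    return "SELECT * FROM users WHERE id = " + user_id
```

Because the check runs inside the application at the exact point where data meets the sink, it has the "full context" the quote describes, rather than inferring a problem from output after the fact.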

Reduced complexity and increased protection
Most layered security today is based on after-the-fact detection technology. Bringing layering into the twenty-first century requires monitoring and response capabilities built into the application itself. Embedded sensors and application instrumentation can generate the intimate knowledge required for proactive, early-stage threat identification and protection. This lets staff deploy defense layers where they matter most and deliver optimal protection at the point where data is processed, without increasing complexity. The next phase in layered defense is deeper engagement within every application, removing the risk involved in processing an organization's vital information.

— Dan Reis, SecurityNow Expert
