Will 2014 be the year of change for the security industry? Not if we continue to approach information security in the same ways we have for the past three decades. It is time to move beyond the hyperbolic claims of next-generation security. To address current threats and to reduce risk, we require empirical data and now-generation technology.
Today, right now, we can significantly reduce risk by using big data threat analytics, and by analyzing security products based on empirical data and practical deployment use cases. In this way, organizations can better understand the limits of their current security infrastructure. Here are some examples of where we are and where we need to go.
Dynamic, not static, risk assessment
The way in which we assess true risk and apply security countermeasures has become predictable and static. The security products that are deployed do not differ much across industry sectors. Additionally, security budgets are cyclical, and strategies are often based on historical information. Risk continues to be measured as a snapshot in time, which significantly increases the time to threat detection and protection. There is no silver bullet that guarantees 100 percent protection, but moving from static to dynamic risk assessment will allow us to begin modeling risk as it varies at any given point, in near real-time.
Dynamic risk assessment requires us to examine risk from multiple angles by leveraging big data analytics. With the correct approach and key data points, well-known algorithms can be applied across multiple key indicators to accurately predict and forecast threats against an organization. You might consider this a far-fetched claim, but it’s not.
Variable risk: a new way to bake a cake
A few years ago, an executive from a Fortune 500 organization observed, "We have all the ingredients to make a cake, but we lack the ability to bake it." The comment challenged me to rethink our approach to security and strive for a bold alternative. To do this, it was necessary to move beyond the comfort zone of industry-dictated security best practices and approaches to reducing risk. We can continue to throw money at various point products that might close temporary risk gaps exposed by recent breaches. Or we can instead use a variable risk model, in which accurate information across multiple indicators provides the data necessary to purchase and deploy security solutions that significantly reduce risk.
In this model, we use multiple top-level indicators to establish a variable-risk score. This requires some work, but a security net with gaps is ineffectual. With our new model we can significantly reduce the gaps with accurate information plugged into a new risk equation that offers a pragmatic approach to addressing risk: Attack Surface (Threat Intelligence) + Threat Modeling = Variable Risk.
- Attack Surface: This differs radically depending on industry vertical, geolocation, and the size of the information technology (IT) budget. The attack surface comprises the operating systems and applications targeted by the adversary, including common desktop environments, mobile devices, and bring-your-own-device (BYOD). The extent to which these key indicators can be inventoried is a critical factor in tailoring security that is prescriptive for an organization.
- Threat Intelligence: This describes the multiple threat feeds that provide near real-time intelligence on validated known and unknown malware, vulnerabilities, and exploits. Key to this intelligence is determining the type of attack vector being used and which operating systems and applications are vulnerable. Other key indicators that aid detection and protection are the dropped file name, command-and-control IP address, URL, country code, and severity of the vulnerability.
- Threat Modeling: This provides the ability to model known threats that are able to bypass current security products as they apply to an attack surface. This includes intrusion prevention systems (IPS), next generation firewalls (NGFW), secure web gateways (SWG), web application firewalls (WAF), antivirus (AV), and breach detection systems (BDS).
There should be a clear understanding of the limitations of an organization’s security infrastructure as well as the time required to detect threats and protect against them. This knowledge will allow the organization to address the true risk to its environment. It will also assist when the organization seeks to renew or replace a security vendor. Although this type of data is available today, it is static and typically tested with known vulnerabilities. Live threat modeling, however, allows for dynamic testing that takes into account threats that have not yet been named. This information is valuable in calculating variable risk.
- Variable Risk Rating: This provides the true measure of risk to an environment at any given point in time.
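The risk equation above (Attack Surface + Threat Intelligence + Threat Modeling = Variable Risk) can be sketched in code. The following is a minimal, purely illustrative model: the asset classes, weights, scales, and the 0–100 score are my own assumptions for demonstration, not part of any published NSS Labs methodology.

```python
from dataclasses import dataclass

@dataclass
class AssetClass:
    """One slice of the attack surface (hypothetical inventory entry)."""
    name: str         # e.g. "Windows desktop", "BYOD mobile"
    exposure: float   # 0.0-1.0 share of the organization's attack surface

@dataclass
class ThreatIndicator:
    """One feed entry combining threat intel with threat-modeling results."""
    attack_vector: str       # e.g. "drive-by download"
    affected_asset: str      # asset class the threat targets
    severity: float          # 0.0-1.0 severity from the threat feed
    bypasses_controls: bool  # did live threat modeling show a bypass?

def variable_risk(assets, indicators):
    """Combine attack surface, threat intelligence, and threat modeling
    into a single 0-100 variable-risk score (illustrative only)."""
    exposure = {a.name: a.exposure for a in assets}
    score = 0.0
    for t in indicators:
        base = exposure.get(t.affected_asset, 0.0) * t.severity
        # Threats that bypass deployed controls count at full weight;
        # threats the current stack blocks are heavily discounted.
        score += base if t.bypasses_controls else base * 0.2
    # Normalize by indicator count so the score stays comparable over time.
    return round(100 * score / max(len(indicators), 1), 1)

assets = [AssetClass("Windows desktop", 0.6), AssetClass("BYOD mobile", 0.3)]
threats = [
    ThreatIndicator("drive-by download", "Windows desktop", 0.9, True),
    ThreatIndicator("phishing link", "BYOD mobile", 0.5, False),
]
print(variable_risk(assets, threats))  # prints 28.5
```

Because the score is recomputed whenever the feeds or the inventory change, it behaves as the dynamic, point-in-time measure described above rather than a static annual snapshot.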
Today’s security environment is dynamic and complicated. The threat landscape and the attack surface are constantly changing. Every organization will experience patient zero (the first endpoint compromised when an organization is breached). The ability to reduce the time to detection and prevention is crucial in mitigating a breach.
The variable risk model improves the signal-to-noise ratio by focusing on what really matters in an environment. To remain competitive and secure in today’s global environment, organizations require a tailored approach specific to their attack surfaces. Waiting for next-generation security products to become the status quo only increases an organization’s chances of becoming a statistic, or of discovering too late that it has been one for the past 16 to 18 months.
John Pirc is research vice president at NSS Labs. He is a noted security intelligence and cybercrime expert, an author, and a renowned speaker, with more than 15 years of experience across all areas of security.