At its most basic, a consistent and meaningful insider threat detection program has two components: data and people. Here’s how to put them together.

Scott Weber, Managing Director, Stroz Friedberg

June 1, 2015

It’s no secret that your organization – like any other – has data that can help reveal when an employee may be at risk and could pose an insider threat. Gathering the right information and assembling the right working group of professionals to evaluate it is often overlooked, yet it is a critical component of an insider threat detection program.

Individuals who have been given access to a company's networks and facilities, including employees, are in the best position to inflict serious damage on the organization. Unlike a distant hacker plotting an attack from the other side of the world, insiders have far easier access to your firm, other employees, and sensitive information. The diverse risks they pose include espionage and IP theft, sexual misconduct, sabotage, and workplace violence. Edward Snowden is just one example, albeit perhaps the most notorious.

The statistics are quite disconcerting. Ninety-three percent of U.S. organizations believe they are vulnerable to insider threats, according to Vormetric’s 2015 Insider Threat Report, and 60% of polled companies have reported some type of attempt to steal proprietary information. Further, theft of trade secrets costs businesses an estimated $250 billion per year, a figure expected to double in the next decade.

The breadth and severity of this threat, and a company’s responsibility to contain it, are unmatched. Board members and CEOs, through their CISOs, CIOs, CSOs, HR executives, and compliance professionals, are all obligated to maintain control of their organization -- even when employees number in the tens of thousands around the world.

But what many companies don’t realize is that they already have much of the information they need to do this. With the right insider threat detection and prevention programs, organizations can not only minimize risk but also prevent many insider attacks before they begin.

At its most basic, a consistent and meaningful program has two parts: data and people.

The data portion begins with technical risk indicators, the most common form of enhanced insider threat tracking in use today. These traditional tools, such as data loss prevention and security information and event management (SIEM) software, spot potentially illicit activities in progress and in the recent past by identifying anomalies in a person’s use of technology. For example, those tools will detect and provide an alert if a person is copying numerous files through remote access at 3:00 a.m. Specialized forms of these tools also exist for tracking specific types of misconduct, such as fraud or insider trading.
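As a simplified illustration of the kind of rule such tools apply, the sketch below flags bulk file copies made over remote access during off hours. The log format, field names, and thresholds here are hypothetical, not drawn from any particular DLP or SIEM product.

```python
from datetime import datetime

# Hypothetical access-log records: (user, timestamp, action, file_count, remote)
events = [
    ("jdoe", datetime(2015, 5, 30, 3, 12), "copy", 240, True),
    ("asmith", datetime(2015, 5, 29, 14, 5), "copy", 3, False),
]

def flag_off_hours_bulk_copy(events, start=22, end=6, threshold=50):
    """Flag remote bulk file copies outside normal working hours."""
    alerts = []
    for user, ts, action, count, remote in events:
        off_hours = ts.hour >= start or ts.hour < end
        if action == "copy" and remote and off_hours and count >= threshold:
            alerts.append((user, ts.isoformat(), count))
    return alerts

print(flag_off_hours_bulk_copy(events))
# The 3:00 a.m. remote copy of 240 files is flagged; the midday copy is not.
```

Real products layer many such rules, plus statistical baselines per user, but the core idea is the same: detect deviations from a person's normal use of technology.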

Less common in the infosec toolkit are non-network, personal behavioral risk indicators. These are forward-looking metrics that track and assess an individual’s psychological propensity to carry out an attack. Exploration into the psychology of language, known as psycholinguistic analysis, has been used for decades to reveal a person’s motivations, stressors, and propensity to act. Today, psycholinguistic analysis can be used to identify indicators in digital communications. Emails and chats do not have to be pored over one by one; the analysis can happen in bulk. Word choice and the frequency of word use can be analyzed across a body of communications to statistically track dozens of behavioral risk indicators at once. Analysts can then detect shifts in behavior, alerting them when someone might be a risk.
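The bulk word-frequency approach can be sketched in a few lines. This is a toy illustration only: the five-word lexicon below is invented, whereas real psycholinguistic lexicons are far larger and empirically validated.

```python
from collections import Counter
import re

# Hypothetical word list for one behavioral indicator (e.g. expressed hostility);
# real psycholinguistic lexicons are far larger and validated.
INDICATOR_WORDS = {"hate", "angry", "furious", "unfair", "revenge"}

def indicator_rate(text, lexicon=INDICATOR_WORDS):
    """Fraction of words in a message that fall in the indicator lexicon."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in lexicon)
    return hits / len(words)

baseline = ["Thanks for the update, looks good.",
            "Let's sync on the report tomorrow."]
recent = ["This review process is unfair and I hate it.",
          "I'm furious about being passed over again."]

base_avg = sum(indicator_rate(m) for m in baseline) / len(baseline)
recent_avg = sum(indicator_rate(m) for m in recent) / len(recent)
print(recent_avg > base_avg)  # a sustained rise would prompt closer review
```

Running the same computation over dozens of indicator lexicons at once is what lets analysts track many behavioral signals across a large body of communications simultaneously.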

A multidisciplinary team

The people are the second critical piece of the program. Executives from IT, information security, physical security, human resources, and legal should meet regularly, as a multidisciplinary insider threat review team, to examine the various risk indicators and any relevant anecdotal evidence. Information security can detect any concerning data behavior and anomalous activity on the network. HR can report if anyone has voiced recent complaints or concerns about an individual or social group. Physical security can check building access logs and refresh pre-employment background checks.  It’s also appropriate to involve someone directly responsible for supervision of the individuals in question.

Further analysis of the various data and information collected can help the team make sense of the internal risk landscape. The Critical Pathway to Insider Risk, developed by researchers and investigators sponsored by the Department of Defense, the Defense Personnel Security Research Center, Carnegie Mellon’s Insider Threat Team, and affiliated behavioral scientists, can assign a risk score to an individual in question based on the behavioral data. Over time, a person’s score on this scale can be compared against their own history, their department, or the company average at a global, regional, or local level, as well as against insiders who have acted out in the past.
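A minimal sketch of that kind of comparison is below. The scores, scales, and thresholds are invented for illustration; the actual Critical Pathway scoring model is not reproduced here.

```python
import statistics

# Hypothetical behavioral-risk scores (arbitrary 0-100 scale)
department_scores = [12, 15, 9, 14, 11, 13, 10, 16]
employee_history = [11, 12, 14, 19, 26]  # most recent last

dept_mean = statistics.mean(department_scores)
dept_sd = statistics.pstdev(department_scores)

# How far the latest score sits above the departmental baseline
latest = employee_history[-1]
z_vs_dept = (latest - dept_mean) / dept_sd

# Compare against the employee's own prior trajectory as well
own_mean = statistics.mean(employee_history[:-1])
rising = latest > own_mean

print(round(z_vs_dept, 1), rising)
```

A score far above both the peer baseline and the individual's own history is the sort of shift that would prompt the review team to look more closely, with all the caution the next section describes.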

Using the Critical Pathway, the team can determine the organization’s best response to a potential threat; how an organization reacts to an insider threat can either prevent an attack or provoke one. Often, a high-risk individual will be on the brink of attack, but will only launch into action after an ill-planned intervention, such as an abrupt firing. This kind of “maladaptive organizational response” can be avoided when the multidisciplinary group carefully considers all of the sensitivities of a high-risk case. The goal is not just mitigation, but prevention.

Cyber security specialists often say attacks are unavoidable; it’s “when, not if.” But most insider threats are different. Organizations have the data and the management expertise to catch many attacks before they occur or escalate. And with this ability comes the responsibility to use it wisely. Harm to the organization is harm to everyone who derives a living from it, to shareholders, and to the public at large.

About the Author(s)

Scott Weber

Managing Director, Stroz Friedberg

Scott Weber is a Managing Director at Stroz Friedberg, based in the New York office. He is responsible for the firm's technology and advisory services involving the application of advanced psycholinguistic algorithms to Big Data. Mr. Weber helps clients extract value through predictive analytics, natural language processing, social media exploitation, and business intelligence for a range of purposes: augmenting investigations with insightful information; proactively helping organizations detect, prevent, manage, and mitigate insider threats and security risks; and providing insight into organizational health and other human resources issues.
