Sound insider threat detection programs combine contextual data and a thorough knowledge of employee roles and behaviors to pinpoint the biggest risks.

David A. Sanders, Director of Insider Threat Operations at Haystax

April 2, 2020

6 Min Read

How does a company with 25,000 employees and a four-person insider threat team detect and mitigate its insider threats? Or, to put the math more simply, how can one analyst continuously monitor and run cases on 6,250 employees? The short answer: they can't.

Chief security officers and chief information security officers are challenged by tightening budgets, staffing shortages, and increasingly stringent insider threat program requirements from government customers, even as they face board and/or shareholder pressure to prevent threats to the organization's personnel, finances, systems, data, and reputation.

What's the solution? The only logical answer is to laser-focus on the highest-risk employees: those in positions of trust who are most likely to perpetrate fraud, information disclosure, workplace violence, espionage, sabotage, or other adverse events.

I recommend the following four-step approach to identifying and deterring high-risk insiders.

Step 1. Use all available data to establish context — early.
Context is critical to the analysis process. When analysts see an alert, or group of alerts, they ask five questions:

  1. "Who" is this person? What is their role and what are they working on? Are they a user with privileged access? Have there been past security incidents?

  2. "What?" What device is the person using? Was company IP or customer data involved?

  3. "Where?" What is their physical location (office, VPN, on travel, coffee shop)?

  4. "When?" Sunday afternoon or after hours during the workweek?

  5. "Why?" Is the activity work-related and within the scope of their role and project? Has the person done it before? Do others with similar roles do this?

User and entity behavior analytics (UEBA) tools can provide some context, such as name, title, start date, status, department, location, manager, and watchlists, which may indicate access levels or high-risk activity. However, these attributes are typically used to trigger elevated risk scores only when specific technical activity occurs.

Other contextual data that companies should consider obtaining are onboarding records, work history, work patterns, travel and expense records, badge and printer logs, performance ratings, and training records.

Most importantly, contextual data should be available at the beginning of the analytical process so high-risk users can be identified straight away. Then all subsequent analytical activity can be focused on them, rather than on the never-ending stream of alerts concerning low-risk insiders.
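As a minimal illustration, the Python sketch below joins an incoming alert to a pre-assembled context record so the "who" question is answered before triage begins. The field names are illustrative assumptions, not any specific UEBA product's schema:

    from dataclasses import dataclass, field

    # Hypothetical context record assembled up front from HR, badge,
    # travel, and training systems. Fields are illustrative only.
    @dataclass
    class EmployeeContext:
        employee_id: str
        role: str
        department: str
        privileged_access: bool = False
        watchlists: list[str] = field(default_factory=list)
        past_incidents: int = 0

    def answer_who(alert: dict, directory: dict[str, EmployeeContext]) -> EmployeeContext | None:
        """Resolve the 'who' behind an alert before any scoring happens."""
        return directory.get(alert.get("employee_id", ""))

    directory = {
        "e1001": EmployeeContext("e1001", "database administrator", "IT",
                                 privileged_access=True, past_incidents=1),
    }
    alert = {"employee_id": "e1001", "event": "bulk_download", "when": "Sun 14:05"}

    # The same alert is triaged differently depending on who triggered it.
    ctx = answer_who(alert, directory)
    if ctx and (ctx.privileged_access or ctx.past_incidents):
        print(f"Prioritize: {ctx.role} with {ctx.past_incidents} past incident(s)")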

Step 2. Identify high-risk insiders based on access and roles.
Based on their roles and levels of access, broad groups of insiders can be classed as potentially high-risk (executives, enterprise administrators, and database administrators), while others remain low-risk (recruiters, marketing employees, and communications staff).

Risk levels also can vary among employees with similar access and roles. Consider a finance department in which a small group of employees (Group A) is directly engaged in compiling consolidated financial reports, while another group (Group B) has limited access and prepares only isolated subsets of information for those reports. Group A clearly poses a greater risk of illegally disclosing information than Group B. There may even be an administrative assistant outside either group who has access to the reports before publication in order to print them, elevating their risk level above others in the same role.
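A minimal sketch of this kind of role-and-access baseline, using the finance example above (the roles, access types, and weights are all hypothetical):

    # Illustrative baseline risk by role; the specific values are assumptions.
    ROLE_BASELINE = {
        "executive": 0.7,
        "database administrator": 0.7,
        "recruiter": 0.2,
        "marketing": 0.2,
    }

    ACCESS_ADJUSTMENT = {
        "consolidated_financials": 0.25,   # Group A: full pre-release reports
        "financial_subset": 0.05,          # Group B: isolated subsets only
    }

    def baseline_risk(role: str, accesses: list[str]) -> float:
        """Start from the role baseline, then adjust for specific access."""
        score = ROLE_BASELINE.get(role, 0.3)   # unknown roles get a default
        score += sum(ACCESS_ADJUSTMENT.get(a, 0.0) for a in accesses)
        return min(score, 1.0)

    # The administrative assistant who prints the consolidated reports
    # outranks peers in the same role without that access.
    print(baseline_risk("administrative assistant", ["consolidated_financials"]))  # 0.55
    print(baseline_risk("administrative assistant", []))                           # 0.30

The point is not the specific numbers but that access, not job title alone, drives the baseline.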

Step 3. Gather and evaluate behavioral indicators.
Malicious insiders often develop tactics and techniques to overcome the limits of their position in an organization and their level of access. Edward Snowden is an example. But even Snowden exhibited observable indicators that, in hindsight and taken together, could (and should) have raised alarms.

The following behaviors may indicate increased potential for insider risk:

  • Security incident history

  • Behavioral problems

  • Substantiated HR/ethics cases

  • Attendance issues

  • Unusual leave patterns

  • Foreign travel/contacts

  • Unusual hours

  • Reports of violence outside work

  • Alcohol/illegal drug abuse

  • Threats against the company, a manager, a co-worker, or a customer

  • Feelings of being under-appreciated or underpaid

  • Change in demeanor

  • Refusal of work assignments

  • Disengagement from the team

  • Confrontation with co-worker/manager

  • Negative social media posts

  • Financial issues

  • Arrests

  • Policy violations

  • Data exfiltration

  • Accessing high-risk websites

  • Sudden deletion of files

  • Unauthorized access attempts

The gathering and use of evidence for these behaviors can be a very delicate matter for some companies, if not completely off-limits. That said, the goal is to provide critical behavioral indicators that inform the risk model described in Step 4.
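However the indicators are gathered, each observation should reach that model as a structured, sourced, and weighted piece of evidence. A minimal sketch follows; the field names, weights, and 180-day decay horizon are illustrative assumptions, not a prescribed schema:

    from dataclasses import dataclass
    from datetime import date

    # A minimal evidence record for one behavioral indicator.
    @dataclass(frozen=True)
    class Indicator:
        name: str        # e.g., "unusual_hours", "policy_violation"
        source: str      # badge logs, HR case system, DLP tool, etc.
        observed: date
        weight: float    # SME-assigned strength, 0 to 1

    def decayed_weight(ind: Indicator, today: date, horizon_days: int = 180) -> float:
        """Older observations matter less; a simple linear decay to zero."""
        age = (today - ind.observed).days
        return max(0.0, ind.weight * (1 - age / horizon_days))

    evidence = [
        Indicator("unusual_hours", "badge_logs", date(2020, 3, 15), 0.3),
        Indicator("substantiated_hr_case", "hr_case_system", date(2020, 2, 1), 0.6),
    ]
    for ind in evidence:
        print(ind.name, round(decayed_weight(ind, date(2020, 4, 2)), 3))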

Step 4. Develop a model for risk scoring based on context and behaviors.
User contextual data from Step 1, insider roles and access levels identified in Step 2, and the behavioral indicators gathered in Step 3 all need to be evaluated in a model that is purpose-built to assess and prioritize insider risk.

I have found that the most effective analytic approach is to employ a probabilistic model developed in collaboration with diverse subject-matter experts to identify high-risk individuals.

The model is essentially a risk baseline that represents the combined knowledge of subject-matter experts in security, psychology, fraud, counterintelligence, IT network activity, and other disciplines. Each model node represents behaviors and stressors that, when broken down into their most basic elements, are measurable in data that can be applied as evidence to the model.

The model's outputs are risk scores for each individual, continuously updated as new data becomes available. It is vital that the model also provide transparency through its entire chain of reasoning, and that personally identifiable information be masked so that individual privacy is protected.
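As a greatly simplified stand-in for a full SME-built probabilistic model, the sketch below combines hypothetical per-indicator probabilities with a noisy-OR, reports every contributing node so the chain of reasoning stays visible, and replaces the employee's identity with a one-way pseudonym:

    import hashlib

    def noisy_or(probabilities: list[float]) -> float:
        """P(risk) = 1 - product of (1 - p_i) over observed indicators."""
        result = 1.0
        for p in probabilities:
            result *= (1.0 - p)
        return 1.0 - result

    def mask(employee_id: str) -> str:
        """One-way pseudonym so analysts see scores, not names."""
        return hashlib.sha256(employee_id.encode()).hexdigest()[:10]

    # Hypothetical per-indicator probabilities, refreshed as data arrives.
    observations = {
        "unusual_hours": 0.15,
        "data_exfiltration_alert": 0.40,
        "substantiated_hr_case": 0.25,
    }
    score = noisy_or(list(observations.values()))

    # Transparency: report every contributing node alongside the score.
    print(mask("e1001"), round(score, 3),
          sorted(observations, key=observations.get, reverse=True))

A production model would be built and tuned with subject-matter experts, but even this toy version shows how transparency and privacy can coexist with continuous scoring.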

With the right types of data, not just from network monitoring systems but also the behavioral indicators and open-source data listed above, the highest-risk insiders will quickly become apparent.

Conclusion
Any sound insider threat mitigation program requires a combination of policies, processes, and technologies — and the right leadership to communicate and drive program implementation across the enterprise.

Even with all the right pieces in place, however, the program should not be only about hunting down bad actors. On the contrary, once high-risk users are identified — and assuming they haven't done anything illegal — companies should proactively engage with them, working collaboratively to reduce their risk and get them back to using their full talents and energies.

After all, there's a reason they were entrusted with insider access in the first place.


About the Author(s)

David A. Sanders

Director of Insider Threat Operations at Haystax

David A. Sanders is Director of Insider Threat Operations at Haystax, a business unit of Fishtech Group, where he is responsible for deploying the Haystax Insider Threat Mitigation Suite to the company's enterprise and public-sector clients and supporting the optimization of their existing insider threat programs. David has two decades of experience in program and project management, software development and database design, including eight years as a trailblazer in the development and implementation of advanced insider threat mitigation programs. Prior to joining Haystax, he spent five years designing and managing the insider threat program at Harris Corporation, now L3Harris Technologies. He also served on the U.S. government's National Insider Threat Task Force (NITTF). David holds a Bachelor of Science degree from Virginia Tech and a Master of Science degree from George Mason University.

