Dark Reading is part of the Informa Tech Division of Informa PLC




Prioritizing High-Risk Assets: A 4-Step Approach to Mitigating Insider Threats

Sound insider threat detection programs combine contextual data and a thorough knowledge of employee roles and behaviors to pinpoint the biggest risks.

How does a company with 25,000 employees and a four-person insider threat team detect and mitigate its insider threats? Or, to put the math more simply, how does one analyst continuously monitor and run cases on 6,250 employees? The short answer is: They can't.

Chief security officers and chief information security officers are challenged by tightening budgets, staffing shortages, and increasingly stringent insider threat program requirements from government customers, even as they face board and/or shareholder pressure to prevent threats to the organization's personnel, finances, systems, data, and reputation.

What's the solution? The only logical answer is to laser-focus on the highest-risk employees — that is, those in positions of trust who are most likely to perpetrate fraud, information disclosure, workplace violence, espionage, sabotage, or other adverse events.

I recommend the following four-step approach to identifying and deterring high-risk insiders.

Step 1. Use all available data to establish context — early.
Context is critical to the analysis process. When analysts see an alert, or group of alerts, they ask five questions:

  1. "Who?" Who is this person? What is their role, and what are they working on? Are they a user with privileged access? Have there been past security incidents?
  2. "What?" What device is the person using? Was company IP or customer data involved?
  3. "Where?" What is their physical location (office, VPN, on travel, coffee shop)?
  4. "When?" Sunday afternoon or after hours during the workweek?
  5. "Why?" Is the activity work-related and within the scope of their role and project? Has the person done it before? Do others with similar roles do this?

User and entity behavior analytics (UEBA) tools can provide some context such as name, title, start date, status, department, location, manager, and watchlists, which may indicate access levels or high-risk activity. However, these attributes typically are used to trigger elevated risk scores only when specific technical activity occurs.

Other contextual data that companies should consider obtaining are onboarding records, work history, work patterns, travel and expense records, badge and printer logs, performance ratings, and training records.

Most importantly, contextual data should be available at the beginning of the analytical process so high-risk users can be identified straight away. Then all subsequent analytical activity can be focused on them, rather than on the never-ending stream of alerts concerning low-risk insiders.
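As a minimal sketch of this context-first approach, the following Python snippet enriches a raw alert with directory context before any deeper analysis. Every field name, the `UserContext` record, and the triage rule are illustrative assumptions for the sketch, not any particular UEBA product's schema:

```python
from dataclasses import dataclass, field

# Hypothetical user-context record; the fields mirror the attributes the
# article lists (role, status, watchlists, incident history). Names are
# assumptions, not a real product's schema.
@dataclass
class UserContext:
    name: str
    title: str
    department: str
    privileged: bool
    watchlists: list = field(default_factory=list)
    past_incidents: int = 0

def enrich_alert(alert: dict, directory: dict) -> dict:
    """Attach who/what/where/when context to a raw alert up front, so
    alerts on low-risk users can be filtered out before analyst review."""
    ctx = directory.get(alert["user_id"])
    enriched = dict(alert, context=ctx)
    # Triage rule (assumption): only alerts involving privileged or
    # watchlisted users, or users with prior incidents, go to an analyst.
    enriched["needs_review"] = bool(
        ctx and (ctx.privileged or ctx.watchlists or ctx.past_incidents > 0)
    )
    return enriched

directory = {"u42": UserContext("A. Lee", "DBA", "IT", privileged=True)}
alert = {"user_id": "u42", "event": "bulk_download", "when": "Sun 14:05"}
print(enrich_alert(alert, directory)["needs_review"])  # True
```

The point of the sketch is the ordering: context is joined to the alert at ingestion, not looked up later, so the "who" and "why" questions are answerable the moment the alert appears.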

Step 2. Identify high-risk insiders based on access and roles.
Some broad groups of insiders can be classified as potentially high-risk (executives, enterprise administrators, and database administrators), while others remain low-risk (recruiters, marketing employees, and communications staff), based on their roles and levels of access.

Risk levels also can vary among employees with similar access and roles. Consider that within the finance department there is a small group of employees (Group A) that is directly engaged in compiling consolidated financial reports. Meanwhile, Group B has limited access as it prepares isolated subsets of information for the reports. Group A clearly poses greater risk of illegally disclosing information than Group B. There may even be an administrative assistant outside of either group who has access to the reports before publication in order to print them, elevating their risk level above others in the same role.
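A simple way to express this is a base-risk tier per role plus access-based adjustments, which captures the finance example above (Group A compiles the consolidated reports, Group B only prepares subsets, and the printing assistant sits outside both groups but shares Group A's access). The tiers and scores below are invented for illustration, not a recommended taxonomy:

```python
# Illustrative base-risk tiers by role (assumed values).
BASE_RISK = {
    "executive": 3,
    "enterprise_admin": 3,
    "database_admin": 3,
    "finance": 2,
    "recruiter": 1,
    "marketing": 1,
}

# Access-based adjustments distinguish employees with similar roles:
# access to the full consolidated report raises risk, subset access does not.
ACCESS_BUMP = {
    "consolidated_financials": 2,  # Group A, or the printing assistant
    "financial_subsets": 0,        # Group B
}

def role_access_risk(role: str, access: set) -> int:
    """Combine a role's base tier with adjustments for sensitive access."""
    score = BASE_RISK.get(role, 1)
    score += sum(ACCESS_BUMP.get(a, 0) for a in access)
    return score

print(role_access_risk("finance", {"consolidated_financials"}))  # Group A: 4
print(role_access_risk("finance", {"financial_subsets"}))        # Group B: 2
```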

Step 3. Gather and evaluate behavioral indicators.
Malicious insiders often develop tactics and techniques to overcome the limitations of their position in an organization and their level of access. Edward Snowden is an example. But even Snowden exhibited observable indicators that, in hindsight and taken together, could (and should) have raised alarms.

The following behaviors may indicate increased potential for insider risk:

  • Security incident history
  • Behavioral problems
  • Substantiated HR/ethics cases
  • Attendance issues
  • Unusual leave patterns
  • Foreign travel/contacts
  • Unusual hours
  • Reports of violence outside work
  • Alcohol/illegal drug abuse
  • Threats against the company, a manager, a co-worker, or a customer
  • Expressions of feeling underappreciated or underpaid
  • Change in demeanor
  • Refusal of work assignments
  • Disengagement from the team
  • Confrontation with co-worker/manager
  • Negative social media posts
  • Financial issues
  • Arrests
  • Policy violations
  • Data exfiltration
  • Accessing high-risk websites
  • Sudden deletion of files
  • Unauthorized access attempts

The gathering and use of evidence for these behaviors can be a very delicate matter for some companies, if not completely off-limits. That said, the goal is to provide critical behavioral indicators that inform the risk model described in Step 4.

Step 4. Develop a model for risk scoring based on context and behaviors.
User contextual data from Step 1, insider roles and access levels identified in Step 2, and the behavioral indicators gathered in Step 3 all need to be evaluated in a model that is purpose-built to assess and prioritize insider risk.

I have found that the most effective analytic approach is to employ a probabilistic model developed in collaboration with diverse subject-matter experts to identify high-risk individuals.

The model is essentially a risk baseline that represents the combined knowledge of subject matter experts in security, psychology, fraud, counterintelligence, IT network activity, etc. Each model node represents behaviors and stressors that, when broken down into their most basic elements, are measurable in data that can be applied as evidence to the model.

The model's outputs are risk scores for each individual, continuously updated as new data becomes available. It is vital that the model also provide transparency through its entire chain of reasoning, and that personally identifiable information be masked so that individual privacy is protected.

With the right types of data — not just from network monitoring systems but also including the behavioral indicators and open source data sources listed above — the highest-risk insiders will become quickly apparent.

Any sound insider threat mitigation program requires a combination of policies, processes, and technologies — and the right leadership to communicate and drive program implementation across the enterprise.

Even with all the right pieces in place, however, the program should not be only about hunting down bad actors. On the contrary, once high-risk users are identified — and assuming they haven't done anything illegal — companies should proactively engage with them, working collaboratively to reduce their risk and get them back to using their full talents and energies.

After all, there's a reason they were entrusted with insider access in the first place.

Related Content:

Check out The Edge, Dark Reading's new section for features, threat data, and in-depth perspectives. Today's featured story: "How to Evict Attackers Living Off Your Land."

David A. Sanders is Director of Insider Threat Operations at Haystax, a business unit of Fishtech Group, where he is responsible for deploying the Haystax Insider Threat Mitigation Suite to the company's enterprise and public-sector clients and supporting the optimization of ...
