
5 Lessons From The FBI Insider Threat Program

Finding ways to improve enterprise insider theft detection and deterrence

SAN FRANCISCO -- RSA CONFERENCE 2013 -- Insider threats may not have garnered the same sexy headlines as APTs at this year's RSA Conference. But two presenters from the Federal Bureau of Investigation (FBI) swung the spotlight back onto insiders during a session this week, offering enterprise security practitioners lessons the agency has learned over more than a decade of fine-tuning its efforts to sniff out malicious insiders in the fallout from the disastrous Robert Hanssen espionage case.


1. Insider threats are not hackers.
Often, people think of the most dangerous insiders as hackers running special technology tools on internal networks. Not so, says Patrick Reidy, CISO for the FBI.

"You're dealing with authorized users doing authorized things for malicious purposes," he says. "In fact, going over 20 years of espionage cases, none of those involve people having to do something like run hacking tools or escalate their privileges for purposes of espionage."

Reidy says that just under a quarter of the insider incidents tracked each year come at the hands of accidental insiders, or what he calls the "knucklehead problem." At the FBI, however, his insider threat team spends 35 percent of its time dealing with these incidents. He believes the FBI and other organizations should look for ways to "automate out of this problem set" by focusing on better user education. Shedding those simpler incidents gives insider threat teams more time to concentrate on the harder problem of malicious insiders, he says.

2. Insider threat is not a technical or "cybersecurity" issue alone.
Unlike many other issues in information assurance, the risk from insider threats is not a technical problem, but a people-centric problem, says Kate Randal, insider threat analyst and lead researcher for the FBI.

"So you have to look for a people centric solution," she says. "People are multidimensional, so what you have to do is take a multidisciplinary approach."

That starts with identifying your internal people, your likely enemies, and the data that would be at risk. In particular, who your people really are should be examined from three informational angles: cyber, contextual, and psychosocial.

"The combination of these three things is what's most powerful about this methodology," Randal says. "In an ideal world we'd want to collect as much about these areas [as possible], but that's never going to happen. So what's important is adopting a method working with your legal and managerial departments to figure out what works best within the limitations of your environment."

3. A good insider threat program should focus on deterrence, not detection.
For a time, the FBI poured effort into predictive analytics meant to flag insider behavior before malicious activity occurred. Rather than a powerful tool that stopped criminals before they did damage, the agency ended up with a system that was statistically worse than random at ferreting out bad behavior -- a worse predictor of malicious insider activity, Reidy says, than Punxsutawney Phil, the groundhog of Groundhog Day.

"We would have done better hiring Punxsutawney Phil and waving him in front of someone and saying, 'Is this an insider or not an insider?'" he says.

Rather than getting wrapped up in prediction or detection, he believes organizations should start first with deterrence.

"We have to create an environment in which it is really difficult or not comfortable to be an insider," he says, explaining that the FBI has done this in a number of ways, including crowdsourcing security by allowing users to encrypt their own data, classify their own data, and come up with better ways to protect data. Additionally, the agency has found ways to create "rumble strips" in the road to let users know that the agency has these types of policies in place and that their interaction with data is being used.

4. Detection of insider threats has to use behavioral-based techniques.
Following the failure of its predictive analytics effort, the FBI moved toward a behavioral detection methodology that has proved far more effective, Reidy says. The idea is to detect insider bad behavior closer to the "tipping point" at which a good employee goes rogue.

"We look at how people operate on the system, how they look contextually, and try to build baselines and look for those anomalies," he says.

Whatever data an organization analyzes, whether it is print behavior or file interactions, Reidy recommends a minimum of six months of baseline data before even attempting any detection analysis.

"Even if all you can measure is the telemetry to look at prints from a print server, you can look at things like what's the volume, how many and how big are the files, and how often do they do print," he says

5. The science of insider threat detection and deterrence is in its infancy.
According to Randal, it was bad science that led the FBI to its worse-than-random predictive model. Part of the issue is that, even now, the science of insider threat detection and deterrence remains in its infancy. One reason for its slow growth is that much of the existing research focuses only on data from the bad guys.

"So what the FBI has done is to really try to push this diagnostic approach of collecting data from and comparing it between a group of known bad and a group of assumed good [insiders] and try to apply that methodology to those three realms [cyber, contextual and psychosocial]."

In particular, some of the research the FBI has done with regard to psychosocial diagnostic indicators has been a bit surprising, she says.

"What we learned from this study is that some of the things we thought would be the most diagnostic in terms of disgruntlement or workplace issues really weren't that much," she says, explaining that more innate psychological risk factors come into play. For example, stress from a divorce, inability to work in a team environment, and exhibiting behaviors of retaliatory behavior all scored high as risk indicators when comparing the bad insiders with the good.

While enterprises will not be able to do the same kind of psychological screening that the FBI does with its employees, there are ways to incorporate this knowledge into insider prevention programs.

"You can try to elicit this information from other avenues: observables, behavioral manifestations, making supervisors more aware of the insider threat problem, and creating an environment where they may be more willing to report some of these things as they see them," she says. "One of the best resources that your security program has is the collaboration of the HR department."
