Federal agencies that work with classified info given guidance on rooting out bad actors -- but profiling potential bad actors isn't so simple, experts say

An Office of Management and Budget (OMB) memo sent this week to federal agencies that handle classified information has stirred debate over whether potential insider threats can be rooted out before they leak sensitive data or otherwise do harm with it. The memo, issued by the Obama administration in the wake of the WikiLeaks incident, recommends, among other things, that these agencies monitor individual employees' behaviors -- and even their states of mind.

But security experts argue that spotting the next Bradley Manning or some malicious insider hell-bent on leaking or selling government secrets isn't so cut-and-dried. Not all rogue employees look or act the same, so some might slip through the cracks while innocent ones can be falsely identified as threats. Profiling users isn't something most enterprises can realistically do, though it might be more accepted within a military or highly classified agency, experts say.

False positives, unfortunately, do happen, says Eric Cole, security analyst with Secure Anchor Consulting and author of several books on insider threats. "What's dangerous here is when it's not 100 percent valid. Just because someone is behind on their alimony or child payments doesn't mean they are going to harm the country," he says. But financial trouble is one of the red flags often used to ferret out potential insider threats, he says.

The OMB memo, which was first reported by MSNBC, was a follow-on to a Nov. 28 directive by the OMB for agencies to review how they secure classified data. They must complete their assessments by Jan. 28, wrote Jacob Lew, director of the OMB, in the Jan. 3 memo to agencies.

The memo includes a very specific checklist for implementing policies for preventing the leak of classified information, including monitoring employee behavioral patterns and psyches. The so-called Initial Agency Self-Assessment Program for User Access to Classified Information in Automated Systems includes several items associated with profiling the demeanor of an employee in order to determine his or her "trustworthiness," including whether the agency uses a psychiatrist or sociologist "to measure relative happiness as a means to gauge trustworthiness" and to measure "despondence and grumpiness as a means to gauge waning trustworthiness."

User behavior was also cited in such questions as: "Have you conducted a trend analysis of indicators and activities of the employee population which may indicate risky habits or cultural and societal differences other than those expected for candidates (to include current employees) for security clearances?"

Security analysts say the emphasis should be on access control to the data, not on Bob's bad mood. John Kindervag, senior analyst with Forrester Research, says Manning himself didn't fit the bill as a typical threat. "The chat logs from him show he wasn't malicious in the way we'd think. He did go outside policy, but he [appeared to be] guided by conscience, not by financial gain," Kindervag says. "So was he trustworthy or not? Not with data [as it turned out]."

So it's not really about a user's trustworthiness, he says, but whether certain data should be accessible to him and whether he really needs it to do his job. "The only way to stop that leak is not gauging the trustworthiness of Bradley Manning … but does Bradley Manning need access to the diplomatic cables describing Moammar Gadhafi's Eastern European nurse? If not, he [should not be able] to reach that data," Kindervag says.
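Kindervag's need-to-know argument can be sketched in code. The following is an illustrative example only, not anything described in the memo or by the sources; all resource and role names are hypothetical. The point it demonstrates: a clearance by itself grants nothing unless the user's job role explicitly requires the specific data.

```python
# Hypothetical sketch of a need-to-know check: access is denied unless
# one of the requester's roles explicitly requires the resource,
# regardless of the requester's clearance level.

# Map each classified resource to the roles whose jobs require it.
NEED_TO_KNOW = {
    "diplomatic-cables": {"state-dept-analyst"},
    "field-reports": {"intel-analyst", "state-dept-analyst"},
}

def can_access(user_roles, resource):
    """Allow access only if some role of the user needs this resource."""
    allowed_roles = NEED_TO_KNOW.get(resource, set())
    return bool(set(user_roles) & allowed_roles)

# A cleared intelligence analyst with no need to know is still denied.
print(can_access({"intel-analyst"}, "diplomatic-cables"))  # False
print(can_access({"intel-analyst"}, "field-reports"))      # True
```

Under a model like this, the trustworthiness question never arises for data the user cannot reach in the first place.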

The feds traditionally have not done a good job preventing insider leaks, experts say. They don't really have an insider threat program in place, Cole says. "They know they have a problem. WikiLeaks is one of the many examples showing that they have a major issue," he says. "They need to recognize that the way they track controls and treat data no longer scales [such as with] an ACL list … it doesn't work because data is portable now," he says.

The feds need to limit user access to classified and sensitive data, and to run data leakage protection technology where the data sits rather than at the server level, he says.

"If you look at WikiLeaks, you will see that none of the basic measures were in place to prevent it," says Ken Ammon, chief strategy officer at Xceedium. "If you look at the memo, you don't get a strong sense that there's a rapid movement to look at near-term technology deployments to get better practices in place, which is the area that I'd be most concerned about. Much of what you see in the memorandum is already existing security policy and requirements that the government has in place. It is just testing the agencies and departments to see if they have complied with those policies and requirements."

Ammon says agencies instead should focus more on a "zero-trust" access model. "Agencies will stand the best chance of catching malicious action and intent prior to loss when they enforce zero-trust access control and log, audit, and investigate infringements of acceptable behavior," he says.

The Zero Trust Model proposed by Forrester's Kindervag last year -- basically a trust no one, monitor everyone approach -- contends that inside users are no more trusted than outsiders, and organizations should inspect all traffic in real-time, from the outside and on the inside.
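The core mechanic of that model -- verify and log every request identically, with no shortcut for insiders -- can be illustrated with a minimal sketch. This is not code from Forrester or any agency; the function and parameter names are hypothetical, and real deployments inspect network traffic rather than single function calls.

```python
# Illustrative zero-trust gate (hypothetical names): every request is
# authenticated, authorized, and logged the same way, whether it comes
# from inside or outside the network perimeter.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("zero-trust")

def handle_request(user, resource, authenticate, authorize):
    """Inspect every request; being an 'insider' grants no shortcut."""
    if not authenticate(user):
        log.warning("auth failure: %s -> %s", user, resource)
        return False
    if not authorize(user, resource):
        log.warning("denied: %s -> %s", user, resource)
        return False
    # Successful access is logged too, so there is an audit trail
    # to investigate after the fact -- the "log, audit, investigate"
    # step Ammon describes.
    log.info("granted: %s -> %s", user, resource)
    return True
```

The notable design choice is that the deny paths and the allow path all write to the log: under zero trust, legitimate access is as much a part of the audit record as refused access.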

Attempting to measure a user's trustworthiness might not be a perfect science, but it can provide some useful indicators, says Alex Hutton, principal on research and intelligence for the Verizon Business RISK team. "In the IT realm, trust is the inverse of risk. When you say, 'I trust this person,' there's a low level of risk in this relationship," Hutton says. Rooting out potential insider threats based on human qualities and patterns, if deployed, must go hand-in-hand with the proper controls to protect classified information, he says.

"In IT risk assessment, we don't normally" factor in these criminology-type assessments, he notes.

And if employed, monitoring and evaluating individuals has to be done in a consistent and fair way, Secure Anchor's Cole notes. "Unless you have probable cause, you can't isolate one person from another" this way, he says.

Plus it's just not realistic to monitor each person's every move on the network. "There's no way it's going to scale," Cole says. "You'd need to create a list of criteria, so that if you [the user] do these things, then you will be marked as a suspicious entity. Then you will be monitored in this fashion."
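Cole's criteria-list idea can be sketched as a simple scoring rule. Everything here is hypothetical -- the indicator names, weights, and threshold are invented for illustration -- but it shows the consistency he argues for: every user is measured against the same explicit criteria, and only those who cross the threshold get the heavier monitoring.

```python
# Hypothetical sketch of a criteria list for flagging users for closer
# monitoring: indicator names, weights, and the threshold are all
# illustrative, not from any agency program.

INDICATORS = {
    "bulk_download": 3,        # large volume of classified data pulled at once
    "off_hours_access": 1,     # activity outside the user's normal schedule
    "failed_authorization": 2, # repeated attempts to reach out-of-role data
}
SUSPICION_THRESHOLD = 4

def flag_for_monitoring(observed_events):
    """Sum indicator weights; flag only users above the shared threshold."""
    score = sum(INDICATORS.get(event, 0) for event in observed_events)
    return score >= SUSPICION_THRESHOLD

# The same rule applies to everyone, addressing Cole's fairness concern.
print(flag_for_monitoring(["off_hours_access"]))                    # False
print(flag_for_monitoring(["bulk_download", "failed_authorization"]))  # True
```

Because the criteria are explicit and applied uniformly, no individual is singled out without a documented trigger -- the "probable cause" consistency Cole describes.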

The bigger threat? "The accidental insider threat -- who is not intentionally doing harm, but inadvertently did through his actions," Cole says.


About the Author(s)

Kelly Jackson Higgins, Editor-in-Chief, Dark Reading

Kelly Jackson Higgins is the Editor-in-Chief of Dark Reading. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise Magazine, Virginia Business magazine, and other major media properties. Jackson Higgins was recently selected as one of the Top 10 Cybersecurity Journalists in the US, and named as one of Folio's 2019 Top Women in Media. She began her career as a sports writer in the Washington, DC metropolitan area, and earned her BA at William & Mary. Follow her on Twitter @kjhiggins.

