Ex-NSA Director Rogers: Insider Threat Prevention a 'Contract'
Ret. Admiral Michael Rogers – who served as head of the NSA and the US Cyber Command from 2014 to 2018 – on how to handle the risk of insiders exposing an organization's sensitive data.
March 26, 2019
The Edward Snowden case in 2013 ushered in a new era of insider threat, leaving many businesses wondering how they could stop a determined user or contractor hell-bent on stealing or leaking secrets if even the National Security Agency (NSA) could not.
Snowden – who pilfered troves of spy-agency secrets and elite hacking tools and leaked them to select media outlets in what he considered a whistle-blowing act to expose government overreach – became an instant cautionary tale in how to prevent and detect rogue or careless user activity. Yet many organizations still struggle with insider threat strategies and with managing user privileges and access to data.
In an interview with Dark Reading earlier this month in San Francisco, (Ret.) Admiral Michael Rogers – who took over as director of the NSA and commander of the US Cyber Command upon the 2014 retirement of Gen. Keith Alexander, in the wake of the Snowden storm – shared his vision for the best approach to thwarting insider mistakes and attacks.
A major lesson for government and industry from the Snowden incident was that you have to get ahead of careless or malicious insider behavior, according to Rogers, who left his government posts last year and currently serves as an adviser to cybersecurity think-tank Team8.
"No. 1: Who has access to what data; what [access do they] need? That's very important," Rogers said. "And No. 2, understanding your population. For us in the government – at the NSA – it was uniform military, [civilians], and contractors. We had to build a strategy for three distinct populations in the workforce that sometimes operate with slightly different rules, slightly different responsibilities. What works for one doesn't necessarily work for the other."
Another lesson from the Snowden case was that it's not simply a matter of limiting contractors' access to data but of limiting all users' access, he said. "If you look at the history of insider challenges, it really reaches every demographic," Rogers said.
The key is to understand user behavior on and off your network that could signal potential for stress or risk, he explained. Stressed users can become security risks.
That doesn't mean monitoring users' activities outside of work. But some red flags can signal potential insider threat activity: if a user has engaged in actions that indicate higher risk, such as a criminal act, Rogers said, that could raise the likelihood of that user leaking or mishandling the organization's data.
It's about getting ahead of careless or malicious insider behavior. "We need to get better at predicting behavior," Rogers said.
Even some of the more obvious signs often get overlooked or dismissed. An employee looking over a colleague's shoulder at work or asking for that colleague's logins or passwords should be a red flag, for instance, he said. "These are all things I've actually seen happen, but no one said anything" at the time, Rogers said.
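To make the idea of tracking such red flags concrete, here is a minimal, hypothetical sketch of a risk-scoring check built from indicators like the ones Rogers describes. The flag names, weights, and review threshold are all invented for illustration; they are not an NSA or Team8 method, and the output is a prompt for human follow-up, not an accusation.

```python
# Hypothetical sketch: score a user against a small set of indicator flags.
# Flag names, weights, and threshold are assumptions for illustration only.
from dataclasses import dataclass, field

INDICATOR_WEIGHTS = {
    "recent_criminal_act": 5,        # off-network conduct that raises risk
    "requested_peer_credentials": 4, # asked a colleague for logins/passwords
    "shoulder_surfing_report": 3,    # seen looking over a colleague's shoulder
    "unusual_bulk_downloads": 4,     # anomalous volume of data access
}

REVIEW_THRESHOLD = 6  # arbitrary cutoff for triggering a human review

@dataclass
class UserRiskProfile:
    user_id: str
    flags: set = field(default_factory=set)

    def score(self) -> int:
        # Sum the weights of whichever indicators have been observed.
        return sum(INDICATOR_WEIGHTS.get(f, 0) for f in self.flags)

    def needs_review(self) -> bool:
        return self.score() >= REVIEW_THRESHOLD

profile = UserRiskProfile("u1001", {"requested_peer_credentials",
                                    "shoulder_surfing_report"})
print(profile.score(), profile.needs_review())  # 7 True
```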
However, subjecting users to overly intense scrutiny can backfire, he noted. It's a balance: "It's not security for security's sake," Rogers said. "And it should be in a way that does not violate employees' rights."
Organizations must protect the data they consider their competitive advantage. "Or like in our case [at NSA], it was the responsibility to make sure it didn't fall into the wrong hands," he said. "That control was also central. Not everybody in NSA had access to all the data; we had control for only those who needed it."
Giving users access only to the data they actually need to do their jobs is one of the key best practices for data protection and insider threat defense. Yet it is still not a widely adopted practice.
"That takes work," Rogers said. "A good, effective insider threat strategy requires a commitment on the part of the organization as a whole."
Emerging technologies like artificial intelligence (AI) and machine learning can help. "AI and machine learning have great applicability here. In my experience, most organizations actually have access to more data than they truly understand, and they're not optimized to use it," he said.
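As one example of the kind of machine learning applicability Rogers points to, the sketch below runs an off-the-shelf anomaly detector (scikit-learn's IsolationForest) over per-user activity features. The features and the data are fabricated for the example; an organization would substitute its own telemetry.

```python
# Illustrative sketch: flag anomalous user activity with an anomaly detector.
# Features and data are fabricated; a real deployment would use real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Fabricated features per user-day: [files_accessed, MB_downloaded, off_hours_logins]
normal = rng.normal(loc=[40, 120, 1], scale=[10, 30, 1], size=(500, 3))
suspicious = np.array([[400, 5000, 9]])  # a bulk-download outlier
activity = np.vstack([normal, suspicious])

# Fit on the observed activity; IsolationForest isolates outlying rows.
model = IsolationForest(contamination=0.01, random_state=0).fit(activity)
labels = model.predict(activity)  # 1 = normal, -1 = anomalous

print("flagged rows:", np.where(labels == -1)[0])  # should include index 500
```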
Users will make mistakes, Rogers said. The key is to incentivize them to avoid security missteps. "It's not about hammering them [for their missteps]; it's 'we work as a team to maximize security and efficiency ... and we respect you as an individual,'" an approach that also plays a key role in protecting the organization's valuable information, Rogers explained.
Rogers dismisses heavy-handed or onerous security and user policies that don't bring users into the discussion. "How do we engage in an insider threat strategy that doesn't drive people away?" he said. "We want [users] to know what we're doing. And we want to learn from them: What would be effective and resonate with you as a user? What, on the other hand, would be a disincentive?"
This gives users a stake in the security of the organization and its data. "I also believe in having a really frank discussion. There is a level of responsibility here – we acknowledge that. That responsibility can vary, but fundamentally if we're giving [the user] access ... there's a responsibility to ensure [its security]," Rogers said. "It's like a contract."
And that requires buy-in. "It takes time, it takes resources, and it's about [establishing a] culture," added Rogers, who later headlined an insider threat event held by Dtex Systems.