Dark Reading is part of the Informa Tech Division of Informa PLC

Risk
3/26/2019 09:30 AM

Ex-NSA Director Rogers: Insider Threat Prevention a 'Contract'

Ret. Admiral Michael Rogers - who served as head of the NSA and the US Cyber Command from 2014 to 2018 - on how to handle the risk of insiders exposing an organization's sensitive data.

The Edward Snowden case in 2013 ushered in a new era of the insider threat, leaving many businesses wondering: if the National Security Agency (NSA) could not stop a determined user or contractor hell-bent on stealing or leaking secrets, how could they?

Snowden – who pilfered and leaked troves of spy agency secrets and elite hacking tools to select media outlets in what he considered a whistle-blowing act to expose government overreach – became an instant cautionary tale on how to prevent and detect rogue or careless user activity. Yet many organizations today still continue to struggle with insider threat strategies and managing user privileges and access to data.

In an interview with Dark Reading earlier this month in San Francisco, (Ret.) Admiral Michael Rogers, who served as director of the NSA and commander of the US Cyber Command after the 2013 retirement of Gen. Keith Alexander in the wake of the Snowden storm, shared his vision for the best approach to thwarting insider mistakes and attacks. 

A major lesson for government and industry from the Snowden incident was that you have to get ahead of careless or malicious insider behavior, according to Rogers, who left his government posts last year and currently serves as an adviser to cybersecurity think-tank Team8.

"No. 1: Who has access to what data; what [access do they] need? That's very important," Rogers said. "And No. 2, understanding your population. For us in the government – at the NSA – it was uniformed military, [civilians], and contractors. We had to build a strategy for three distinct populations in the workforce that sometimes operate with slightly different rules, slightly different responsibilities. What works for one doesn't necessarily work for the other."

Another lesson from the Snowden case was it's not simply a matter of limiting your contractors' access to data but, rather, all users, he said. "If you look at the history of insider challenges, it really reaches every demographic," Rogers said.

The key is to understand user behavior on and off your network that could signal potential for stress or risk, he explained. Stressed users can become security risks.

That doesn't mean monitoring users' activities outside of work. But some red flags can signal potential insider threat activity: if a user has engaged in actions that indicate higher risk or problems, such as a criminal act, Rogers said, that could raise the risk of the user leaking or mishandling organization data.

It's about getting ahead of careless or malicious insider behavior. "We need to get better at predicting behavior," Rogers said.

Even some of the more obvious signs often get overlooked or dismissed. An employee looking over another's shoulder at work or asking for their logins or passwords should be a red flag, for instance, he said. "These are all things I've actually seen happen, but no one said anything" at the time, Rogers said.

However, subjecting users to overly intense scrutiny can backfire, he noted. It's a balance: "It's not security for security's sake," Rogers said. "And it should be in a way that does not violate employees' rights."

Organizations must protect the data they consider their competitive advantage. "Or like in our case [at NSA], it was the responsibility to make sure it didn't fall into the wrong hands," he said. "That control was also central. Not everybody in NSA had access to all the data; we had control for only those who needed it."

Giving users access only to the data they actually need to do their jobs is one of the key best practices for data protection and insider threat defense. But it's still not a widely adopted practice.
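The least-privilege principle Rogers describes can be sketched as a deny-by-default access check. This is a hypothetical illustration; the role and dataset names are invented, not any specific product's model:

```python
# Hypothetical least-privilege check: each role is granted an explicit
# allow-list of datasets; anything not granted is denied by default.
ROLE_DATASETS = {
    "analyst": {"threat-reports", "open-source-intel"},
    "hr": {"personnel-records"},
    "contractor": {"open-source-intel"},
}

def can_access(role, dataset):
    """Deny by default; allow only datasets explicitly granted to the role."""
    return dataset in ROLE_DATASETS.get(role, set())

print(can_access("contractor", "open-source-intel"))  # True
print(can_access("contractor", "personnel-records"))  # False
```

The point of the deny-by-default shape is that an unknown role, or a dataset nobody thought to grant, fails closed rather than open.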

"That takes work," Rogers said. "A good, effective insider threat strategy requires a commitment on the part of the organization as a whole."

(Ret.) Admiral Michael Rogers

Emerging technologies like artificial intelligence (AI) and machine learning can help. "AI and machine learning have great applicability here. In my experience, most organizations actually have access to more data than they truly understand, and they're not optimized to use it," he said.
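One simple form of the baselining Rogers alludes to is comparing each user's current activity against their own history and flagging large deviations. A minimal sketch, with illustrative data and an assumed threshold (real deployments tune this and use far richer features):

```python
# Flag a user whose file-access count today deviates sharply from their
# own historical baseline (simple z-score; threshold is an assumption).
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """True if today's count is more than `threshold` std devs above the user's mean."""
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu
    return (today - mu) / sigma > threshold

# A user who normally touches ~20 files suddenly touches 400:
history = [18, 22, 19, 21, 20, 23, 17]
print(is_anomalous(history, 400))  # True
print(is_anomalous(history, 24))   # False
```

The design choice worth noting is per-user baselines: as Rogers says, what is normal for one population (contractors, civilians, military) is not normal for another, so a single global threshold would misfire.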

Users will make mistakes, Rogers said. The key is to incentivize them to avoid security missteps. "It's not about hammering them [for their missteps]. It's 'we work as a team to maximize security and efficiency ... and we respect you as an individual,'" which also plays a key role in protecting the organization's valuable information, Rogers explained.

Rogers dismisses heavy-handed or onerous security and user policies that don't bring users into the discussion. "How do we engage in an insider threat strategy that doesn't drive people away?" he said. "We want [users] to know what we're doing. And we want to learn from them: What would be effective and resonate for you as a user? What, on the other hand, would be a disincentive?"

This gives users a stake in the security of the organization and its data. "I also believe in having a really frank discussion. There is a level of responsibility here – we acknowledge that. That responsibility can vary, but fundamentally if we're giving [the user] access ... there's a responsibility to ensure [its security]," Rogers said. "It's like a contract."

And that requires buy-in. "It takes time, it takes resources, and it's about [establishing a] culture," added Rogers, who later headlined an insider threat event held by Dtex Systems.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise ...

Comments
tdsan, User Rank: Ninja
4/12/2019 | 9:11:11 AM
Re: Learn from mistakes
Learn from your mistakes. I agree with that, but this has happened numerous times across the board at different times, so it is not that they need to learn. The question we need to ask is: when something of significance is built and brought to the attention of executive management, why do they allow the same thing to happen over and over again (malicious misuse of power)? Congress knew about the building of these devices/solutions, and they (Congress) asked the management staff to remove the controls (allow the system to monitor US citizens). I am not sure how they are learning if they keep doing the same thing over and over again, but at this point it will continue to happen no matter who is in office.

 

T
EdwardThirlwall, User Rank: Apprentice
4/12/2019 | 12:50:42 AM
Learn from mistakes
We all learn from past mistakes and in this scenario, the affected large organisations have to do just that. If that major data leak did not happen, they would have no clue that such a critical internal breach could even occur in the first place. Buck up and hope for a brighter future!
tdsan, User Rank: Ninja
3/28/2019 | 1:11:13 PM
Re: Stopping insider threat - Perimeter Security
I love the post about the three-year-old granddaughter and perimeter security. But the thing is, she did not have access to classified material, and the material did not affect the lives of hundreds of millions of people whose rights were being violated (3rd, 4th, 5th Amendments). So if the question were posed to her after numerous human and privacy rights were violated, would she give the same response as Snowden (Prism), Thomas Drake (TrailBlazer), William "Bill" Binney (ThinThread), and now Pegasus (FBI used to access Apple phones)? I am not sure how I would compare giving badges back to something that affects people all over the world, but to your point, go figure.

But anyway, to the points made by (Ret.) Admiral Michael Rogers:

▲ Who has access to what data; what [access do they] need? ... What works for one doesn't necessarily work for the other."

→ I agree with this statement wholeheartedly. I do think that information for one group does not necessarily work for the other, but if we were to look at the Snowden situation, who says it would not have come from military personnel? (Thomas Drake, an Air Force and Navy veteran, was a member of the military who was found guilty under the Espionage Act but was later exonerated of all charges.) This person brought information to his chain of command and was told to ignore it (to violate privacy rights and continue to overstep the bounds of government authority). Thomas Drake was a hero to the military and to people around the world, but he was arrested and treated like a second-class citizen. Go figure.

▲ The key is to understand user behavior on and off your network that could signal potential for stress or risk, he explained. Stressed users can become security risks.

→ So we go around validating a person's stress levels at work, where they are already under stress. I am not sure how that will help, because in an organization like this, when you walk in the door, you feel a level of stress due to the high level of responsibility and the nature of the work expected from this organization. Will people have to wear health bracelets to determine if their stress level is high, or will a form of AI come into play to monitor human vital signs? Not sure, but we will take that under advisement.

▲ "Or like in our case [at NSA], it was the responsibility to make sure it didn't fall into the wrong hands," he said. "That control was also central. Not everybody in NSA had access to all the data; we had control for only those who needed it."

→ Interesting, so how did Edward Snowden gain access to classified material, where he plugged a thumb drive into a classified system in Hawaii and extracted documents from various sources that talked about how data would be processed from points of presence and ISPs across the globe? Also, if I am not mistaken, wasn't the admiral on watch when the Shadow Brokers accessed the NSA's internal network and were able to publish on a website the tools the NSA used to access systems ranging from Windows, Linux, routers, etc.? It is interesting that someone tries to give advice on securing an environment when they themselves have been hacked and their information has been monetized for attacks against the US. I am not sure if this person should be providing any advice.

▲ Users will make mistakes, Rogers said. The key is to incentivize them to avoid security missteps. "It's not about hammering them [for their missteps], but 'it's we work as a team to maximize security and efficiency ... and we respect you as an individual'" that also plays a key role in protecting the organization's valuable information, Rogers explained.

→ I do agree with this statement, that we should be working together as a team, but the paramount problem in business or governmental environments stems from the fact that most people are not willing to listen (they just go with the flow). The admiral made a great point, but the people around him may not feel the same way. It could be due to their training or just an unwillingness to work with others; in some instances it stems from doing things from an authoritarian standpoint. (Remember, Thomas Drake went to his superior and stated that this practice was just wrong (4th Amendment surveillance responsibility), and they walked him out and tried to prosecute him. William "Bill" Binney went to his higher-ups and stated that he did not feel comfortable with releasing the internal application locks that were in place to protect American citizens from being spied on (ThinThread), and they came to his house and addressed him at gunpoint in his shower.)

▲ This gives users a stake in the security of the organization and its data. "I also believe in having a really frank discussion. There is a level of responsibility here – we acknowledge that. That responsibility can vary, but fundamentally if we're giving [the user] access ... there's a responsibility to ensure [its security]," Rogers said. "It's like a contract."

→ Interesting. A contract is based on three things: terms/conditions, money/value, and parties. OK, from a T&C (terms and conditions) standpoint, there needs to be language to protect the American people and not violate their rights as citizens. There needs to be language stating that if this information is brought to your attention, there will be open conversation and dialogue instead of being made to agree while looking down the barrel of a gun. Money: if the person is working and getting a reasonable rate, then that would be the value side of the contract (the higher the clearance, the higher the rate, based on responsibility). Parties: either party should be able to discuss concerns without being criticized, fired, or removed from their post (place of employment). But in most of the instances I described, they did this very thing and punished people who had served their country (malicious use of power).

I do agree that we should take responsibility to secure environments and protect vital government data, but let's not lose sight of the big picture: these individuals were shunned/punished because they wanted to protect the rights of American citizens. They should be looked at as heroes, and we need to review the controls we have in place so that this treatment is never realized again, especially for people who felt that this overstepped the bounds of the US Constitution.

T

 

 

 
REISEN1955, User Rank: Ninja
3/26/2019 | 1:41:19 PM
Re: Stopping insider threat - Perimeter Security
Some time ago, my then three-year-old granddaughter, Cariana, came to visit work with her mother and my wife. She loved pizza in the cafeteria and said hello to my colleagues. Of course, all three had visitor badges, and when leaving, this little person took all three badges and said, "They have to be returned!" She walked to the security desk and handed them in. She was almost adopted on the spot. Lesson: even this three-year-old got the concept of perimeter security BETTER than some employees do. Go figure.

Update: true, she did not have access to corporate data, and her computer skills, while good, are lacking at age 3. I would give her until 5 or 6 before she becomes a true hacker. LOL. The main thing is that she GOT a security concept that was basic and true.
ealmer, User Rank: Author
3/26/2019 | 10:52:12 AM
Stopping insider threat
Good stuff!

A smart CISO I worked with once set tripwires on sensitive data (at a cellular operator this is an obvious class of data) and would pick up the phone, as close to real time as possible, and call any user touching data they were not supposed to touch, to ask whether they really meant to do that.

It was usually enough to ensure that the specific user (and everybody who knew them) did not ever touch sensitive data again. It may not have deterred criminals that were already determined to steal data, but most insider threats develop over time and are a combination of the level of temptation the data poses combined with the ease of getting it.

Plugging obvious holes - technical and behavioral - helps keep borderline honest people honest. 
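The tripwire pattern this comment describes could be sketched roughly as below. The dataset names, allow-list, and alert text are hypothetical; a real system would feed events from audit logs and route alerts to the CISO's phone, not `print`:

```python
# Minimal "tripwire" sketch: flag any touch of a sensitive dataset by a
# user who is not on that dataset's allow-list. Names are illustrative.
SENSITIVE = {"call-detail-records"}           # tripwired datasets
ALLOWED = {"call-detail-records": {"alice"}}  # who may touch each one

def check_event(user, dataset):
    """Return an alert string for an unauthorized touch, else None."""
    if dataset in SENSITIVE and user not in ALLOWED.get(dataset, set()):
        return f"ALERT: {user} touched {dataset} -- call them now"
    return None

print(check_event("bob", "call-detail-records"))    # alert fires
print(check_event("alice", "call-detail-records"))  # None
```

As the comment notes, the value is less in catching determined criminals than in the prompt, visible follow-up call that keeps borderline-honest users honest.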

 

 
Kelly Jackson Higgins, User Rank: Strategist
3/26/2019 | 10:23:30 AM
Re: All too true
Adm. Rogers had some interesting insight into this topic, for sure. 
REISEN1955, User Rank: Ninja
3/26/2019 | 10:14:08 AM
All too true
Excellent article in every respect, and the Admiral hits every point succinctly and soundly. Access to ONLY the data users need to do their job is key security rule number 1 ... but many times it is bypassed for access to just EVERYTHING, and that is that. Users want to be educated but often are not. Central controls are fine, but individual entry points through users are also key. I also remember when epoxy would have worked great on secure computers to prevent insertion of a USB key. Just SEAL IT UP. Done, closed; now worry about something else. TSA-style searches work sometimes too for very, very secure systems. Spot-on article.