
Careers & People

3/1/2019
10:30 AM
Ira Winkler
Commentary

Security Experts, Not Users, Are the Weakest Link

CISOs: Stop abdicating responsibility for problems with users - it's part of your job.

There are countless articles, conference speakers, panelists, and casual conversations among IT and security personnel lamenting that users are the weakest link in security. The claim is that no matter how well you secure your organization, it takes just one user to ruin everything. While there's no doubt that a user can take down these "experts'" networks, the problem lies not with the user, but with the experts.

As I wrote in my previous column, user actions are expected and, most importantly, enabled by security staff. The problem with the expression "the users are the weakest link" is that it abdicates responsibility for stopping problems. Security professionals may believe that they did everything they could, but they're really just giving up.

All a Part of the System
Here is what's critical: Users are a part of the system. They are not accessories. They serve a business function that requires interaction with your organization's computer systems. To determine that a part of the system — users — will always be insecure and there is nothing that you can do about it is a failure on your part.

Consider just about any other discipline within an organization. Accounting has processes in place to deal with the expected human actions involving financial mistakes and malfeasance. You do not hear CFOs declare that they can't keep accurate financial records because users are the weakest link. COOs don't say their organizations can't run effectively because humans are involved in operations. Any CFO or COO who made such a claim would rightfully be fired: they are responsible for their processes, humans are a critical part of those processes, and they must figure out how to manage those people effectively.

CISOs who cannot figure out how to effectively manage the humans using the systems they are responsible for protecting should be disciplined, if not fired, because they are proclaiming their own failure to deal with a critical aspect of their systems. Just as systems must be designed to protect against expected external hacking attacks, they must be designed to protect against expected user actions.

One critical issue is that security professionals seem to believe that awareness training is the solution to human mistakes (and remember, this doesn't even address intentionally malicious actions). The reality is that although awareness training can be valuable, it is not perfect. This reliance on an imperfect countermeasure is what makes it negligent to proclaim users the weakest link.

Security professionals must realize that while awareness reduces risk, their job is not finished. First, consider that most awareness programs are poor: from experience, observation, and research, most are not achieving their goal of creating strong security behaviors. Even if they did, security professionals would still need to create comprehensive programs that implement supporting processes and technical countermeasures, accounting for both inevitable user error and malicious actions.

However, instead of security professionals acknowledging that they have failed to account for expected user failings or malfeasance, they blame the user. That is unacceptable.

While one of my previous columns described the need for a human security officer to address users from a comprehensive perspective, in short, you need a process in place that looks at potential user failings, including:

  • Identifying critical processes and the areas where users are most likely to create damage.
  • Analyzing and improving those processes to remove user decision-making, or specifying how decisions should be made if they cannot be removed.
  • Implementing technology that removes opportunities for users to cause damage, as well as technology that mitigates damage when proactive measures fail (a brief illustration follows this list).
  • Developing awareness programs that focus on teaching users how to make decisions and do their jobs according to the established processes.
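
To make the third bullet concrete, here is a minimal sketch (in Python, purely for illustration) of "technology that removes the opportunity": a mail-gateway-style check that quarantines attachment types users are commonly tricked into opening, instead of relying on each user to recognize a malicious invoice. The column does not prescribe any particular tool; the extension list and quarantine behavior below are assumptions made for this example.

# Hypothetical illustration only: hold risky attachment types for review
# instead of asking the user to make the call. The extension list is an
# assumption for this sketch, not a standard.
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".bat", ".docm", ".xlsm"}

def should_quarantine(filename: str) -> bool:
    """Return True if the attachment should be held for review rather than delivered."""
    name = filename.lower()
    # "invoice.pdf.exe" still ends in ".exe", so double extensions are caught too.
    return any(name.endswith(ext) for ext in RISKY_EXTENSIONS)

if __name__ == "__main__":
    for attachment in ("invoice.pdf", "invoice.pdf.exe", "report.docm"):
        action = "quarantine" if should_quarantine(attachment) else "deliver"
        print(f"{attachment}: {action}")

The point of a control like this is not the specific file types; it is that the risky decision is made by the process rather than by the user, which is exactly what the list above argues for.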

Just as CFOs and COOs cannot simply state that the user is the weakest link to justify failures in the processes they oversee, the CISO cannot blame users for failures in security processes. The user is an embedded component of organizational computer systems, and it is negligent not to put in place a comprehensive set of countermeasures to prevent, detect, and mitigate the anticipated failings of that component.


Join Dark Reading LIVE for two cybersecurity summits at Interop 2019. Learn from the industry's most knowledgeable IT security experts. Check out the Interop agenda here.

Ira Winkler is president of Secure Mentem and author of Advanced Persistent Security. View Full Bio
Comments
REISEN1955
User Rank: Ninja
3/1/2019 | 10:43:08 AM
Different category
This is argumentative if you place a user on the same platform as a security expert. They are entirely different animals. Some users, no matter how much education is thrown at them - and we need more of that, listen up, security experts - do NOT get it, ever. They won't. Why? A thousand reasons; most are ignorant of the tech stuff and some just live that way. Security experts know more than users, of course, but have a different realm of responsibility. We know not to click on an attached invoice - but put a whitelist or watchlist in front of us and we are on a different planet. No - we are both either as weak or strong as we choose to be.
Luna Tsee
User Rank: Apprentice
3/4/2019 | 2:15:19 PM
Who is at fault?

The analogy may not be perfect, but it does make a valid point. Largely, the issue of security is not one of blame, though there is some.

The point is simply that the people responsible for security are patching holes in a bad design. Remote user identity is a problem, and until the password as the primary identity method is replaced with a better one, the problem will remain.

Enforcing security tools like 2FA, 2SA, reCAPTCHA, long and cryptic passwords, and other requirements makes the user responsible for securing the system. Users must expend extra effort and conform to content producers' requirements (dongles, RSA keys, smartcards, upper case, lower case, special symbols, at least 13 characters, but no spaces or non-keyboard symbols) just to conduct Internet intercourse and "prove who they are." That's because the systems are not yet sophisticated enough to tell a human from a machine, the real you from a 'clone'.

We have driver's licenses, passports, charge cards. None of these places such an extraordinary level of participation in proving identity.

It should not be the user's responsibility to secure the systems they interact with.

That's the problem. We can argue about who's to blame - the user or the CISO - all day, but that doesn't solve the problem.

REISEN1955
User Rank: Ninja
3/5/2019 | 3:41:41 PM
Re: Who is at fault?
Secure password - use a SHA256 hash - easy to find if you have the noted file on another computer - extremely good security but hell to work with. Supercalifragilisticexpialidocious also works with an alternative-character approach, as does spelling it backwards.

paul.dittrich
User Rank: Strategist
3/1/2019 | 11:28:44 AM
A badly-flawed analogy
Using CFO or COO as an example of effectively managing people-based security risks is a flawed analogy and very unfair to CISOs / CIOs.

The CFO has a very strong and extensive set of detailed legal and regulatory requirements which codify many years (centuries?) of experience countering bad actors both internal and external.  The solutions are well known and CFOs enjoy a very high success rate of preventing or quickly detecting problems.

The COO has an alphabet soup of groups (again, based on many years of experience) to provide detailed guidance on how to mandate "safe" working conditions and processes.  And the COO is very unlikely to worry about a single human crippling or even destroying the entire company.

The CISO/CIO has much weaker legal / regulatory support to handle a threat landscape which is still rapidly evolving.  And their worst nightmare is a single user who either willfully or accidentally causes a major problem - a breach or an outage.

Nor do the CFO and COO have to worry about a CEO who never ever says "No" to a developer.....
Thor7077
User Rank: Apprentice
3/1/2019 | 12:15:04 PM
Must be a user
This sounds like a user with a bad experience blaming all the cybersecurity experts around them. It's like a patient upset that their doctor isn't specialized to treat every problem they have just because they are a 'doctor'. Just as you have foot doctors, you don't call one a bad doctor because they don't know how to perform heart surgery. Cyber folks aren't your one-size-fits-all fix either; they have their strengths and specializations in some areas and just can't fix an entire architecture's problems. This article was nothing more than finger-pointing.
REISEN1955
User Rank: Ninja
3/4/2019 | 10:47:23 AM
Re: Must be a user!!!
I deleted my secondary comment as it was unfair, but I am with a major firm in a malware forensics unit and deal with users all day long. My real feeling is that to compare and lump users in with security pros is an unfair mirror. Both have issues, but entirely different ones. I do wish that security pros would be allowed, or advocated for by the C-suite, to educate users more than they do.
paul.dittrich
User Rank: Strategist
3/1/2019 | 12:39:52 PM
Re: A badly-flawed analogy
Please see paragraph 4 of the original column.  Both CFO and COO are used as examples.
J3R3
User Rank: Apprentice
3/1/2019 | 1:23:17 PM
Flawed Analogy
If CFOs and COOs did not view users as the weak link in accounting and operations processes, then there would be no need for consequences when users fail to follow the appropriate processes. There would also be no need for separation of duties, cross training, or mandatory vacations. We could do away with audit departments entirely, and all of those government oversight positions would not exist. We have all of these things because CFOs and COOs have always known that users are the weakest link, and these are the detective and preventive controls that have been created to reduce the risk inherent in having humans as employees.

Mr. Winkler seems to be misunderstanding the meaning of "users are the weakest link."
jeffmaley
User Rank: Strategist
3/1/2019 | 1:45:33 PM
This column is flawed.
Users are absolutely the weakest link. You say that any COO or CFO who said that would be fired, but what we're seeing is a consistent trend toward automation, removing the user. Everyone agrees that people are flawed and the things they do are flawed. Pretending that's not the case is ignoring the obvious data to the contrary and waving away a valid problem.

Your proposed solution is literally every mature information security management system. You've put forth nothing new or innovative and are instead rehashing old ideas, ideas that are tried and true but could certainly be improved. I agree that awareness programs do not go far enough, but the solution isn't to give up, the solution is to make them better. Instead of awareness programs, we should be using evangelical programs. To reach the user community, we need to do a better job of encouraging, educating, and making them cognizant of their involvement in the process. If we're excited about security, we can help them be excited about security, too.
BradleyRoss
User Rank: Moderator
3/3/2019 | 7:55:47 PM
water is wet
If security management is saying that they can't provide security as long as somebody on the system might open a phishing email, they should be fired and the ashes distributed as a warning to future management. Part of the idea of least privilege, two-man rules, and other similar techniques is to limit the ability of a single action by a user to compromise the system. There should also be limits on what users can access from outside a controlled environment.

If you manage a computer system, you have to assume that two or three of the software tools are completely compromised, and you don't know which components are damaged. It's called eliminating single point of failure vulnerabilities, and identifying sections of fault trees where two or three problems can cause a disaster. Look at the design methods for nuclear reactors, aircraft, automobiles, and the handling of toxic materials.

Security is expensive

Security is not convenient

Security requires you to think

Live with it or it will kill you 