Operations // Identity & Access Management
2/4/2014
05:15 PM
Garret Grajek
Commentary

The Problem With Two-Factor Authentication

The failure of corporate security strategies to protect personal identity information from hackers resides more with system architecture than with authentication technology. Here's why.

For too long, enterprises have been looking for the perfect two-factor authentication. First, it was X.509, then hard tokens, then SMS, and now Push and biometrics. And still, hackers keep winning. Just look at what happened with Target, Neiman Marcus, Living Social, Snapchat, and others.

The problem isn't the two-factor authentication technology itself. More precisely, it isn't just the authentication step. It's the full integration: the storage, accessing, validation, and assertion of identity throughout the authentication process.

But don't take my word for it. The forensics on most recent hacks reveal that hackers did not break the authentication mechanism itself. Rather, they broke the integration -- the identity passing and storage. That tells me websites (cloud or enterprise-based) that demand bulletproof security need to understand how authentication (single- or two-factor) is provisioned, conducted, validated from enterprise information, and asserted to the final resource -- and ultimately how the trust is reused at other resources.

How the authentication is provisioned
By this, I mean how the ID itself is granted to the user and how the credentials are provided to the users. The authentication process (single- or two-factor) should be quantified and scrutinized for weaknesses. One of the best ways to increase security in this procedure is to remove all human interaction (think: how to remove help desk interaction). You can validate users based on enterprise data or third-party social IDs and other data sources. You can then grant users reusable two-factor authentication credentials such as an X.509 certificate, an identity card, a mobile OATH token, or just the device itself.
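To make the "mobile OATH token" option concrete: an OATH token boils down to a time-based one-time password (TOTP, RFC 6238), where server and client derive the same short code from a shared secret and the clock. A minimal sketch using only Python's standard library (the secret here is illustrative):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    counter = struct.pack(">Q", at // step)          # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Provisioning amounts to securely sharing `secret` with the user's device;
# validation is just recomputing the code server-side and comparing.
print(totp(b"12345678901234567890", at=59))  # RFC 6238 test secret, T=59
```

The registration step (delivering the secret to the device, typically via a QR code) is exactly the provisioning surface the article says must be scrutinized.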

Ideally, the registration process should be browser-based, which enables communication to match the client's native language automatically. Too often, however, each of these functionalities is siloed (e.g., coded after the two-factor product is purchased), and this is where the hacks occur. The hackers are breaching the architecture, not the authentication mechanism.

How and where the authentication takes place
Too often the validation algorithm is housed on servers or services beyond the enterprise's security control. These servers and services need to be scrutinized, because it is usually far easier for a hacker to breach the identity collection form (web or otherwise) -- via cross-site scripting, SQL injection, or another attack on the form collector -- than the authentication mechanism itself. Of course, this raises a raft of additional questions. Who wrote these collection forms? Are they housed on secure enterprise servers? Have they been pen tested? Were they written by an outsourced contract service or hosted on insecure servers?
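Of the attack vectors named above, SQL injection against the form collector is defeated by parameter binding rather than string concatenation. A minimal sketch (the table, data, and function names are hypothetical):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'x')")

def lookup(username: str):
    # Parameter binding keeps attacker-supplied input out of the SQL grammar,
    # so the classic "' OR '1'='1" payload is matched as a literal string.
    return db.execute(
        "SELECT name FROM users WHERE name = ?", (username,)
    ).fetchall()

print(lookup("alice"))          # one matching row
print(lookup("' OR '1'='1"))    # injection attempt matches nothing
```

The same discipline applies regardless of who wrote the form: a pen test of the collector should confirm that no user input is ever interpolated into query text.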

How the authentication is validated from enterprise PII
Most of the recent attacks have targeted enterprise-held personally identifiable information (PII), and some two-factor authentication methods require the company to sync or migrate that PII to other holders. As the Snapchat breach of 4.6 million users' phone numbers demonstrated, organizations need to protect their PII with the same rigor they apply to passwords and other authentication secrets. Authentication information is, by its nature, PII, and allowing other services, especially authentication services, to use this data is simply asking for trouble.

How the authenticated identity is validated
Many authentication methods were created before resources like cloud and native mobile applications existed. As a result, common authentication mechanisms, such as tokens, were designed to use dated authentication protocols, like RADIUS, for resource-to-data-store validation. This type of authentication usually assumes that there is a proxy between the resource and the user, which is not always possible in the cloud and with mobile apps. In response, enterprises have implemented hackable integration methodologies that introduce vulnerabilities in the credential collection and identity-passing processes for these new resources.

Cloud resources should be secured with cryptographically signed assertions, like SAML or WS-Fed; similar mechanisms, including cryptographically validated web services, can be used for identity passing to native mobile apps. But these mechanisms are only as good as the services that encompass the identity passing. If the authentication system is separate from the identity-passing system, your enterprise needs to ensure that this transfer process is secure each and every time.
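The signed-assertion idea can be sketched without a full SAML stack: the issuer signs the claims, and the resource verifies signature, audience, and expiry before trusting the identity. The key, claim names, and URLs below are hypothetical, and real SAML/WS-Fed uses XML signatures with asymmetric keys rather than a shared HMAC key:

```python
import base64
import hashlib
import hmac
import json
import time

KEY = b"idp-signing-key"  # hypothetical; production would use asymmetric keys

def issue_assertion(subject: str, audience: str, ttl: int = 300) -> str:
    body = json.dumps(
        {"sub": subject, "aud": audience, "exp": int(time.time()) + ttl}
    ).encode()
    sig = hmac.new(KEY, body, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(body).decode()
            + "." + base64.urlsafe_b64encode(sig).decode())

def verify_assertion(token: str, audience: str):
    body_b64, sig_b64 = token.split(".")
    body = base64.urlsafe_b64decode(body_b64)
    expected = hmac.new(KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        return None  # tampered: signature does not verify
    claims = json.loads(body)
    if claims["aud"] != audience or claims["exp"] < time.time():
        return None  # replayed at the wrong resource, or expired
    return claims

token = issue_assertion("alice", "https://app.example.com")
print(verify_assertion(token, "https://app.example.com"))
```

The audience and expiry checks are the point: a cryptographically valid assertion lifted from one resource must still fail at another, which is exactly the "trust reuse" risk the article raises.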

Don't ignore user fatigue
Ideally, all systems that users access (including network, cloud, enterprise, and mobile), should be set up to conduct some sort of identity validation. But if the enterprise forces a high-friction authentication such as SMS, token, or telephony, where the user has to re-enter credentials every session, it's pretty much guaranteed that the user will find a way to circumvent the best mechanisms and/or burden the help desk with repeated account lockouts or two-factor registration requests.

To alleviate this burden, SSO is the best solution. Look at consolidating enterprise resources behind portal access, where a single authentication (preferably a strong one) grants access to multiple resources. Role-based access is ideal, scoped to the resources a particular user should see.
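The role-based portal reduces to a mapping from roles to resources, consulted once after a single strong authentication. A toy sketch (all role, user, and resource names are invented):

```python
# Hypothetical role assignments and per-resource role requirements.
ROLES = {"alice": {"finance", "hr"}, "bob": {"engineering"}}
RESOURCE_ROLES = {"payroll-app": "finance", "git-server": "engineering"}

def visible_resources(user: str) -> set[str]:
    """After one portal login, expose only what the user's roles allow."""
    roles = ROLES.get(user, set())
    return {res for res, needed in RESOURCE_ROLES.items() if needed in roles}

print(visible_resources("alice"))
```

Because the user authenticates once, the friction of a strong method is paid once per session instead of once per resource, which is what keeps users from routing around it.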

Organizations that demand bulletproof security must understand that true security is not in the authentication process alone. It's only when the entire system architecture -- from provisioning and validation through asserting identity -- is addressed from a security perspective that personal information will be truly safe from attack.

Garret Grajek is a CISSP-certified security engineer with more than 20 years of experience in the information security and authentication space. As Chief Technical Officer and Chief Operating Officer for SecureAuth Corp., Garret is responsible for the company's identity ...

Comments
humlik
User Rank: Apprentice
2/13/2014 | 8:42:28 AM
The problem is the authentication technology itself, not the outside architecture
First -- thanks for the article; it's always good to open painful topics. I absolutely agree with Andrew's comment. Garret Grajek excellently identified and formulated several Achilles' heels of authentication, but I disagree with the formulation that "the problem is not in authentication" -- I would formulate the main idea the opposite way: "The problem is the authentication technology itself, not the outside architecture!"

Garret in his article correctly observed one architectural misconception. He uncovered that authentication technology is not composed of just the "identity verification act".

Many of you (no matter whether you are a customer or a developer) may already have noticed that the rest of the Devil's hoof is being silently shifted onto your shoulders.

And that's wrong. The authentication technology must offer a compact and unbreakable solution for the entire life cycle of your "cybernetic" identity -- identity creation, validation/verification, deletion, loss, expiration, and much more, including ID provisioning!

That's why the US is pursuing NSTIC (National Strategy for Trusted Identities in Cyberspace - http://www.nist.gov/nstic/), and why the European Union is pursuing the SSEDIC activity (European eID - http://www.eid-ssedic.eu/).

One interesting development for the EU region: SSEDIC has been completing work on formulating a vision of future eID, drawing on three years of SSEDIC analysis of existing authentication technologies and issues. The main principles of that vision are incorporated into a new strategy called DII -- Distributed Identity Infrastructure. The final text of the recommendation will be released soon.

Welcome to the new Matrix ;)
aaronAshfield
User Rank: Apprentice
2/12/2014 | 12:33:01 AM
Two-Factor Authentication is OBSOLETE
Two-factor authentication is an old concept that applies well to workstations but fails to protect data on mobile devices... A simple device left unattended while open will compromise enterprise data. Presence-based real-time security offered by Secure Access Technologies provides breakthrough security and a breakthrough user experience. www.SecureAccessTechnologies.com
WKash
User Rank: Apprentice
2/10/2014 | 3:37:37 PM
NIST NSTIC
Garret, what's your take on the work being done at NIST and the National Strategy for Trusted Identities in Cyberspace in coming up with a better solution?
Marilyn Cohodas
User Rank: Strategist
2/10/2014 | 8:51:13 AM
Re: Two factor is useful after the data breach
I second your point about user acceptance of 2FA. Speaking as a typical end user, I, for one, would welcome any relief from the tyranny (and ineffectiveness) of passwords.

(PS Thanks for the disclosure of your relationship with the author's company!)
capsaicin
User Rank: Apprentice
2/7/2014 | 5:35:40 PM
Re: Two factor is useful after the data breach
I think that is exactly one of the points the author makes. Any authentication mechanism can be breached, so it is imperative that there are options and the flexibility to move easily to another methodology. We should all accept and expect that over time there will be a breach of any given method, be it the Toopher method or the telephony/SMS/push methods that have gained a lot of traction. When that happens, it needs to be simple to switch to a new methodology very quickly (a click of a mouse?) without having to completely recode or rip and replace technology.
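[Editor's note: the "switch methodologies at the click of a mouse" idea in this comment is essentially a pluggable-verifier design -- interchangeable second-factor checks registered behind one interface, so swapping methods becomes a configuration change rather than a rewrite. A hypothetical sketch; the verifier names and checks are invented stand-ins:]

```python
from typing import Callable

# Registry of interchangeable second-factor verifiers.
VERIFIERS: dict[str, Callable[[str, str], bool]] = {}

def register(name: str):
    def deco(fn: Callable[[str, str], bool]):
        VERIFIERS[name] = fn
        return fn
    return deco

@register("static-pin")
def pin_check(user: str, code: str) -> bool:
    return code == "1234"          # stand-in for a real OTP/push/telephony check

@register("always-deny")
def deny(user: str, code: str) -> bool:
    return False                   # e.g. a method disabled after a breach

ACTIVE = "static-pin"              # switching methods is a config change only

def second_factor(user: str, code: str) -> bool:
    return VERIFIERS[ACTIVE](user, code)

print(second_factor("alice", "1234"))
```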
RobertW152
User Rank: Apprentice
2/7/2014 | 4:39:09 PM
Re: Two factor is useful after the data breach
I agree with you completely, and think that the #1 hurdle for 2FA is acceptance by the end user. I think most organizations gamble that productivity is more important than security until a hack occurs. If you give your end users a technology that balances productivity and security, then organizations will adopt 2FA for themselves and for other user groups, like contractors and customers, before it's too late. Disclosure: I work for SecureAuth, the author of this post's company.
IMjustinkern
User Rank: Strategist
2/7/2014 | 3:59:05 PM
Re: Two factor is useful after the data breach
I know our folks and customers are big fans of Toopher. Lots of people using LastPass or KeePass (more on the former). I think two-factor authentication has a great place as part of a "defense in depth" approach, which starts with the data. That's what the hackers are going for, after all. 
GGRAJEK
User Rank: Apprentice
2/7/2014 | 3:11:32 PM
Re: Duo Security
> the security lies mainly with architecture from a design perspective instead of a single piece of authentication technology

Exactly.   I will be writing more about this.  If you want any more immediate readings, please go to:

http://blog.secureauth.com/cto

 
tstewart2k
User Rank: Apprentice
2/7/2014 | 3:11:13 PM
Re: Two factor is useful after the data breach
If someone stole your phone AND knew where you did business, they would also need your username and password, so that is more like three or four points of failure. That is beside the point, because you (the person responsible for providing access to and protecting the applications) can add factors (1, 2, 3+...) and tune the strength of those factors depending on the value of the data and the usability requirements. By having the strong auth and SSO abstracted away from the guts of the app, there is flexibility to respond to threats and tweak auth methods in real time without touching all the apps every time. That is the beauty of what Garret is talking about. It is a fundamentally better way in every way.
GGRAJEK
User Rank: Apprentice
2/7/2014 | 3:09:32 PM
Re: Great write up
Will do.   The Neiman Marcus, Living Social, Snapchat hacks have made us authentication guys "hip" again - and I thank InformationWeek for giving me the forum to write about what I love!