Security in Knowing: An Interview With Nathaniel Gleicher, Part 1

Nathaniel Gleicher, former Director of Cybersecurity Policy for the Obama White House and ex-senior counsel for the US Dept. of Justice’s computer crimes division, knows something about security.

You know you've been hammered by bad news when you start looking for a silver lining in a 6 million-name identity heist, but here I am. And I want to be clear, for Verizon and (especially) its customers, there really isn't a silver lining. Personal data theft is awful under any circumstances. That said, I still had to cover the breach, and that meant I got to talk to some interesting people.

One of those interesting people happens to have been Nathaniel Gleicher, former Director of Cybersecurity Policy for the Obama White House and ex-senior counsel for the US Dept. of Justice’s computer crimes division. Gleicher is now head of cybersecurity strategy for Illumio. He and I got together on a phone call and started talking about the Verizon brief but quickly expanded into a conversation about the wider issues of security and privacy.

The conversation was so long and so filled with information on how to protect an environment that we're presenting it in two pieces. One of the keys introduced in Part One is the idea that the Secret Service's Protection Detail provides a solid model for defending your network and computers. Tomorrow, in Part Two, we'll talk about what that means and how protection doesn't have to mean unnecessary limitations for your users.

What follows is an edited version of our conversation.

Curt Franklin: I want to start by asking a huge question: Given that this is the latest in a long string of enormous breaches, is there anyone left on the Internet whose data is intact and secure?

Nathaniel Gleicher: I think we used to say that there are two types of companies out there: companies that will admit they've been breached and companies that don't know it yet. I'm inclined to say you can say the same thing about individuals: there are individuals who've had some of their data stolen and there are those who don't realize it yet. The funny thing about this breach is that it was 6 million users. But the honest truth is, that's large, and the previous breach that happened eight months ago was eight million voters. The numbers are so large now that it's almost hard to keep track.

CF: When 6 million is getting into the category of "run of the mill" breach, it's possible we've reached some sort of massive tipping point.

NG: The common thread is that all of them [involve] information exposed through misconfiguration and user error. It's a reminder that no matter how disciplined your security team is, if your organization has to solve security problems manually, someone is going to make a mistake. And if your security is only one layer deep, that mistake is eventually going to expose a whole bunch of data.

CF: Is this validation of the people who have doubted the security of cloud services? Is there something inherent in the configuration of cloud services that makes them either more difficult to configure in a secure way or simply more susceptible to misconfiguration?

NG: The interesting thing about this particular example is that we're talking about S3 [Amazon Web Services' storage service] specifically. By default, S3 buckets are configured not to be publicly addressable and publicly accessible. Someone at the organization went in at night and actually changed the permissions for this S3 bucket to make it publicly addressable. So I think the story here is a little broader than S3 in some sense.

The story is that any manual security control is going to be at risk. These are both examples of cloud breaches, but I've seen statistics that suggest that something upwards of 97% or even 99% of all breaches involve a misconfigured firewall at some point. I don't think the lesson is about cloud in particular. Cloud can certainly be secure. The protocols and tools that you use may be different from the data center, but it can certainly be secure.

The problem is that our environments are so complex, so hybrid and so dynamic that expecting humans to keep up with this and handle the problem on their own is just impractical. What you really need is tools that make it so you don't have to solve these problems at a retail level and you don't have to make specific, one-off decisions about security protocols.
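[Editor's note: To make that point concrete, here is a minimal, illustrative sketch, not something Gleicher provided, of what taking S3 permissions out of the realm of one-off manual decisions can look like. It assumes Python with the boto3 library and AWS credentials allowed to list buckets and read and write bucket ACLs; it flags any bucket whose ACL grants access to all users and resets it to private.]

```python
# Illustrative sketch only (editor's example, not from the interview).
# Audit every S3 bucket in the account for a publicly accessible ACL and
# reset offenders to the default "private" canned ACL, so exposure doesn't
# depend on a human noticing a one-off permission change.
import boto3

s3 = boto3.client("s3")

# ACL grantee URIs that make a bucket readable by everyone or by any AWS user.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    is_public = any(
        grant.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES
        for grant in acl["Grants"]
    )
    if is_public:
        print(f"{name}: public ACL found, resetting to private")
        s3.put_bucket_acl(Bucket=name, ACL="private")
```

Run on a schedule, a check like this turns a quiet after-hours permission change into something that gets detected and reverted automatically.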


CF: Well it sounds strangely as though you're leading to one of those conclusions that we hear at Gartner and other conferences where the takeaway line is that machine learning and system automation are the tools that are going to help save us from ourselves. Is that in fact where you're going or is there another lesson that you want to make sure we learn?

NG: I think automation and orchestration are very important components. One of the really interesting things about AI and machine learning, and whatever other flavor you want to use to describe intelligence systems, is that there's a lot of buzz around them; they've become very buzzwordy. The thing that I think is interesting, though, is that there's a heavy focus on using intelligence systems to do aberration detection, pattern matching, solutions like that. And what's interesting about it to me is that humans are actually pretty good at aberration detection.

We were built as aberration-detecting machines. If you're living out in the wilderness and you see something different from anything you've seen before, that could be a risk. So we're really good at seeing those things. What we're not terribly good at is dealing with these incredibly complex and dynamic environments. We've created these incredibly hybrid and distributed networks and computer systems. AI is a really important tool, or, I should say, not general-purpose [AI] but specific AI, machine learning and intelligence systems. But what would be really powerful, and what I would like to see more of, is for us to turn AI on the problem that we don't solve terribly well.

Turn AI on simplifying the environment and choices we have to make. Turn AI on making sense of these complex environments and let humans do what humans do really well, which is aberration detection if you give them an environment that they can understand and work in.

CF: You've talked about letting the machines do what they do well so that humans can do what we do well, and I certainly see that. But at a deeper level is the ultimate solution to design simpler systems? Is the answer to return to a certain Zen-like simplicity in the systems we design so they get back to some sort of human scale and we're all tapping out character-based things on a simple character-based computer?

NG: I suppose from a security perspective it might be nice if we could end up there. Practically, no, I don't think that's the answer, and realistically, I don't think it would ever happen anyway, in part because of why we're building more complex systems in the first place. We're doing it for very specific business reasons.

These systems are powerful and they let us do things we could never do before, and the drive to leverage that complexity isn't going to go away. One of my convictions is that a lot of the problems we face in cybersecurity actually have answers that were worked out in the context of physical security. We tend to tell ourselves that the network is so dynamic and so completely different that there's nothing that we can learn. But there's actually quite a bit.

If you look at some of these lessons, it's striking the way the challenges we're facing pop into relief. So I'll give you an example: think about the way virtually any effective physical security team operates. I like to use the Secret Service because I think the way they protect the president is a surprisingly good model for the way we protect our data centers, because their goal is risk management.

The president is always exposed, in the same way that the valuable tools we're defending are always exposed, and you can see that in the way they work and the way they invest in security. If you draw a pyramid and break it into four horizontal slices, you put "understand" at the bottom as the biggest slice, then "control" above it, "detect" above that, and then, in a tiny slice all the way at the top, "respond." This is how physical security teams invest in security, and it's drawn this way very intentionally, because the bottom of the pyramid is where they invest the most.

If you think about protecting the president, we tend to think about agents standing in front of the president with guns. But actually most of the president's security starts months before he ever shows up. If the president is going to speak somewhere, the Secret Service shows up six months in advance and invests heavily in understanding that environment and in exerting control, transforming the environment to make it advantageous for them. Then they put detection in place, and then they respond if they detect a change in the environment. But they invest first to understand it, because you can't control an environment you don't understand, you can't detect change in an environment you don't control, and you can't respond effectively if you don't have all those pieces.
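[Editor's note: The dependency Gleicher describes, where each layer of the pyramid is only useful if the layers beneath it are in place, can be sketched as a simple ordered check. This is an illustrative example of the model, not code from Illumio or the interview.]

```python
# Illustrative only: the understand -> control -> detect -> respond pyramid
# as an ordered checklist in which each layer depends on the ones beneath it.
LAYERS = ["understand", "control", "detect", "respond"]  # bottom to top

def usable_layers(in_place: set) -> list:
    """Return the layers an organization can actually rely on.

    A layer only counts if every layer beneath it is also in place;
    e.g., detection without control and understanding gives false comfort.
    """
    usable = []
    for layer in LAYERS:
        if layer not in in_place:
            break
        usable.append(layer)
    return usable

# A team that bought detection tooling but never mapped its environment:
print(usable_layers({"detect"}))                 # []
# A team that has understood and controlled its environment:
print(usable_layers({"understand", "control"}))  # ['understand', 'control']
```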


— Curtis Franklin is the editor of SecurityNow.com. Follow him on Twitter @kg4gwa.


About the Author(s)

Curtis Franklin, Principal Analyst, Omdia

Curtis Franklin Jr. is Principal Analyst at Omdia, focusing on enterprise security management. Previously, he was senior editor of Dark Reading, editor of Light Reading's Security Now, and executive editor, technology, at InformationWeek, where he was also executive producer of InformationWeek's online radio and podcast episodes.

Curtis has been writing about technologies and products in computing and networking since the early 1980s. He has been on staff and contributed to technology-industry publications including BYTE, ComputerWorld, CEO, Enterprise Efficiency, ChannelWeb, Network Computing, InfoWorld, PCWorld, Dark Reading, and ITWorld.com on subjects ranging from mobile enterprise computing to enterprise security and wireless networking.

Curtis is the author of thousands of articles, the co-author of five books, and has been a frequent speaker at computer and networking industry conferences across North America and Europe. His most recent books, Cloud Computing: Technologies and Strategies of the Ubiquitous Data Center, and Securing the Cloud: Security Strategies for the Ubiquitous Data Center, with co-author Brian Chee, are published by Taylor and Francis.

When he's not writing, Curtis is a painter, photographer, cook, and multi-instrumentalist musician. He is active in running, amateur radio (KG4GWA), the MakerFX maker space in Orlando, FL, and is a certified Florida Master Naturalist.

