Getting Out Of PRISM
What we can learn from national security monitoring
Call this the bandwagon blog post. There has been more discussion around the U.S. government monitoring revelations than probably anybody wants to read about. Right wing, left wing, not even on a wing but already bailed out in a parachute -- everyone has an opinion.
If there's one thing I've learned during my career, it's that institutions are never monolithic. If you're referring to anything in the singular -- "the government wants to do this," or "Company X hates puppies" -- then you don't know enough about it. If you've ever been a manager, you know how hard it is to get even one other person to do things just the way you intended. Multiply that by thousands of employees, and it's pretty clear that nobody's marching in perfect lockstep. (By the way, this is also why grand conspiracy theories are bunk: Nobody's that good.)
So entities aren't monolithic, and there is always something going on behind the scenes that you don't know about -- and that might change your opinion on what you do know. For anything that sounds wrong, there is generally a reason behind it that made good sense at the time. This is why I'm not going to opine about the topic of national surveillance: I don't have enough background information (and I probably never will).
But we can draw lessons from this controversy for our own topic: enterprise security monitoring. I've written before about the privacy implications and logistical complexity of making your monitoring fit your policy. It's not just that you have to comply with data privacy laws in different jurisdictions. It's a matter of setting the right tone within your organization for the monitoring you need to do.
Can you justify each type of monitoring you perform and its granularity? Or are you just collecting everything because it's easier to sort it out later? (Also: Big Data!)
Do you have explicit notifications in place for this monitoring? For example, an employee might have to sign an acknowledgment form upon initial hire, which explains what types of monitoring are being performed on the systems, networks, and facilities, including any traffic to sites for personal use. Or you might have a sign next to the guest WiFi in the conference room that reads, "We reserve the right to monitor all traffic on our guest networks, and may log, alter, or block any traffic that we determine to be a security risk."
Do your employees know that you can dig up every page in their browsing history? Maybe they know it in theory, but it doesn't hit home until they're sitting in HR, faced with a PDF report of their Web usage. Do they know that you may be monitoring at a general level, but reserve the right to monitor an individual more closely at any time? Do they know who has access to that monitoring data and how often they look at it, or whether it's shared with anyone else?
This is a conversation (perhaps one-sided, but a conversation nevertheless) that every organization should have -- not just about what's technically feasible to monitor; not just about what monitoring is required or prohibited by regulations; but what monitoring is appropriate. And the policies should be transparent to employees, partners, customers, and anyone else who uses the systems.
Transparency is what was implied by the name PRISM, and transparency is what we didn't have. Now's the time to talk to your board about PRISM.
Wendy Nather is Research Director of the Enterprise Security Practice at the independent analyst firm 451 Research. You can find her on Twitter as @451wendy.