"Garbage in, garbage out" is a maxim nearly as old as computers themselves. As automation becomes a greater factor in security, is it possible that we need to add "garbage in, security out" to the list of variants?
From the first recorded instance in 1963, garbage in, garbage out (or GIGO) has been a critical reminder that processing power is only as useful as the data that goes into the process. The best algorithms and programs will return useless information if they're fed bad data.
Bad data and the results that follow are hazardous enough when humans read the information and perform additional analysis before acting; humans can (though they often don't) serve as quality control agents for the process before things get wildly out of hand. In an automated system, though, the human QC agent is out of the loop, and bad data can lead very quickly to bad action.
When it comes to security, automation is seen by many as the only rational path to meet future needs. The reasons are fairly straightforward: the number of attacks is going up as the volume of data in each attack also goes up. Add to that the rapid environmental changes that flow from virtualization, cloud computing and hybrid architectures, and you arrive at a situation where humans are simply too slow to keep up with all the activity.
The problem with relying on automation for enterprise security is that it means relying on massive amounts of data and complex algorithms to protect networks, compute assets and data. We rely on similar data sets and algorithms for many enterprise functions, but there is reason to be cautious when placing safety, economic health and corporate reputation in the hands of automated systems.
About a week ago a mathematician named Cathy O'Neil had a TED talk published. O'Neil is a frequent columnist for news organizations like Bloomberg and she is known for having a skeptical view of the way in which many organizations rely on data (especially big data) and algorithms. The title of her new book, Weapons of Math Destruction, says a lot about her attitude toward these tools.
Whether you agree with O'Neil or not, one of her major points is indisputable: If you're going to put your trust in an algorithm, you should fully understand the algorithm and thoroughly test the software that implements it. Next, you must ensure that the data feeding the algorithm is meaningful and accurate. This is especially important when using big data as the foundation of security operations, because it's entirely too easy to collect data that represents noise more than information.
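As a concrete illustration of that point, here is a minimal sketch of what "ensure the data is meaningful and accurate" can look like in practice: sanity-checking security telemetry before an automated system is allowed to act on it, and quarantining anything suspect for human review. The field names, event types and thresholds here are hypothetical assumptions for the example, not any real product's schema.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical data-quality gate for an automated security pipeline.
# Thresholds, field names and event types are illustrative assumptions.
MAX_AGE = timedelta(minutes=5)
REQUIRED_FIELDS = {"source_ip", "event_type", "timestamp"}
KNOWN_EVENT_TYPES = {"login_failure", "port_scan", "malware_alert"}

def validate_event(event: dict, now: datetime) -> list:
    """Return a list of data-quality problems; an empty list means usable."""
    problems = []
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        problems.append("missing fields: %s" % sorted(missing))
        return problems  # can't check further without the required fields
    age = now - event["timestamp"]
    if age > MAX_AGE:
        problems.append("stale: older than freshness window")
    if age < timedelta(0):
        problems.append("timestamp in the future (clock skew or forgery)")
    if event["event_type"] not in KNOWN_EVENT_TYPES:
        problems.append("unknown event_type: %r" % event["event_type"])
    return problems

def triage(events, now):
    """Split events into actionable vs. quarantined-for-human-review."""
    actionable, quarantined = [], []
    for e in events:
        (quarantined if validate_event(e, now) else actionable).append(e)
    return actionable, quarantined

if __name__ == "__main__":
    now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
    good = {"source_ip": "10.0.0.1", "event_type": "port_scan",
            "timestamp": now - timedelta(minutes=1)}
    stale = {"source_ip": "10.0.0.2", "event_type": "port_scan",
             "timestamp": now - timedelta(hours=2)}
    actionable, quarantined = triage([good, stale], now)
    print(len(actionable), len(quarantined))
```

The design choice worth noting is that bad records are quarantined rather than silently dropped or acted on: the automated path only ever sees data that passed explicit checks, and a human stays in the loop for everything else.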
There's no reason to completely avoid automation, but like any new application of technology, it must be implemented with caution and care -- qualities that may or may not be abundant when cyber attacks are occurring all around you. Be careful out there -- whether fully automated or not.