Alan Turing is famous for several reasons, one of which is that he cracked the Nazis' seemingly unbreakable Enigma machine code during World War II. Later in life, Turing also devised what would become known as the Turing test for determining whether a computer was "intelligent" — what we would now call artificial intelligence (AI). Turing believed that if a person couldn't tell the difference between a computer and a human in a conversation, then that computer was displaying AI.
AI and information security have been intertwined practically since the birth of the modern computer in the mid-20th century. For today's enterprises, the relationship can generally be broken down into three categories: incident detection, incident response, and situational awareness — i.e., helping a business understand its vulnerabilities before an incident occurs. IT infrastructure has grown so complex since Turing's era that it can be months before personnel notice an intrusion.
Current iterations of machine learning have yielded promising results. Chronicle, recently launched by Google's parent company, Alphabet, allows companies to tap its enormous processing power and advanced machine learning capabilities to scan IT infrastructure for unauthorized activity. AI², a platform developed at MIT's Computer Science and Artificial Intelligence Laboratory, quickly learns how to differentiate true attacks from merely unusual activity, alleviating a vexing problem for IT security teams: false positives. There are numerous other examples of AI-based solutions, such as Palo Alto Networks' Magnifier, which uses machine learning to automate incident response, exploiting another strength of AI: speed.
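The false-positive triage these systems automate can be sketched as a simple two-stage loop: flag statistically unusual events, then learn from analyst feedback which recurring patterns are actually benign. The sketch below is a minimal illustration, not any vendor's actual method; the event names, rates, and threshold values are all hypothetical, and production systems use far richer features and models.

```python
from statistics import mean, stdev

def anomaly_score(value, baseline):
    """Z-score of one observation against a learned baseline of normal activity."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(value - mu) / sigma if sigma else 0.0

class FeedbackFilter:
    """Suppresses alert patterns an analyst has repeatedly marked benign."""
    def __init__(self, benign_threshold=2):
        self.benign_counts = {}  # pattern -> times an analyst labeled it benign
        self.benign_threshold = benign_threshold

    def record_feedback(self, pattern, is_benign):
        if is_benign:
            self.benign_counts[pattern] = self.benign_counts.get(pattern, 0) + 1

    def should_alert(self, pattern):
        # Stop alerting once the pattern has been dismissed often enough.
        return self.benign_counts.get(pattern, 0) < self.benign_threshold

# Baseline: typical logins-per-hour for a host (hypothetical data).
baseline = [10, 12, 9, 11, 10, 13, 12]

filt = FeedbackFilter()
events = [("backup-job", 40), ("backup-job", 42),
          ("backup-job", 41), ("ssh-burst", 55)]

alerts = []
for pattern, rate in events:
    if anomaly_score(rate, baseline) > 3:  # statistically unusual activity
        if filt.should_alert(pattern):
            alerts.append(pattern)
        # Analyst triage: backup jobs turn out to be benign, ssh bursts do not.
        filt.record_feedback(pattern, is_benign=(pattern == "backup-job"))

# The third backup-job spike is suppressed; the ssh burst still alerts.
print(alerts)  # ['backup-job', 'backup-job', 'ssh-burst']
```

The point of the feedback stage is that "unusual" and "malicious" are different questions: pure anomaly detection would alert on every backup-job spike forever, while the analyst-in-the-loop filter quickly learns to mute that recurring false positive without muting genuinely new activity.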
These advances arrive at an opportune moment because the risks from cybercrime are rapidly growing; the worldwide cost is estimated at about $600 billion annually. The average cost of a data breach is estimated at $1.3 million for enterprises and $117,000 for small businesses, and companies are taking note. According to ESG research, 12% of enterprise organizations have already deployed AI-based security analytics extensively, and another 27% have deployed them on a limited basis.
Moreover, cybersecurity in the years ahead will be increasingly challenging. Enterprise networks and computers are relatively static and well-defined at present, but securing information amid the Internet of Things, in which almost every device will be programmable and therefore hackable, is going to be far harder. Soon we will have to safeguard not just unseen servers but also our cars and household devices.
Unfortunately, AI has become available to hackers as well. Dark Web developments to date merit serious discussion, such as machine learning that gets better and better at phishing — tricking people into opening impostor messages in order to compromise them. Further down the road, machines could take impersonation one step further by learning how to fabricate convincing fake images. Experts also worry that AI-based hacking programs might reroute or even crash self-piloting vehicles, such as delivery drones.
I suspect that in the future, users on the front end will be blissfully unaware of the battles raging behind the scenes between good and bad learning machines, with each side continually innovating to outsmart the other. Already, the synthesis of AI and cybersecurity has yielded fascinating results, and there is no doubt we are only at the beginning. I am reminded of a quote from Turing: "We can only see a short distance ahead, but we can see plenty there that needs to be done."