Rick Grinnell

How AI Can Help Prevent Data Breaches in 2018 and Beyond

Artificial intelligence startups are tackling four key areas that will help companies avoid becoming the next Equifax.

Equifax's stunning data breach is a major headache for some 145 million Americans who could face identity theft for the rest of their lives. The breach has forever tarnished Equifax's business and brand, and it has prompted the company to replace its CEO, CIO, and CSO. However, as we look at the coming year and as new technologies continue to evolve, it's clear that artificial intelligence (AI) can have a powerful role in helping prevent future data breaches.

If we look at how the Equifax breach occurred, there's a lot to learn, and even cause for optimism. As we know, it all came down to patching. Patching vulnerable, out-of-date software should be straightforward, but in practice it rarely is. Although Equifax could have prevented this disaster, it's hardly the first company to neglect a critical software patch. A 2016 survey found that 80% of companies that suffered a breach could have avoided it by applying an available update. So why don't organizations apply patches?

Sometimes the delay is simply due to inadequate resources or the lack of a solid internal process for immediately identifying vulnerable software, testing the new patch, and deploying the fix. Often, firms delay so they can test a patch before applying it, to make sure they aren't fixing one problem while creating others. Sometimes companies don't even realize they are running vulnerable software: because of the complexity of sprawling applications, popular vulnerability scanning products miss important pieces of the puzzle, leaving holes for attackers to exploit. It appears that a combination of these factors played a role in Equifax's situation. (Here's another great read on the barriers to patching for reference.)

The bright side? AI is driving exciting advancements in information security. Security professionals must plug into new technologies rather than rely only on old-school solutions such as traditional antivirus software, because those alone won't cut it. AI will fuel next-generation solutions, whether they're focused on endpoints, analytics, or behavioral analysis. With the amount and velocity of data, and the sheer number of connections to monitor and manage, accelerating at an exponential rate, AI will be a critical component in preventing breaches like the one at Equifax.

How AI Could Help
Problems caused largely by human error lend themselves especially well to AI. Here are four areas that AI startups are investigating, and in some cases are already developing in early form:

1. Code development: Whether the software comes from open source communities or from companies like Apple or Microsoft, one could ask why these vulnerabilities aren't found before the code goes into production. Why would the Apache Foundation distribute software with an obvious vulnerability? The reason is that when you're talking about millions of lines of code and lots of new functionality, things sometimes get lost in the shuffle. There probably was rigorous testing in Equifax's case, but people tend to look for things they've seen before, and the existing tools that check for such vulnerabilities are hard-wired by humans. AI could surface flaws a human wouldn't think to look for.
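To make that limitation concrete, here is a toy sketch of how today's hard-wired checkers operate: they match only the patterns a human wrote down (the single rule below, flagging a naive SQL-injection smell, is invented for illustration), which is exactly the gap a learning-based approach could close.

```python
import re

# Illustrative, hand-written rule set: a real static analyzer ships
# hundreds of these, but every one was anticipated by a person.
RULES = {
    "sql-injection": re.compile(r"execute\(.*%s.*%"),
}

def scan(source):
    """Return the names of rules that match the given source snippet."""
    return [name for name, pattern in RULES.items() if pattern.search(source)]

snippet = 'cursor.execute("SELECT * FROM users WHERE id = %s" % user_id)'
print(scan(snippet))  # ['sql-injection']
```

A vulnerability phrased even slightly differently from the rule slips straight through, which is why pattern libraries like this always lag behind attackers.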

2. In-market testing: Once software is released to the market, there are products and service providers that find vulnerabilities in public-facing applications. Clearly, someone caught Equifax's problem, but it took a long time and the damage was already done. AI would make testing and vulnerability-scanning tools more useful and close the gap between putting something into production that's unsafe and knowing it's unsafe.
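As a rough illustration of what such scanning automates, consider this minimal sketch that flags deployed components whose versions fall inside a known-vulnerable range. The component names and version ranges here are illustrative; an AI-assisted scanner would infer its inventory from the running environment rather than rely on a hand-maintained list.

```python
# Illustrative vulnerability database: component -> (first_bad, first_fixed).
KNOWN_VULNERABLE = {
    "struts": ((2, 3, 5), (2, 3, 32)),
}

def parse_version(v):
    """Turn a dotted version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(component, version):
    """True if the deployed version falls in a known-vulnerable range."""
    rng = KNOWN_VULNERABLE.get(component)
    if rng is None:
        return False
    first_bad, first_fixed = rng
    return first_bad <= parse_version(version) < first_fixed

deployed = {"struts": "2.3.30", "tomcat": "8.5.50"}  # hypothetical inventory
flagged = [c for c, v in deployed.items() if is_vulnerable(c, v)]
print(flagged)  # ['struts']
```

The hard part in practice is not the comparison but knowing what is actually deployed; that discovery problem is where machine learning has the most to offer.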

3. Checking the patches: One reason organizations (and people) are reluctant to download patches is that patches often render old apps inoperable or cause them to lose functionality. Wouldn't it be great if there were intelligence to look at the code and provide higher confidence that downloading the patch wasn't going to break your application?
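A crude approximation of that idea, sketched below with invented function names, is to diff the public surface a library exposes before and after a patch. AI-driven tooling would go much further and reason about behavior, but even a name-level diff catches the most obvious breakage.

```python
def public_api(symbols):
    """Keep only the names a library publicly exposes (no leading underscore)."""
    return {s for s in symbols if not s.startswith("_")}

# Hypothetical symbol tables captured before and after applying a patch.
before = {"connect", "query", "close", "_pool"}
after = {"connect", "execute", "close", "_pool"}  # patch renamed query -> execute

removed = public_api(before) - public_api(after)
added = public_api(after) - public_api(before)

if removed:
    print(f"patch removes {sorted(removed)}; callers may break")
```

Anything in `removed` is a red flag worth testing before the patch reaches production; a smarter system would also check argument lists and return types.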

4. Benchmarking: Being a CISO isn't a very attractive proposition if you're likely to be fired if and when a major breach occurs. Because no one can prevent attacks 100% of the time, how can you hold security officers accountable in a fair way? One idea is to use AI to look at your industry category (such as banking or retail) and examine the firewalls, endpoint protection, and other security products you're using and how they're configured in your overall security stack. This list of complex configurations yields a set of inter- and intra-company metrics. With AI monitoring and analyzing this data, you can see how you stack up against your peer group. Even if a security incident did occur, you could show your board of directors that you had gone above and beyond what your peers are doing by every other measure, perhaps saving your job.
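The peer-comparison idea can be sketched in a few lines. The control names and coverage figures below are invented for illustration; a real system would derive them from actual configuration data across the security stack.

```python
from statistics import mean, stdev

# Hypothetical peer group: fraction of recommended controls each peer deploys.
peer_coverage = [0.62, 0.71, 0.58, 0.80, 0.66]

# Hypothetical inventory of our own controls.
our_controls = {"firewall": True, "edr": True, "mfa": True, "waf": False, "dlp": True}

our_coverage = sum(our_controls.values()) / len(our_controls)
mu, sigma = mean(peer_coverage), stdev(peer_coverage)
z = (our_coverage - mu) / sigma  # standard deviations above/below the peer mean

print(f"coverage={our_coverage:.2f}, peer mean={mu:.2f}, z-score={z:+.2f}")
```

A positive z-score is the kind of concrete, defensible number a CISO could bring to the board after an incident: evidence of a posture above the industry norm.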

There are other applications, too. AI could be used to find a personalized, impossible-to-ignore way of reminding you to install a patch, or to pinpoint every application instance that needs to be fixed. The bottom line is that AI is a powerful tool at our disposal for avoiding becoming the next big breach target.



Rick Grinnell is Managing Partner at Glasswing Ventures, an early-stage venture capital firm dedicated to building the next generation of AI technology companies that connect consumers and enterprises and secure the ecosystem.