AI & 'Fuzzing' Combination Empowers APT

When the bad guys add AI and 'fuzzing' to their armory, the advanced persistent threat gets, erm, even more threatening.

Zero-day vulnerabilities -- flaws unknown to those who could mitigate them -- have lately shown an increased presence in the security landscape.

Apple just patched two of them in an iOS update. Adobe's products have a long history of these kinds of vulnerabilities, as does Microsoft Windows.

These vulnerabilities come to light when a threat actor mounts an exploit that takes advantage of them. That's the point at which they show up on the security radar.

The methods by which a threat actor discovers these vulnerabilities can be opaque. Reverse engineering has proven to be one of the most productive current techniques for cybercriminals looking for holes in the security fence.

But this way of doing things is labor-intensive, so it seems likely that criminals will seek more productive techniques to give them exploitable information.

The use of artificial intelligence and machine learning could drastically increase the speed of finding chinks in a program's security armor. A kind of analysis called "fuzzing" is already used by threat actors to find exploitable situations in code. It is not as widely used as other techniques, since it requires expertise to be effective.

But what will happen when the process is augmented by AI tools?

Fuzzing uses a software tool to automatically inject invalid, unexpected, or semi-random data into the user-facing frontend of a program. The program is then monitored for unexpected outcomes such as crashes, failing code assertions and memory leaks.

Fuzzing correlates unexpected and exploitable outcomes with certain kinds of inputs. That is exactly what someone looking to bust open software wants.
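The core loop is simple enough to sketch. The minimal, hypothetical Python example below uses `parse_record` as a stand-in for the program under test (with a planted flaw) and a `fuzz` routine that injects semi-random strings and records which inputs crash it. The names and the target are illustrative assumptions, not any particular tool.

```python
import random
import string

def parse_record(data: str) -> dict:
    # Hypothetical target: a toy parser with a planted flaw.
    # It assumes every record contains a ':' separator.
    key, value = data.split(":", 1)  # raises ValueError when ':' is absent
    return {key: value}

def fuzz(target, rounds=1000, seed=0):
    """Feed semi-random strings to `target`; record inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        length = rng.randint(0, 12)
        data = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            target(data)
        except Exception as exc:
            # Unexpected outcome: correlate the input with the failure.
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(parse_record)
```

Every entry in `crashes` pairs an input with the failure it caused, which is exactly the input-to-outcome correlation an attacker then inspects for exploitability.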

Now, if you put AI into the process -- especially an AI that can learn from its mistakes -- you make fuzzing more efficient. It will not need as much human input or oversight as current techniques, which can make threat actors more dangerous as they become more efficient at finding weaknesses.
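One way to picture "learning from its mistakes" is coverage-guided fuzzing, where the fuzzer keeps any input that reaches a new code path and mutates it further, instead of generating inputs blindly. The sketch below assumes a toy instrumented target (`check_header`, with a planted bug behind a three-byte magic value); the function names and the instrumentation scheme are illustrative, not a real fuzzer's API.

```python
import random

def check_header(data: bytes, trace: set) -> None:
    # Hypothetical target, instrumented to report which branches it reaches.
    if len(data) > 0 and data[0] == ord("F"):
        trace.add("magic-1")
        if len(data) > 1 and data[1] == ord("Z"):
            trace.add("magic-2")
            if len(data) > 2 and data[2] == ord("!"):
                trace.add("magic-3")
                raise RuntimeError("planted bug reached")

def mutate(data: bytes, rng: random.Random) -> bytes:
    # Replace, insert, or delete a single byte.
    data = bytearray(data or b"\x00")
    pos = rng.randrange(len(data))
    op = rng.random()
    if op < 0.6:
        data[pos] = rng.randrange(256)
    elif op < 0.8:
        data.insert(pos, rng.randrange(256))
    elif len(data) > 1:
        del data[pos]
    return bytes(data)

def guided_fuzz(target, rounds=5000, seed=1):
    """Mutate corpus entries; keep any input that yields new coverage."""
    rng = random.Random(seed)
    corpus = [b"AAAA"]
    seen = set()       # all branches reached so far
    crashes = []
    for _ in range(rounds):
        candidate = mutate(rng.choice(corpus), rng)
        trace = set()
        try:
            target(candidate, trace)
        except Exception:
            crashes.append(candidate)
        if not trace <= seen:          # new coverage: keep for further mutation
            seen |= trace
            corpus.append(candidate)
    return crashes, seen

crashes, seen = guided_fuzz(check_header)
```

Real-world tools such as AFL and libFuzzer apply this same feedback principle with compiler-level instrumentation; an AI-augmented fuzzer would replace the blind `mutate` step with a model that predicts which mutations are most likely to reach new code.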

There are counter-efforts underway that use fuzzing to proactively find program faults before they can be exploited by others as zero-days. Microsoft has already begun such work from a defensive, rather than attack, position.

Its latest research on neural fuzzing is showing promising results. The system is able to learn from previously performed fuzzing runs, which is a hallmark of AI techniques.

The AI prototype found vulnerabilities that un-enhanced fuzzing missed, so the approach appears to be practical and effective.

The enterprise will need these kinds of enhanced fuzzing tools to counteract those that attackers will develop. By understanding what these tools can reveal about software, a company can proactively defend against their use and hopefully avoid vulnerabilities that show up as zero-day exploits.

— Larry Loeb has written for many of the last century's major "dead tree" computer magazines, having been, among other things, a consulting editor for BYTE magazine and senior editor for the launch of WebWeek.