Organizations can start today to protect against 2019's threats. Look out for crooks using AI "fuzzing" techniques, machine learning, and swarms.

Derek Manky, Chief Security Strategist & VP Global Threat Intelligence, FortiGuard Labs

December 4, 2018

5 Min Read

To manage increasingly distributed and complex networks, organizations are adopting artificial intelligence (AI) and machine learning to automate tedious and time-consuming activities that normally require a high degree of human supervision and intervention. To address this transformation of the security ecosystem, the cybercriminal community has now clearly begun moving in the same direction.

My threat predictions, taken from Fortinet's Threat Landscape Predictions for 2019, reveal five emerging malicious trends:

1. AI Fuzzing: Because they target unknown threat vectors, zero-day exploits are an especially effective cybercrime tactic. Fortunately, they are also rare because of the time and expertise adversaries need to discover and exploit the underlying vulnerabilities. The process for doing so involves a technique known as fuzzing.

Fuzzing is a sophisticated technique generally used in lab environments by professional threat researchers to discover vulnerabilities in hardware and software interfaces and applications. Researchers do this by injecting invalid, unexpected, or semirandom data into an interface or program and then monitoring for events such as crashes, undocumented jumps to debug routines, failing code assertions, and potential memory leaks. Using fuzzing to discover zero-day vulnerabilities has so far been beyond the reach of most cybercriminals, but as AI and machine learning models are applied to the process, it will become far more efficient and effective. As a result, zero-day exploits will become less rare, which in turn will have a significant impact on securing network devices and systems.
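To make the inject-and-monitor loop concrete, here is a minimal, hypothetical sketch of mutation-based fuzzing in Python. The parse_record target, the mutation strategy, and the iteration count are illustrative assumptions, not anything drawn from FortiGuard's research.

```python
import random

def parse_record(data: bytes) -> dict:
    """Hypothetical target: a simple parser whose robustness we want to probe."""
    header, _, body = data.partition(b":")
    return {"type": header.decode("ascii"), "length": len(body)}

def mutate(seed: bytes) -> bytes:
    """Produce a semirandom variant of a valid input: flip bytes, truncate, or append junk."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 8)):
        roll = random.random()
        if roll < 0.4 and data:
            data[random.randrange(len(data))] = random.randrange(256)   # flip one byte
        elif roll < 0.7 and data:
            del data[random.randrange(len(data)):]                      # truncate the record
        else:
            data += bytes(random.choices(range(256), k=random.randint(1, 32)))  # append junk
    return bytes(data)

if __name__ == "__main__":
    seed = b"TYPE:payload"
    for _ in range(10_000):
        sample = mutate(seed)
        try:
            parse_record(sample)
        except Exception as exc:  # an unexpected exception or crash is a potential finding
            print(f"input {sample!r} triggered {exc!r}")
```

Production fuzzers add coverage feedback, crash triage, and far smarter input generation; the point of the sketch is only the basic loop of feeding malformed input and watching for failures that the author describes.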

2. Continual Zero-Days: While a large library of known exploits exists in the wild, our cyber adversaries are actually exploiting less than 6% of them. To be effective, however, security tools need to watch for all of them because there is no way to know which 6% attackers will use. And as the volume of potential threats grows, performance requirements will escalate along with the scope of the potential exploit landscape. To keep up, security tools will need to become increasingly intelligent about how and what they look for.

While there are some frameworks like zero-trust environments that may have a chance at defending against this reality, it is fair to say that most people are not prepared for the next generation of threats on the horizon — especially those that AI-based fuzzing techniques will soon begin to uncover. Traditional security approaches, such as patching or monitoring for known attacks, will become nearly obsolete as there will be little way to anticipate which aspect of a device can be potentially exploited. In an environment with the possibility of endless and highly commoditized zero-day attacks, even tools such as sandboxing, which were designed to detect unknown threats, would be quickly overwhelmed.

3. Swarms-as-a-Service: Advances in swarm-based intelligence technology are bringing us closer to a reality of swarm-based botnets that can operate collaboratively and autonomously to overwhelm existing defenses. These swarm networks will not only raise the bar in terms of the technologies needed to defend organizations, but, like zero-day mining, they will also have an impact on the underlying criminal business model, allowing criminals to expand their opportunities.

Currently, the criminal ecosystem is very people-driven. Professional hackers build custom exploits for a fee, and even new advances such as ransomware-as-a-service require black-hat engineers to stand up different resources. But with autonomous, self-learning swarms-as-a-service, the amount of direct interaction between a hacker-customer and a black-hat entrepreneur will drop dramatically, reducing risk while increasing profitability.

4. A la Carte Swarms: Dividing a swarm into multiple tasks to achieve a desired outcome is very similar to virtualization. In a virtualized network, resources can spin up or spin down virtual machines as needed to address particular issues such as bandwidth. Likewise, resources in a swarm network could be allocated or reallocated to address specific challenges encountered in an attack chain. In a swarm-as-a-service environment, criminal entrepreneurs should be able to preprogram a swarm with a range of analysis tools and exploits, from compromise strategies to evasion and surreptitious data exfiltration, all part of a criminal a la carte menu. And because swarms are self-learning by design, they will require almost no interaction with or feedback from their swarm-master, nor will they need to contact a command-and-control center, which is the Achilles' heel of most exploits.

5. Poisoning Machine Learning: One of the most promising cybersecurity tools is machine learning. Devices and systems can be trained to perform specific tasks autonomously, such as baselining behavior, applying behavioral analytics to identify sophisticated threats, or taking effective countermeasures when facing one. Tedious manual tasks, such as tracking and patching devices, can also be handed over to a properly trained system. However, this process can also be a double-edged sword. Machine learning has no conscience, so bad input is processed as readily as good. By targeting and poisoning the machine learning process, cybercriminals will be able to train devices or systems to not apply patches or updates to a particular device, to ignore specific types of applications or behaviors, or to not log specific traffic to better evade detection.
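As a rough illustration of why tampered training data matters, the sketch below flips a share of labels in a toy scikit-learn classifier and compares accuracy before and after. The synthetic dataset, the 30% flip rate, and the logistic-regression model are illustrative assumptions, not a description of any real attack.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy data standing in for "benign vs. malicious" telemetry.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def train_and_score(labels):
    """Train on the given labels and report accuracy on a clean, held-out test set."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Clean baseline.
print("clean training labels:   ", round(train_and_score(y_train), 3))

# Poisoned run: an attacker who can tamper with the training feed flips 30% of
# the labels so malicious examples are learned as benign, and vice versa.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]
print("poisoned training labels:", round(train_and_score(poisoned), 3))
```

The measurable drop in accuracy on clean test data is the point: a model learns whatever its training feed says, which is why the integrity of that feed has to be protected as carefully as the model itself.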

Preparing for Tomorrow's Threats
Understanding the direction being taken by some of the most forward-thinking malicious actors requires organizations to rethink their current security strategy. Given the nature of today's global threat landscape, organizations must react to threats at machine speed. Machine learning and AI can help in this fight. Integrating machine learning and AI across point products deployed throughout the distributed network, combined with automation and innovation, will significantly help fight increasingly aggressive cybercrime. It is important to remember, however, that these will soon be the same tools leveraged against you, and to plan accordingly.

Related Content:
7 Real-Life Dangers That Threaten Cybersecurity
Rise of the 'Hivenet': Botnets That Think for Themselves
Defending Against an Automated Attack Chain: Are You Ready?

About the Author

Derek Manky

Chief Security Strategist & VP Global Threat Intelligence, FortiGuard Labs

As Chief Security Strategist & VP Global Threat Intelligence at FortiGuard Labs, Derek Manky formulates security strategy with more than 15 years of cybersecurity experience. His ultimate goal is to make a positive impact toward the global war on cybercrime. Manky provides thought leadership to the industry, and has presented research and strategy worldwide at premier security conferences. As a cybersecurity expert, his work has included meetings with leading political figures and key policy stakeholders, including law enforcement, who help define the future of cybersecurity. He is actively involved with several global threat intelligence initiatives, including NATO NICP, Interpol Expert Working Group, the Cyber Threat Alliance (CTA) working committee, and FIRST, all in an effort to shape the future of actionable threat intelligence and proactive security strategy.

