Imagine a strain of malware hidden on your colleague's computer. It watches their every move, quietly listening and learning as it sifts through their email, calendar, and messages. In the process, it doesn't just learn their writing style. It learns the unique way they interact with nearly everyone in their life.
It picks up on the inside jokes they share with their spouse. It knows the formal tone they employ with their CEO. And it recognizes the familiar cadence they use with one of their most frequent contacts: You.
Their emails to you are often casual. And before important meetings, they often send you a friendly message of encouragement. One day, as you prepare for a morning meeting with a client, you get an email from them. It reads:
I'll see you at 9 for our call. You're gonna kill it today.
See dial-in details for the call attached.
Most people wouldn't question the legitimacy of this email – it's characteristically laid back, and your email client tells you it's from a trusted contact. But in reality, the attachment is a malicious payload that, if opened, would rapidly encrypt your data and hold your company's files hostage for a $30,000 ransom.
This example is hypothetical, but it's far from impossible. With the emergence of offensive artificial intelligence (AI), we are on the cusp of a new era of email attacks – a shift away from the low-grade attacks of yesterday, such as the long-lost relative explaining in broken English the large sum of inheritance you are owed.
Today, we are moving toward a much more subtle and dangerous form of attack that masquerades as your most trusted contacts and blends into the daily noise of your digital interactions. As offensive AI emerges on the threat landscape's horizon, it becomes increasingly crucial for defenders to seek tools that can separate the signal from the noise.
AI: The Good, the Bad, and the Ugly
Artificial intelligence is influencing our lives in many ways, from healthcare to smart cities. For example, BlueDot's AI flagged a cluster of "unusual pneumonia" cases around a market in Wuhan, China, nine days before the World Health Organization issued its first alert. AI is now being used to mine the literature on the disease and its genome in search of promising medical compounds for treatments.
Elsewhere, densely populated urban areas are using AI to reduce traffic congestion and accidents. Sensors installed at parking lots, traffic signals, and intersections feed data that AI correlates to help city governments plan their initiatives.
But AI won't just be used for good. Inevitably, it will also open the door for sophisticated cyberattacks like the threat spelled out above. Indeed, AI will supercharge spear-phishing with automated, intelligent technology. Hyper-realistic, machine-written copy is not some distant fiction. Rather, the technology required for this already exists today.
From Google's DeepMind to voice assistants like Amazon's Alexa, machines can now recognize and mimic subtle patterns in human behavior. Recently, OpenAI's language model GPT-3 generated a coherent op-ed, published in The Guardian, on what it's like to be a robot. In the wake of these developments, imitating an email from your colleague would be child's play for even a moderately advanced AI.
Artificial intelligence won't just power phishing attacks either. It will augment every kind of cyberattack with adaptive decision-making capabilities. Automatically crafting a well-informed, well-written email containing a malicious payload is just the start; the inbox is simply a gateway into the organization.
Once inside those gates, AI will supercharge every subsequent step of the attack kill chain – cracking even complex passwords in seconds, autonomously finding the optimal pathway to its final target, and exfiltrating only relevant, sensitive, and valuable documents at machine speed and stealth.
Fight AI With AI
To keep pace with intelligent, unpredictable threats, cybersecurity will have to adopt intelligent defenses of its own. The legacy approach used by many email security vendors – which relies on predefined rules and signatures based on yesterday's attacks – is no longer sufficient in the age of offensive AI. These tools may catch spam and other low-hanging fruit, but in the face of advanced and novel email threats, they don't stand a chance.
Cybersecurity firm Darktrace uses AI on the defensive side to gain a complex, nuanced, and continuously evolving understanding of each individual email user – learning how they behave, when and where they typically log in, and how they typically communicate. Rather than measuring an inbound email against a list of "known bads," it analyzes thousands of metrics around the email and asks, "Is this email unusual or anomalous?"
This enables the technology, Antigena Email, to step in and decisively neutralize malicious emails that fall outside of the sender's or the recipient's typical "pattern of life." Hundreds of organizations that have adopted this fundamentally new approach to email security have reported far higher catch rates, with advanced threats that slipped through traditional tools spotted and stopped before reaching the inbox.
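To make the "pattern of life" idea concrete, here is a minimal, purely illustrative sketch of anomaly scoring: an inbound email's features are compared against a sender's historical baseline using z-scores. The feature names (send hour, body length, link count) and the scoring formula are hypothetical examples for explanation only, not Darktrace's actual implementation.

```python
# Toy anomaly scoring for inbound email, loosely inspired by the
# "is this email unusual?" idea above. All features and thresholds
# are hypothetical, chosen for illustration.
from statistics import mean, stdev

def anomaly_score(history, email):
    """Average absolute z-score of an email's features vs. the sender's history."""
    scores = []
    for feature, value in email.items():
        past = [h[feature] for h in history]
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:          # avoid division by zero for constant features
            sigma = 1e-9
        scores.append(abs(value - mu) / sigma)
    return sum(scores) / len(scores)

# Hypothetical baseline: when and how this sender usually emails you
history = [
    {"hour": 9,  "length": 420, "links": 0},
    {"hour": 10, "length": 380, "links": 1},
    {"hour": 9,  "length": 450, "links": 0},
    {"hour": 11, "length": 400, "links": 1},
]

typical = {"hour": 10, "length": 410, "links": 1}  # fits the pattern
odd     = {"hour": 3,  "length": 40,  "links": 5}  # 3 a.m., terse, link-heavy

# The out-of-pattern email scores far higher than the typical one
assert anomaly_score(history, typical) < anomaly_score(history, odd)
```

A real system would learn thousands of such signals continuously and weigh them jointly; the point of the sketch is only that "known bad" lists are replaced by a per-user statistical baseline.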
Darktrace's self-learning AI technology has already caught a range of creative attacks, from fake invoices claiming to come from a familiar supplier to an impersonation of a board member targeting several high-profile figures in an organization. With open source AI tools now at an attacker's disposal, these threats are only going to become increasingly advanced, making defensive AI an ever more vital technology.
Hackers are constantly looking to outsmart and outpace defenders, and they will no doubt harness the power of machine learning to supercharge their attacks in the near future. In the ever-evolving cat-and-mouse game between cybercriminals and security professionals, defenders must themselves adopt cutting-edge AI technology to stay ahead of the threats.
Mariana Pereira is the director of email security products at Darktrace, with a primary focus on the capabilities of AI cyber defenses against email-borne attacks. Mariana works closely with the development, analyst, and marketing teams to advise technical and nontechnical audiences on how best to augment cyber resilience within the email domain, and how to implement AI technology as a means of defense. She speaks regularly at international events, with a specialty in presenting on sophisticated, AI-powered email attacks. She holds an MBA from the University of Chicago, and speaks several languages, including French, Italian, and Portuguese.