How AI Is Shaping the Future of Cybercrime
Cybercriminals are increasingly using AI tools to launch successful attacks, but defenders are battling back.
COMMENTARY
As cybersecurity experts predicted a year ago, artificial intelligence (AI) has been a central player in the 2023 cybercrime landscape, driving an increase in attacks while also contributing to improvements in defenses against future attacks. Now, heading into 2024, experts across the industry expect AI to exert even more influence on cybersecurity.
The Google Cloud Cybersecurity Forecast 2024 sees generative AI and large language models contributing to an increase in various forms of cyberattacks. More than 90% of Canadian CEOs in a KPMG poll think generative AI will make them more vulnerable to breaches. And a UK government report says AI poses a threat to the country's next election.
While AI-related threats are still in their early stages, the volume and sophistication of AI-driven attacks are increasing every day. Organizations need to prepare themselves for what's ahead.
4 Ways Cybercriminals Are Leveraging AI
There are four main ways adversaries are using commonly available AI tools like ChatGPT, DALL-E, and Midjourney: automated phishing attacks, impersonation attacks, social engineering attacks, and fake customer support chatbots.
Spear-phishing attacks are getting a major boost from AI. In the past, phishing attempts were easier to identify because many were riddled with poor grammar and spelling errors. Discerning readers could spot such odd, unsolicited communication and assume it was likely generated in a country where English isn't the primary language.
ChatGPT has pretty much eliminated that tip-off. With its help, a cybercriminal can write an email in perfect English, styled in the language of a legitimate source. Cybercriminals can send out automated communications mimicking, for example, an authority at a bank requesting that users log in and provide information about their 401(k) accounts. When a user clicks a link and starts furnishing information, the hacker takes control of the account.
How popular is this trick? The SlashNext State of Phishing Report 2023 attributed a 1,265% rise in malicious phishing emails since the fourth quarter of 2022 largely to targeted business email compromises using AI tools.
Impersonation attacks are also on the rise. Using ChatGPT and other tools, scammers are impersonating real individuals and organizations to carry out identity theft and fraud. As with phishing attacks, they use AI tools to generate voice messages that pretend to come from a trusted friend, colleague, or family member in an attempt to get information or access to an account.
An example took place in Saskatchewan, Canada, in early 2023. An elderly couple received a call from someone impersonating their grandson claiming that he had been in a car accident and was being held in jail. The caller relayed a story that he had been hurt, had lost his wallet, and needed $9,400 in cash to settle with the owner of the other car to avoid facing charges. The grandparents went to their bank to withdraw the money but avoided being scammed when a bank official convinced them the request wasn't legitimate.
Industry experts believed this sophisticated use of AI voice-cloning technology was still a few years away; few expected it to become this effective this quickly.
Cybercriminals are using ChatGPT and other AI chatbots to carry out social engineering attacks that foment chaos. They use a combination of voice cloning and deepfake technology to make it look like someone is saying something incendiary.
This happened the night before Chicago's mayoral election back in February. A hacker created a deepfake video and posted it to X, formerly known as Twitter, appearing to show candidate Paul Vallas making incendiary comments and spouting controversial policy positions. The video generated thousands of views before it was removed from the platform.
The last tactic, fake customer service chatbots, already exists but is probably a year or two away from gaining wide popularity. A fraudulent bank site could host a customer service chatbot that appears human and manipulates unsuspecting victims into handing over sensitive personal and account information.
How Cybersecurity Is Fighting Back
The good news is that AI is also being used as a security tool to combat AI-driven scams. Here are three ways the cybersecurity industry is fighting back.
Developing Their Own Adversarial AI
Essentially, this is creating "good AI" and training it to combat "bad AI." By developing their own generative adversarial networks (GANs), cybersecurity firms can learn what to expect in the event of an attack. A GAN consists of two neural networks: a generator that creates new data samples and a discriminator that distinguishes the generated samples from real ones.
Using this technique, GANs can generate new attack patterns that resemble previously observed ones. By training a model on these patterns, defenders can predict the kinds of attacks to expect and the techniques cybercriminals are likely to use.
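To make the generator-discriminator dynamic concrete, here is a minimal GAN training loop in PyTorch. It is a sketch only: the random feature vectors stand in for real attack telemetry, and the layer sizes and learning rates are illustrative, not drawn from any actual security product.

```python
import torch
import torch.nn as nn

FEATURES = 16  # dimensionality of one "event" feature vector (assumed)
NOISE = 8      # size of the generator's random input

# Generator: maps random noise to synthetic attack-like samples.
generator = nn.Sequential(
    nn.Linear(NOISE, 32), nn.ReLU(),
    nn.Linear(32, FEATURES),
)
# Discriminator: outputs a logit scoring real vs. generated.
discriminator = nn.Sequential(
    nn.Linear(FEATURES, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

real_data = torch.randn(512, FEATURES)  # placeholder for observed attack samples

for step in range(1000):
    # Train the discriminator to separate real from generated samples.
    fake = generator(torch.randn(64, NOISE)).detach()
    real = real_data[torch.randint(0, len(real_data), (64,))]
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(64, NOISE))),
                     torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The adversarial loop is the point: the discriminator improves at spotting synthetic samples precisely because the generator keeps improving at producing them, which is the dynamic defenders exploit when training detection models on GAN-generated attack patterns.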
Anomaly Detection
This is establishing a baseline of normal behavior and then identifying deviations from it. When someone logs into an account from an unusual location, or when someone in the accounting department suddenly starts running PowerShell, a tool normally used by software developers, that could be an indicator of an attack. Cybersecurity systems have long used this model, but the added technological horsepower of AI models can flag potentially suspicious activity more effectively.
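As a rough illustration of baseline-and-deviation scoring, the sketch below trains scikit-learn's IsolationForest on a month of "normal" logins and then scores a new event. The features (login hour, distance from the user's usual location, rarity of launched processes) are hypothetical stand-ins for real telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline: synthetic "normal" logins clustered around business hours
# and the user's usual location.
normal = np.column_stack([
    rng.normal(10, 2, 1000),      # login hour
    rng.normal(5, 3, 1000),       # km from usual location
    rng.normal(0.1, 0.05, 1000),  # rarity score of processes launched
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# A 3 a.m. login from 8,000 km away that launches unusual tooling.
suspect = np.array([[3.0, 8000.0, 0.9]])
print(model.predict(suspect))            # -1 means flagged as anomalous
print(model.decision_function(suspect))  # lower score = more anomalous
```

In production, the same idea runs over far richer features, but the pattern is identical: learn what normal looks like, then surface whatever falls outside it.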
Detection and Response
Using AI systems, cybersecurity tools and services like managed detection and response (MDR) can better detect threats and communicate information about them to security teams. By delivering succinct, relevant information, AI helps security teams identify and address legitimate threats more rapidly. Less time spent chasing false positives and deciphering security logs lets teams launch more effective responses.
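As a simplified, hypothetical illustration of that prioritization, the sketch below ranks alerts by combining a detection model's confidence with asset criticality and corroborating signals, so isolated low-confidence hits (the likely false positives) sink to the bottom. The field names and weights are invented for illustration, not taken from any MDR product.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    model_confidence: float     # 0..1, from the detection model
    asset_criticality: int      # 1 (lab box) .. 5 (domain controller)
    corroborating_signals: int  # other detections on the same host

def triage_score(a: Alert) -> float:
    # Weight model confidence by business impact and corroboration,
    # so an isolated low-confidence hit scores near the bottom.
    return a.model_confidence * a.asset_criticality * (1 + a.corroborating_signals)

alerts = [
    Alert("edr", 0.55, 5, 2),       # moderate confidence, critical asset
    Alert("ids", 0.90, 1, 0),       # high confidence, low-value asset
    Alert("email-gw", 0.30, 3, 0),  # probable false positive
]
for a in sorted(alerts, key=triage_score, reverse=True):
    print(f"{a.source}: score={triage_score(a):.2f}")
```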
Conclusion
AI tools are opening society's eyes to new possibilities in virtually every field of work. As hackers take fuller advantage of large language model technologies, the industry will need to keep pace to keep the AI threat under control.