Now that attackers can bypass preventive controls, we need to find and stop them once they're already inside.
February 7, 2022
Multifactor authentication (MFA) became mainstream in 2021. Google began pushing to make MFA its default for all users. The Biden administration even required all federal agencies and contractors to implement MFA in its Executive Order on Improving the Nation's Cybersecurity.
MFA adds extra layers of identity verification so that attackers cannot compromise an account with credentials alone. It includes measures such as biometrics (e.g., fingerprints), personal information, and one-time codes sent to a second device or account.
While MFA is necessary, it is ultimately only a perimeter defense. And no matter how tightly sealed the perimeter of a digital environment may be, attackers will still slip in. Mitigating these threats means accepting that breaches are inevitable and implementing cyber defense technologies that can detect and respond to threats once an intruder is already inside your system. In this article, we'll delve into a real-world attack scenario where attackers successfully evaded MFA but were spotted and stopped by our artificial intelligence (AI).
Breaking Into a Microsoft 365 Account
A member of the financial team at a company with over 10,000 Microsoft 365 users had their account hijacked despite the company having MFA security in place. The attacker successfully passed MFA by manipulating the user's details, modifying the registered phone number so the authentication text message was sent directly to them. The attacker used social engineering to achieve the initial phone number change in this instance, but a variety of methods exist to bypass MFA.
In this case, the security team was also relying on AI as another line of defense to detect and respond to attacks that made it past the perimeter.
After the initial intrusion, the AI detected a series of suspicious logins where the Microsoft 365 account was accessed from unusual locations in the US and Ghana. This was atypical for the specific organization and user, not based on global trends or abstract threat intelligence.
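To make the idea concrete, here is a minimal Python sketch of per-user login-location baselining, the kind of "unusual for this organization and user" check described above. The event fields and sample data are invented for illustration and are not Darktrace's actual model, which learns far richer behavioral patterns.

```python
# Hypothetical sketch: flag logins from countries a user has never
# logged in from before, based on that user's own history.
from collections import defaultdict

def build_baselines(login_events):
    """Map each user to the set of countries they normally log in from."""
    baselines = defaultdict(set)
    for event in login_events:
        baselines[event["user"]].add(event["country"])
    return baselines

def flag_unusual_logins(baselines, new_events):
    """Return logins from countries never seen before for that user."""
    return [e for e in new_events
            if e["country"] not in baselines.get(e["user"], set())]

# Invented sample data: a finance user who normally logs in from the US.
history = [
    {"user": "finance01", "country": "US"},
    {"user": "finance01", "country": "US"},
]
incoming = [
    {"user": "finance01", "country": "US"},
    {"user": "finance01", "country": "GH"},  # Ghana: new for this user
]
alerts = flag_unusual_logins(build_baselines(history), incoming)
print(alerts)  # only the Ghana login is flagged
```

The key point is that the baseline is per-user, not global: a login from Ghana is only anomalous because this particular user never logs in from there.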
The AI then discovered the attacker changing email rules on the victim's account, as well as on a number of shared inboxes, including one related to credit control. The attacker could have been undertaking several malicious activities, such as seeking sensitive data or learning the victim's writing style to craft convincing phishing emails to further targets. The intruder also deleted multiple emails in an apparent attempt to conceal their presence.
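Malicious inbox rules are a classic account-takeover tell: rules that silently delete mail or forward it to an external address. The sketch below shows the general shape of such a check; the rule fields, domain, and sample rules are hypothetical, not any specific product's schema.

```python
# Hedged sketch: scan mailbox rules for two common takeover patterns --
# rules that silently delete messages and rules that auto-forward
# mail outside the organization. Field names are invented.
def audit_rules(rules, internal_domain="example.com"):
    """Return (rule_name, reason) pairs for rules worth investigating."""
    findings = []
    for rule in rules:
        if rule["action"] == "delete":
            findings.append((rule["name"], "silently deletes matching mail"))
        elif (rule["action"] == "forward"
              and not rule["target"].endswith("@" + internal_domain)):
            findings.append(
                (rule["name"], f"forwards mail externally to {rule['target']}"))
    return findings

# Invented sample rules: one benign, two suspicious.
rules = [
    {"name": "Invoices", "action": "move", "target": "Archive"},
    {"name": "rule1", "action": "forward", "target": "attacker@evil.example"},
    {"name": "rule2", "action": "delete"},
]
for name, reason in audit_rules(rules):
    print(f"{name}: {reason}")
```

In practice a behavioral system would also weigh *when* and *by whom* a rule was created, not just its contents, but the pattern being hunted is the same.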
How AI Investigated the Threat
While this attack sidestepped the other security tools in this organization's arsenal, it was still identified because the company was using AI to monitor Microsoft 365 for unusual behavior. By using machine learning to investigate, the AI technology connected the dots between these suspicious events to form a cohesive outline of an account takeover. The resulting incident summary report gave the security team the details they needed to take action. As a result, they were able to react before the threat actor could do major damage by exploiting critical shared mailboxes.
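"Connecting the dots" can be illustrated with a toy correlation step: individually weak signals on the same account are combined, and an incident is raised only when their combined weight crosses a threshold. The signal names, weights, and threshold below are invented for the sketch and bear no relation to any real product's scoring.

```python
# Toy illustration: correlate separate weak signals per account and
# raise an incident when their combined score crosses a threshold.
# Weights and threshold are arbitrary illustrative values.
from collections import defaultdict

WEIGHTS = {"unusual_login": 2, "new_inbox_rule": 2, "mass_delete": 3}
THRESHOLD = 5

def correlate(signals):
    """Group (account, signal) pairs; return accounts whose total
    weighted score meets the incident threshold, with their signals."""
    scores = defaultdict(int)
    per_account = defaultdict(list)
    for account, kind in signals:
        scores[account] += WEIGHTS.get(kind, 1)
        per_account[account].append(kind)
    return {a: per_account[a] for a, s in scores.items() if s >= THRESHOLD}

# Invented signals mirroring the article's incident: one account shows
# all three behaviors, another shows only a single odd login.
signals = [
    ("finance01", "unusual_login"),
    ("finance01", "new_inbox_rule"),
    ("finance01", "mass_delete"),
    ("sales02", "unusual_login"),
]
incidents = correlate(signals)
print(incidents)  # only finance01 crosses the threshold
```

This is the value of correlation: no single signal is damning on its own, but together they outline an account takeover.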
Intellectual property and sensitive financial information regarding the organization and its customers could have been accessed by the attacker had the threat progressed. This information could have set off a chain reaction that enabled future requests for fraudulent payments, potentially leading to costs exceeding tens of thousands of dollars.
Defending Inside and Out
Using preventive controls like MFA remains an important strategy for defending digital environments in depth, but attackers will find their way around these hurdles and continue to innovate as target environments grow larger and more complex. The rapid adoption of software-as-a-service (SaaS) platforms has made digital environments even more unwieldy.
Fortunately, the AI was able to understand the pattern of life for all users across the compromised organization's cyber ecosystem, identifying the subtle signs of threat behavior. Further, the enterprisewide application allowed the AI to operate throughout the entire digital environment, including covering attacks in the cloud and SaaS. By spotting and stopping the threat at its earliest stages, AI prevented the compromise from escalating into a crisis.
About the Author(s)
Director of Threat Hunting, Darktrace
Max is a cybersecurity expert with over a decade of experience in the field, specializing in areas such as penetration testing, red-teaming, SIEM and SOC consulting, and hunting advanced persistent threat (APT) groups. At Darktrace, Max oversees global threat hunting efforts, working with strategic customers to investigate and respond to cyber-threats. He works closely with the R&D team at Darktrace's Cambridge UK headquarters, leading research into new AI innovations and their various defensive and offensive applications. Max's insights are regularly featured in international media outlets such as the BBC, Forbes and WIRED. When living in Germany, he was an active member of the Chaos Computer Club. Max holds an MSc from the University of Duisburg-Essen and a BSc from the Cooperative State University Stuttgart in International Business Information Systems.