The Misunderstood Security Risks of Behavior Analytics, AI & ML
Separating the hype from reality makes clear the risks of relying on AI and ML to identify security threats.
If you believe the hype, artificial intelligence (AI) and machine learning (ML) already play a vital role in securing the modern IT infrastructure. The truth is these are powerful but often misunderstood tools that, in some cases, can actually compromise a company's data security if not implemented correctly.
In many instances, "AI" is overused marketing jargon that doesn't accurately describe the technology in place, which falls short of true artificial intelligence. So-called "AI platforms" can leave CIOs scratching their heads, wondering whether the platform is actually learning the behavior of each individual customer in a massive and growing customer database or simply making educated guesses based on a fixed algorithm. It can be difficult to tell the difference between real AI and standard fixed logic.
With cloud applications like Microsoft Teams, SharePoint, Microsoft 365, Google Drive, and more, end users, rather than an administrator, are allowed to define who can access files and folders. While this is very convenient for end users, it makes it nearly impossible to control access to company data in a standardized way that conforms to policy, because everyone can change permissions. The only realistic way to manage this problem is some form of automated solution or crowdsourcing of access reviews across the organization.
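As a rough illustration of what such an automated review could look like (a minimal sketch, not any vendor's actual product; the data model, group names, and policy rules here are invented for the example), a sweep over permission metadata can flag items whose sharing violates policy and queue them for a human reviewer:

```python
from dataclasses import dataclass

@dataclass
class FileShare:
    path: str
    shared_with: list[str]  # user or group identifiers
    external: bool          # shared outside the organization

# Hypothetical policy: anything shared externally or with broad
# "everyone"-style groups needs a human access review.
BROAD_GROUPS = {"Everyone", "All Users", "Anyone with the link"}

def flag_for_review(shares: list[FileShare]) -> list[FileShare]:
    """Return the shares that violate policy and need review."""
    return [
        s for s in shares
        if s.external or BROAD_GROUPS.intersection(s.shared_with)
    ]

shares = [
    FileShare("/finance/q3-forecast.xlsx", ["Anyone with the link"], external=True),
    FileShare("/hr/handbook.pdf", ["All Employees"], external=False),
]
for s in flag_for_review(shares):
    print(f"REVIEW: {s.path} shared with {s.shared_with}")
```

Note that nothing here requires machine learning at all; the hard part is gathering the permission metadata from each cloud application, which is exactly where end-user-controlled sharing makes standardization difficult.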
Most organizations have such a high volume of data in their environment that many try to use AI as an automated solution to find and review access to only sensitive data. This saves users from having to review the millions of files tied to their permissions; instead, these solutions surface only the subset (still possibly thousands) of files where permissions should be controlled. This seems sensible; however, it ignores any data that doesn't match the patterns the algorithms look for, and it often generates false positives.
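In practice, the "AI" doing this triage is often closer to fixed pattern matching. A minimal sketch of the idea, with made-up patterns, shows both failure modes described above: benign look-alikes come back as false positives, and anything that doesn't match a known pattern is silently ignored:

```python
import re

# Hypothetical "sensitive data" patterns; real products use many more,
# but the failure modes are the same.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> set[str]:
    """Return the names of the sensitive-data patterns found in text."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

print(classify("Employee SSN: 123-45-6789"))           # caught: {'ssn'}
print(classify("Ticket ref 987-65-4321, not an SSN"))  # false positive
print(classify("API key: sk_live_abc123"))             # missed entirely
```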
Three Problems With Using AI for Behavior Analytics
The reality is that there is no true AI solution on the current market for behavior analytics. True AI works by creating randomly generated algorithms and testing them against a large set of "correct" answers to find which algorithms work best. This raises three important issues with using AI for behavior analytics.
1. No company has a large enough set of customer data to train an algorithm on. Even if one did, it would not want to expose that information, as doing so would make it a prime target for hackers.
2. Each customer is unique, so even if a vendor could train its algorithms on its customers' data, the result wouldn't necessarily work for any specific customer's business.
3. If you train an algorithm on a customer-by-customer basis, you are training it on the current state of that customer's system. This is great if you're already in an ideal state; if not, it will perpetuate any existing security issues, as the sketch after this list illustrates.
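This third problem is easy to demonstrate with a standard anomaly detector. The sketch below uses scikit-learn's IsolationForest as one common choice (not necessarily what any given product uses) on synthetic access-log features: because a cluster of over-permissioned accounts is already present in the training data, the model learns that behavior as "normal" and never flags it.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic "access log" features: [files touched per day, distinct folders].
rng = np.random.default_rng(0)
typical = rng.normal(loc=[20, 3], scale=[5, 1], size=(70, 2))
# 30 accounts that already read far too much: a pre-existing security
# issue baked into the system the model is trained on.
over_broad = rng.normal(loc=[300, 40], scale=[10, 3], size=(30, 2))
training_log = np.vstack([typical, over_broad])

# Train on the current system, warts and all.
model = IsolationForest(random_state=0).fit(training_log)

# Activity matching the over-permissioned cluster scores as normal (1);
# only behavior beyond the already-bad baseline stands out (-1).
print(model.predict([[300, 40]]))   # [1]: the bad baseline was "learned"
print(model.predict([[900, 200]]))  # [-1]: only further escalation is flagged
```

The model is doing exactly what it was asked to do; the problem is that "normal" was defined by a system that was never ideal to begin with.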
Cloud and Remote Work Add Challenges
From a security perspective, cloud adoption brings all sorts of data challenges, compounded by employees working from home. Remote workers represent a growing base of end users suddenly granted access to increasing amounts of data.
Most employees without specific training are unaware of where the cloud begins and ends, leaving room for unintentional violations of company security policies. This is becoming a very common internal security threat for companies, especially when databases are programmed to use AI to prune data access. Used improperly, this type of AI-driven access pruning often introduces serious vulnerabilities.
Many companies claim to use AI to monitor and improve their data access. Contrary to what many assume AI can do, it is not generally used to sort or distribute data; its most secure use is as an additional layer of access control for databases.
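One hedged sketch of what "AI as a layer of access control" can mean in practice (every function, threshold, and role name here is invented for illustration, and the risk scorer is a stand-in for any ML model's output): hard policy rules always win, the model only adds a risk signal, and uncertain cases are escalated to a person rather than decided automatically.

```python
# Sketch: AI as one layer of access control, never the only one.

def hard_policy_allows(user_role: str, table: str) -> bool:
    """Non-negotiable rules, e.g. interns never read payroll."""
    denied = {("intern", "payroll")}
    return (user_role, table) not in denied

def risk_score(user_role: str, table: str, rows_requested: int) -> float:
    """Stand-in for a model's score in [0, 1]; here a toy heuristic."""
    return min(1.0, rows_requested / 100_000)

def decide(user_role: str, table: str, rows_requested: int) -> str:
    if not hard_policy_allows(user_role, table):
        return "deny"  # policy first, model second
    score = risk_score(user_role, table, rows_requested)
    if score < 0.3:
        return "allow"
    if score > 0.8:
        return "deny"
    return "escalate to human review"  # the human component

print(decide("intern", "payroll", 10))      # deny (hard rule)
print(decide("analyst", "orders", 1_000))   # allow (low risk)
print(decide("analyst", "orders", 50_000))  # escalate to human review
```

Keeping the hard rules ahead of the model, and routing mid-confidence cases to people, is what prevents a mis-trained model from silently widening access.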
Blind Trust in AI Is Risky
Governing and securing data is as critical as ever, especially as remote- and hybrid-work trends continue. While AI and ML are powerful tools, companies need to understand whether they are leveraging the true technology or something in a clever wrapper that may not be up to the security task.
These technologies cannot be implemented in a vacuum, and businesses need to take decisive steps, such as employee training and governed access, to mitigate critical security risks and ensure their data is secured. At the end of the day, AI is like any other computer program that moves data from one place to another: bad data in, bad data out. With customer databases as large as they are, and with employees unintentionally committing security violations, it is critical to have a human component available to check the results; the real risk lies in trusting AI blindly.