How AI and Cybersecurity Will Intersect in 2020
Understanding the new risks and threats posed by increased use of artificial intelligence.
December 30, 2019
So much of the discussion about cybersecurity's relationship with artificial intelligence and machine learning (AI/ML) revolves around how AI and ML can improve security product functionality. However, that is actually only one dimension of a much broader collision between cybersecurity and AI.
As applied AI/ML advances and spreads across a plethora of business and technology use cases, security experts will need to help their colleagues in the business address new risks, new threat models, new domains of expertise, and, yes, sometimes new security solutions.
Heading into 2020, business and technology analysts expect practical applications of AI and ML to accelerate. That means CISOs and security professionals will need to get up to speed quickly on AI-driven enterprise risks. Here are some thoughts from security veterans on what to expect from AI and cybersecurity in 2020.
The security industry will need to keep tabs on emerging cases of attackers poisoning the AI/ML training data behind business applications to disrupt decision-making and operations. Imagine, for example, a company that depends on AI to automate supply chain decisions: a sabotaged data set could result in drastic under- or oversupply of product.
"Expect to see attempts to poison the algorithm with specious data samples specifically designed to throw off the learning process of a machine learning algorithm," says Haiyan Song, senior vice president and general manager of security markets for Splunk. "It's not just about duping smart technology, but making it so that the algorithm appears to work fine - while producing the wrong results."
Business email compromise (BEC) has cost organizations billions of dollars as attackers pose as CEOs and other senior-level managers to trick the people in charge of bank accounts into making fraudulent transfers under the guise of closing a deal or otherwise getting business done. Now attackers are taking BEC attacks to a new arena with the help of AI technology: the telephone. This year brought one of the first reported incidents in which an attacker used deepfake audio to impersonate a company CEO over the phone, tricking an employee at a British energy firm into wiring $240,000 to a fraudulent bank account. Experts believe we will see increased use of AI-powered deepfake audio of CEOs to carry out BEC-style attacks in 2020.
"Even though many organizations have educated employees on how to spot potential phishing emails, many aren't ready for voice to do the same as they're very believable and there really aren't many effective, mainstream ways of detecting them," says PJ Kirner, CTO and founder of Illumio. "And while these types of 'voishing' attacks aren't new, we'll see more malicious actors leveraging influential voices to execute attacks next year."
Deepfakes are just one way the bad guys will leverage AI to perpetrate attacks. Security researchers are on tenterhooks waiting to discover AI-powered malware evasion techniques. Some believe 2020 will be the year they find the first malware using AI models to evade sandboxes.
"Instead of using rules to determine whether the 'features' and 'processes' indicate the sample is in a sandbox, malware authors will instead use AI, effectively creating malware that can more accurately analyze its environment to determine if it is running in a sandbox, making it more effective at evasion," predicts Saumitra Das, CTO of Blue Hexagon.
Expect a game of cat-and-mouse in the fraud prevention world of financial services when it comes to using AI and biometric technology to onboard and authenticate customers. Financial institutions are rapidly iterating on authentication mechanisms that use facial recognition and AI to scan, analyze, and confirm online identity using mobile cameras and on-file government-issued IDs. But the bad guys will force them to stay on their toes, using AI to create deepfakes designed to dupe these systems.
"In 2020, we will see an increase in deepfake technology being weaponized for fraud as biometric-based authentication solutions are widely adopted," says Robert Prigge, president of Jumio.
The combination of big data, AI, and strict privacy regulations is going to cause enterprise headaches until security and privacy professionals develop better ways to shield the kind of customer analytics that fuel many AI applications today. The good news is that other forms of AI can be used to accomplish this.
"In the coming year, we will see practical applications of AI algorithms, including differential privacy, a system in which a description of patterns in a dataset is shared while withholding information about individuals," says Rajarshi Gupta, head of artificial intelligence at Avast. Gupta says differential privacy will allow companies "to profit from big data insights as we do today, but without exposing all the private details" of customers and other individuals.
There are some hard lessons ahead with AI ethics, fairness, and consequences. These issues are relevant to security leaders who are tasked with maintaining the integrity and availability of systems that rely on AI to operate.
"We are going to get a lot of new lessons from the usage of AI in cybersecurity this coming year. The recent story about Apple Card offering different credit limits for men and women has pointed out that we don't readily understand how these algorithms work," says Todd Inskeep, principal of Cyber Security Strategy for Booz Allen Hamilton and RSA Conference Advisory Board Member. "We are going to find some hard lessons in situations where an AI appeared to be doing one thing and we eventually figured out the AI was doing something else, or possibly nothing at all."