6 Ways Cybersecurity Is Gut-Checking the ChatGPT Frenzy
Generative AI chatbots like ChatGPT are the buzziest of the buzzy right now, but the cyber community is starting to mature when it comes to assessing where it should fit into our lives.
June 28, 2023
![AI concept with artificial human head](https://eu-images.contentstack.com/v3/assets/blt6d90778a997de1cd/blte3eab9bdb7ddb7be/64f17811cf4d24cee4d0bcb9/AI_Slide1-Sonja_Novak-Alamy.jpg?width=700&auto=webp&quality=80&disable=upscale)
Source: Sonja Novak via Alamy Stock Photo
Generative AI, ChatGPT and OpenAI, large language models (LLMs) — these are all now near-daily buzzwords and phrases heard in conversations across the cybersecurity community. It's clear that chatbot-based artificial intelligence (AI) continues to fuel a particularly heady version of the technology hype cycle, but there's also an astounding amount of practical activity.
To wit: Security vendors large and small have integrated AI chatbots into their offerings (looking at you, Charlotte from CrowdStrike); investment in GPT-based AI security is one of the most vibrant areas of funding in startups these days; and it's impossible not to stumble across research outlining potential generative AI-related cybersecurity threats and how to combat them (phishing and deepfakes and malware, oh my!).
It's a lot.
In this featured piece, Dark Reading leaves behind the hill of impossible expectations for a bit and takes a real-world look at how the security conversation around this new generation of AI is starting to deepen.
That includes sober assessments from enterprise users and analysts, plus a look at efforts to address some of the cyber-risk that came to light in the first flush of irrational exuberance following ChatGPT's launch last November.
It turns out that organizations are learning to be a little afraid of ChatGPT and related technology: a ChatGPT attitudes survey from Malwarebytes out this week shows that 81% of respondents are "concerned" by generative AI cybersecurity risks — and 63% outright distrust it.
Mark Stockley, cybersecurity evangelist at Malwarebytes, noted in a statement, "Public sentiment on ChatGPT is a different beast [from prior AI-enabled cybersecurity], and the uncertainty around how ChatGPT will change our lives is compounded by the mysterious ways in which it works."
In fact, half (52%) of respondents called for a pause on ChatGPT development to allow regulations to catch up, a direct echo of the open letter issued earlier this year by tech titans and AI experts warning that, left unchecked, future versions of generative AI could create an extinction-level event. As in the end of existence. For humans.
Perhaps businesses aren't so much worried about sentient AI chatbots bringing about the end of the world as we know it (a plot point in the aforementioned tech titan letter), but they're certainly on notice about their dangers, thanks to the very concrete compromises that have already stemmed from generative AI's use.
Most infamously, Samsung developers inadvertently exposed highly sensitive corporate data by feeding it into ChatGPT. The problem? When users share data with the chatbot, the information ends up as training data for the next iteration of the LLM and could be retrieved by a third party using the right prompts.
This has given rise to a cottage industry focused on making the data that users plug into ChatGPT and other LLMs more private.
Take Private AI, which this spring launched its PrivateGPT platform, a tool that automatically redacts more than 50 types of personally identifiable information (PII) in real time as users enter ChatGPT prompts. When ChatGPT responds, PrivateGPT re-populates the PII within the answer.
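For illustration, the redact-then-restore pattern boils down to a few lines of Python. This is a toy sketch, not Private AI's actual implementation: a real product covers 50-plus PII types (typically with trained entity-recognition models, not two regexes), and the `redact`/`repopulate` helpers here are hypothetical names.

```python
import re

# Toy patterns standing in for the 50+ PII types a real product detects.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> tuple[str, dict]:
    """Swap each piece of PII for a placeholder; return redacted text plus the mapping."""
    mapping = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(dict.fromkeys(pattern.findall(prompt))):
            token = f"[{label}_{i}]"
            mapping[token] = match
            prompt = prompt.replace(match, token)
    return prompt, mapping

def repopulate(response: str, mapping: dict) -> str:
    """Restore the original PII wherever placeholders appear in the model's answer."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response

safe_prompt, pii_map = redact(
    "Email jane.doe@example.com about the claim for SSN 123-45-6789"
)
# safe_prompt -- not the raw text -- is what gets sent to the LLM;
# the canned answer below stands in for the API call.
answer = "Done. I drafted the note to [EMAIL_0] referencing [SSN_0]."
print(repopulate(answer, pii_map))
```

The key design point: the sensitive values never leave the user's side, so even if the prompt ends up in a training corpus, only the placeholders do.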
Similarly, Opaque Systems recently unveiled a confidential computing platform that allows companies to encrypt their data when used with open LLMs, leaving it accessible only to authorized users.
More options are in the offing: CalypsoAI raised $23 million this week in early-stage financing for its CalypsoAI Moderator tool, which actively monitors employee use of LLMs in real time to block sensitive data sharing; and BeeKeeperAI recently raised $12.1 million in Series A for building LLM sharing protections into its EscrowAI healthcare data protection platform.
This spring, the Cloud Security Alliance (CSA) sounded the alarm on the unfettered use of ChatGPT and its ilk in cloud environments. But far from ominously evoking the specter of unintended consequences, the organization is calling for common-sense community action now — and notes that safeguards can be put in place.
The CSA has released a whitepaper straightforwardly called "Security Implications of ChatGPT," which provides guidance across four dimensions of concern around the technology, and it also calls for an AI roadmap to be developed.
"It is difficult to overstate the impact of the current viral adoption of artificial intelligence and its long-term ramifications," said Jim Reavis, CEO and co-founder, Cloud Security Alliance, in a statement. "[They] are sure to create large-scale changes quite soon. It is Cloud Security Alliance's role to provide leadership in securing AI as a service and demonstrating its ability to significantly improve cybersecurity itself."
Some are nervous about ceding cybersecurity work to AI helpers — and concerns about LLM platforms taking jobs from humans are far from muffled.
But a fresh survey from recruitment site Upwork this week found that 64% of C-suite leaders plan to hire more as a result of generative AI's appearance on the scene, because they're recognizing new-gen AI as an "augmentation play."
"You can't really automate the entire job or task because of this tool," Kelly Monahan, managing director of Upwork's research institute, told CNBC. "You can automate parts of it and or accelerate the efficiency of the workforce as part of it."
And indeed, JPMorgan analysts said on June 23 that they expect generative AI to help the cybersecurity industry alleviate its talent shortage by helping with tasks like triaging threat intel to free up human availability — which is the goal of AI helpers like Microsoft's Security Copilot, for instance. And generative AI could help to upskill existing workers by walking them through training in a more accessible way.
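As a rough sketch of what LLM-assisted triage means in practice, consider the following. The prompt and the `llm()` stub are illustrative assumptions, not Security Copilot's actual interface:

```python
# Sketch of LLM-assisted alert triage: the model pre-sorts alerts so analysts
# see the likely-urgent ones first. llm() is a stand-in for whatever
# chat-completion client is actually in use.

TRIAGE_PROMPT = (
    "You are a SOC triage assistant. Given the alert below, answer with "
    "exactly one word: CRITICAL, SUSPICIOUS, or BENIGN.\n\nAlert: {alert}"
)

def llm(prompt: str) -> str:
    """Stand-in for a real chat-completion API call."""
    raise NotImplementedError

def triage(alerts: list[str]) -> dict[str, list[str]]:
    buckets: dict[str, list[str]] = {"CRITICAL": [], "SUSPICIOUS": [], "BENIGN": []}
    for alert in alerts:
        verdict = llm(TRIAGE_PROMPT.format(alert=alert)).strip().upper()
        if verdict not in buckets:
            verdict = "SUSPICIOUS"  # unparseable model output goes to a human
        buckets[verdict].append(alert)
    return buckets
```

Note that nothing is dropped: the model only changes the order in which humans look at things, which is what "augmentation, not automation" means here.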
One of the reasons some get the jitters about using ChatGPT and others for mission-critical cybersecurity work is the very real phenomenon of AI hallucinations, which is when an LLM returns an answer or solution that seems feasible but is actually made up out of whole cloth, with no basis in fact or reality.
Researchers at Endor Labs decided to see how big of a problem it is by testing how well LLMs perform at open source software malware review (perhaps not a "mish-crit" function, but it's a decent test case).
The experiment asked two LLMs (OpenAI's GPT-3.5 and Google's Vertex AI) to classify code packages in npm, PyPI, and other repositories as malicious or benign, assigning each a risk score on a scale of 0 to 9, from barely to highly suspicious.
In 488 out of 1,098 assessments of the same code snippets, both models produced the exact same risk score. In another 514 cases, the scores differed by only one point. In the remaining 96, though, the scores diverged by as many as nine points, with odd hallucinations about obfuscated code snippets' functionality, structure, and documentation.
The verdict? Too many false positives and negatives to let LLMs loose on the task with no human oversight, but they're more grounded in reality than feared.
"We continue evaluating the use of LLMs for all kinds of use-cases related to application security," the researchers wrote in the report. "And we continue to be amazed about high-quality responses … until we're amused about the next laughably wrong answer."
Even though the security community is starting to be more realistic about what generative AI can and cannot (and should and shouldn't) do, the fact remains that the public at large and most enterprise workers remain fascinated and curious about ChatGPT, Google Bard, Microsoft Bing's built-in (some would say terrifying) chatbot and the rest.
A survey this week from employment firm Mason Frank found that Google search volume for "What is AI" in the US has surged by an astounding 643% over the past year (last year's edition of the same survey measured a 233% increase over 2021).
Amidst this kind of sweeping interest, world governments are stepping into the fray, with certain ramifications for cybersecurity down the road. President Biden, for instance, said last week that he will be looking into the risks AI poses to national security.
"My administration is committed to safeguarding Americans' rights and safety while protecting privacy, to addressing bias and misinformation, to making sure AI systems are safe before they are released," Biden said at an event in San Francisco.
UK Prime Minister Rishi Sunak, meanwhile, has announced that Britain will hold a global summit on AI security this fall; and over in the EU, lawmakers are drafting rules and global standards for the use of LLMs across industries.