6 Ways Cybersecurity Is Gut-Checking the ChatGPT Frenzy
Generative AI chatbots like ChatGPT are the buzziest of the buzzy right now, but the cybersecurity community is maturing in its assessment of where the technology should fit into our lives.
June 28, 2023
Generative AI, ChatGPT and OpenAI, large language models (LLMs) — these are all now near-daily buzzwords and phrases heard in conversations across the cybersecurity community. It's clear that chatbot-based artificial intelligence (AI) continues to fuel a particularly heady version of the technology hype cycle, but there's also an astounding amount of practical activity.
To wit: Security vendors large and small have integrated AI chatbots into their offerings (looking at you, Charlotte from CrowdStrike); investment in GPT-based AI security is one of the most vibrant areas of startup funding these days; and it's impossible not to stumble across research outlining potential generative AI-related cybersecurity threats and how to combat them (phishing and deepfakes and malware, oh my!).
It's a lot.
In this featured piece, Dark Reading leaves behind the hill of impossible expectations for a bit and takes a real-world look at how the security conversation around this new generation of AI is starting to deepen.
That includes sober assessments from enterprise users and analysts, as well as a look at efforts to address some of the cyber-risks that came to light in the first flush of irrational exuberance following ChatGPT's launch last November.