Chat Cybersecurity: AI Promises a Lot, but Can It Deliver?
Machine learning offers great opportunities, but it still can't replace human experts.
Machine intelligence has captured the human imagination since the invention of the first modern computer in the mid-20th century. And the latest milestone in artificial intelligence, ChatGPT, has reinvigorated our enduring interest in AI's ability to simplify the way we work.
ChatGPT's advancements in machine learning are impressive: Its high-quality outputs feel close to human. The strides it represents for AI systems signal that even more remarkable achievements are close to reality. But while ChatGPT suggests that the full potential of AI is drawing nearer, the reality is it still isn't quite here. Right now, there are great opportunities for machine learning to augment human intelligence, but it still cannot replace human experts.
The Obstacles Between ChatGPT and the Future It Promises
Complete reliance on AI would only make sense if a given AI technology were proven more effective than every other tool serving the same purpose, and AI has not yet developed to the point where it can run entirely autonomously. Most of the AI we see today is artificial narrow intelligence (ANI): AI designed for a single task, such as a chatbot or image generator. ANI still requires human supervision and manual configuration to function, and it is not always trained on the latest data and intelligence. ChatGPT itself was trained in part through reinforcement learning from human feedback (RLHF), in which human reviewers rate and rank the model's responses so that it learns to prefer the better ones.
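To make the RLHF idea concrete, here is a minimal, hypothetical sketch of the pairwise preference loss commonly used to train a reward model, the component that captures which responses humans prefer. It uses PyTorch; the tiny model and random embeddings are illustrative stand-ins, not anything from ChatGPT's actual pipeline.

```python
# Minimal sketch of RLHF reward-model training (PyTorch).
# TinyRewardModel and the random embeddings are invented stand-ins
# for a real language model's representations and real human rankings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRewardModel(nn.Module):
    """Maps a response embedding to a single scalar reward."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.score(emb).squeeze(-1)

model = TinyRewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for embeddings of a human-preferred response and a rejected one.
chosen = torch.randn(4, 16)
rejected = torch.randn(4, 16)

# Bradley-Terry-style pairwise loss: push the preferred response's
# reward above the rejected response's reward.
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
loss.backward()
optimizer.step()
```

In a full RLHF pipeline, a reward model like this then guides further fine-tuning of the language model itself, typically via a policy-optimization algorithm such as PPO.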
Our human inputs into AI systems pose a significant challenge as well. Because humans build and train these systems, machine learning models absorb our personal views, culture, upbringing, and world perspectives, which limits our ability to create models that are entirely free of bias. Improperly used and trained, ANI can further ingrain those biases into our work, systems, and culture. We've even seen bug bounty programs dedicated to rooting out AI bias, underscoring these challenges. Complete dependence on AI would be problematic unless we can figure out how to build more robust systems that mitigate human bias.
It's Foolish to Rely Fully on AI for Cybersecurity
The growing threat landscape, the diversification of attack vectors, and highly resourced cybercriminal groups demand a multipronged approach that leverages the strengths of both human and machine intelligence. A 2020 survey found that 90% of cybersecurity professionals believed security technology is not as effective as it should be and is partially responsible for attackers' continued success. Fully trusting AI will only exacerbate many organizations' already significant overreliance on these ineffective automation and scanning tools. Worse, AI-generated vulnerability reports riddled with "confidently wrong" false positives can add friction to remediation efforts.
Technology has its place, but nothing will compare to what a skilled human with a hacker mindset can produce. The high-severity vulnerabilities many hackers find require creativity and a contextual understanding of the affected system. Comparing a vulnerability report written by ChatGPT with one developed end to end by a hacker demonstrates this gap in proficiency: when tested, ChatGPT's report was repetitive, lacked specificity, and included inaccurate information, while the hacker's offered full context and detailed mitigation guidance.
AI Can Supercharge Your Security Team
There is a middle ground. Artificial intelligence can accomplish tasks faster and more efficiently than any one person. That means AI can make work much easier for cybersecurity professionals.
Ethical hackers already use existing AI tools to help write vulnerability reports, generate code samples, and identify trends in large data sets. The diverse skill set of the hacker community fills the capability gaps of AI. Where AI really helps is in providing hackers and security teams with the most critical component of vulnerability management: speed.
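As a hedged illustration of that last point, the sketch below shows one simple way a team might machine-surface trends in a large pile of reports: clustering vulnerability report titles with TF-IDF and k-means via scikit-learn. The titles and cluster count are invented for the example.

```python
# Illustrative only: cluster invented vulnerability report titles to
# surface recurring bug classes. Uses scikit-learn's TF-IDF + k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

report_titles = [
    "Reflected XSS in search parameter",
    "Stored XSS in profile bio field",
    "SQL injection in login form",
    "Blind SQL injection via id parameter",
    "IDOR exposes other users' invoices",
    "IDOR in order history endpoint",
]

# Vectorize the titles, then group similar reports together.
vectors = TfidfVectorizer(stop_words="english").fit_transform(report_titles)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)

for cluster_id, title in sorted(zip(kmeans.labels_, report_titles)):
    print(cluster_id, title)
```

A clustering pass like this won't find novel bugs, but it can quickly show a team where duplicate or related reports are piling up.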
With nearly half of organizations lacking confidence in their ability to close their security gaps, AI can play an instrumental role in vulnerability intelligence. By helping teams process large data sets faster, AI can cut time to remediation and supercharge how quickly internal teams analyze and categorize vast swaths of their unknown attack surface.
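Here, too, a small sketch can make the idea tangible. Assuming a team has even a modest set of already-labeled assets, a lightweight classifier can triage newly discovered, unknown ones; everything below (asset descriptions, labels) is hypothetical.

```python
# Toy sketch: categorize newly discovered attack-surface assets with a
# Naive Bayes text classifier (scikit-learn). All data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

known_assets = [
    "nginx web server serving marketing site",
    "apache web server hosting customer portal",
    "postgres database for billing records",
    "mysql database behind internal CRM",
    "jenkins CI server for build pipelines",
]
labels = ["web", "web", "database", "database", "ci"]

# Train on assets the team has already categorized.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(known_assets, labels)

# Label assets an external scan just turned up.
new_assets = [
    "exposed mysql instance on old subnet",
    "forgotten apache server on legacy subdomain",
]
print(clf.predict(new_assets))
```

The point isn't the specific model; it's that even simple automation frees humans to spend their time on the findings that require judgment.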
Narrow AI Could Help Address Major Industry Challenges
We're already seeing governments recognize the potential of narrow AI: The Cybersecurity and Infrastructure Security Agency (CISA) lists AI as one possible vulnerability intelligence solution for software supply chain security. When AI handles the daily minutiae of important cybersecurity work, the humans behind the technology are freed to pay closer attention to their attack surface, remediate vulnerabilities more effectively, and build stronger strategies to defend against cyberattacks.
Soon, ANI could unlock even more potential from the hackers and cybersecurity professionals who use it. Instead of worrying about AI taking their jobs, security professionals should cultivate a diverse skill set that complements AI tooling while maintaining a keen awareness of its current limitations. AI is far from replacing human thought, but that doesn't mean it cannot help create a better future. For an industry with a significant skills gap, AI could make all the difference in building a safer Internet.