Top 5 Myths of AI & Cybersecurity
Organizations looking to maximize their security posture will find AI a valuable complement to existing people, systems, and processes.
COMMENTARY
The global rise of increasingly sophisticated cybercrimes creates daily challenges for the cybersecurity industry, as security professionals grapple with new and evolving attacks, complex IT architecture, and the integration of artificial intelligence (AI) into nefarious actors' tactics, techniques, and procedures (TTPs). As a result, cybersecurity practitioners feel a sense of urgency to stay at the forefront of technological advances to defend against a growing arsenal of exploits.
In this environment, a methodical, intentional approach to using AI in cybersecurity will help organizations avoid falling prey to hype and groupthink, including these five common myths of AI and cybersecurity:
1. You'll fall behind if AI isn't your primary solution for cybersecurity.
While AI can enhance cybersecurity tasks by analyzing large volumes of data to detect patterns that indicate malicious activity, it can also generate false positives and false negatives, be trained on bad data, or behave in unexpected ways. Many common cyber threats can still be effectively mitigated through basic security practices like strong passwords, regular patching, and employee awareness of social engineering. Incorporating AI into security solutions is not a license to abandon proven tools or established best practices. Sophisticated attackers will find ways to evade AI-based detection, so AI cannot be the only line of defense. Time-tested fundamentals like organizational culture, leadership commitment, and employee training remain critical to a robust organizational risk management practice.
2. Threat actors are using AI, which will lead to an exponential increase in cyberattacks.
AI is undoubtedly a powerful tool that can be used for both good and ill, but it's not a guaranteed game-changer for threat actors. The cybersecurity landscape is a dynamic battleground, and the interplay between AI-powered attacks and defenses is complex. AI may increase the speed and sophistication of certain attacks, such as more convincing phishing lures; however, it has not fundamentally changed the attack vectors or the attack surface within organizations. A mature security posture built on foundational defensive practices continues to be a sound approach to reducing risk and thwarting attacks.
3. AI-powered tools are better, and every tool needs an AI feature.
AI-aided tools can provide powerful additions to a defensive infrastructure with quicker, more comprehensive data discovery. Left unchecked, however, AI can produce results that range from comical to outright harmful, such as Google's AI Overviews suggesting adding glue to homemade pizza sauce, or a McDonald's AI drive-thru putting bacon on ice cream. In cybersecurity, erroneous results have serious consequences, including loss of trust, excessive costs, and risks to human safety. The potential benefit of any AI-powered resource must be balanced against the potential cost of error. The safest use of AI is within a layered defense model, which allows for verification and redundancy in protective solutions, as in the sketch below.
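To make the layered-defense idea concrete, here is a minimal Python sketch, assuming a hypothetical alert record and an illustrative threshold (none of these names come from a specific product): the AI detector's score is treated as just one signal and is cross-checked against deterministic controls before any automated action is taken.

    from dataclasses import dataclass

    # Hypothetical alert record; field names are illustrative, not taken from any product.
    @dataclass
    class Alert:
        source_ip: str
        ml_score: float          # score from an AI/ML detector, 0.0 to 1.0
        matched_signature: bool  # result of a traditional rule/signature check
        on_blocklist: bool       # result of a threat-intelligence lookup

    def layered_verdict(alert: Alert, ml_threshold: float = 0.8) -> str:
        """Combine independent layers instead of trusting the AI score alone."""
        ai_flag = alert.ml_score >= ml_threshold
        deterministic_flag = alert.matched_signature or alert.on_blocklist

        if ai_flag and deterministic_flag:
            return "block"      # both layers agree, so act automatically
        if ai_flag or deterministic_flag:
            return "escalate"   # a single-layer hit goes to a human analyst
        return "allow"

    # Example: a high AI score corroborated by a blocklist hit is blocked outright.
    print(layered_verdict(Alert("203.0.113.7", ml_score=0.92,
                                matched_signature=False, on_blocklist=True)))

The point of the design is redundancy: the AI layer can surface what signatures miss, while the deterministic layer keeps a single model error from triggering an automated block on its own.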
4. You can only combat AI with AI.
AI is a valuable arrow in the cybersecurity quiver, but it's not infallible. As with any other defensive technique, attackers can exploit vulnerabilities in AI models or develop exploits to bypass AI-based defenses. Combating AI threats requires a multilayered approach that combines AI with human expertise, traditional security measures, and a strong focus on prevention, detection, and response. Using AI to combat AI can also raise ethical and legal concerns, particularly around autonomous decision-making, accountability, and the potential for misuse. These concerns must be carefully considered before implementation to ensure responsible, ethical use of AI in cybersecurity. By leveraging the strengths of both humans and AI, organizations can build a more resilient and effective cybersecurity posture, as the example below illustrates.
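As a rough illustration of pairing AI with human expertise while keeping autonomous decision-making in check, the Python sketch below routes detections through a human-in-the-loop triage step; the thresholds, action names, and fields are illustrative assumptions, not a prescribed implementation.

    import time
    from typing import Dict, List

    # Illustrative threshold and action names; these are assumptions, not a standard.
    AUTO_ACTION_CONFIDENCE = 0.95                             # AI acts alone only above this
    REVERSIBLE_ACTIONS = {"quarantine_email", "require_mfa"}  # low-impact, easily undone

    def triage(detection: Dict, review_queue: List[Dict], audit_log: List[Dict]) -> str:
        """Route a detection to automated handling or to a human analyst."""
        confident = detection["confidence"] >= AUTO_ACTION_CONFIDENCE
        reversible = detection["suggested_action"] in REVERSIBLE_ACTIONS

        if confident and reversible:
            decision = "automated"
        else:
            decision = "human_review"
            review_queue.append(detection)   # an analyst makes the final call

        # Every decision is logged so automated actions stay accountable and auditable.
        audit_log.append({"time": time.time(), "decision": decision, **detection})
        return decision

    queue, log = [], []
    print(triage({"id": "det-001", "confidence": 0.97,
                  "suggested_action": "quarantine_email"}, queue, log))  # automated
    print(triage({"id": "det-002", "confidence": 0.71,
                  "suggested_action": "disable_account"}, queue, log))   # human_review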
5. AI is going to replace your job.
Companies that are hiring fewer entry-level people for cybersecurity roles are not doing so because AI has eliminated those jobs; rather, they are constrained by shrinking budgets and a need to hire people who can make an immediate impact when they come on board. AI excels at automating repetitive tasks, analyzing vast amounts of data, and identifying patterns, but human intuition and judgment are still needed to interpret those insights, make critical decisions, and adapt strategies to combat evolving threats. At its best, AI frees humans to focus on creative and strategic thinking and complex problem-solving. The prominence of AI has also created new jobs, such as AI engineers and AI ethicists, to manage AI systems. AI-related roles, along with roles in cybersecurity, will continue to emerge and evolve. AI won't replace people, but people who know how to work with AI may replace people who don't.
Effective cybersecurity requires a multilayered approach that combines technology, processes, and people. Relying solely on AI for cybersecurity can create a false sense of security, and AI-based cybersecurity solutions can be costly and complex. While AI is transforming the workforce, it's unlikely to lead to widespread job displacement. Instead, it will likely change the nature of work and require adaptation and the development of new skills.
Progressive companies will see the value of investing in training programs and implementing policies that support workforce transition. Organizations looking to maximize their security posture with enhanced value, productivity, creativity, and innovation will find AI a valuable complement to existing people, systems, and processes.