Q&A: Generative AI Comes to the Middle East, Driving Security Changes
The influx of generative AI is pushing security leaders to learn new skills and defensive tactics.
November 12, 2023
The adoption of generative AI (GenAI) in Middle East markets is on the rise, with the release of the Arabic large language model (LLM) Jais this past summer and ChatGPT creator OpenAI's announcement of a partnership with the Abu Dhabi government.
The timing is appropriate for a discussion on the subject, and at the upcoming Black Hat Middle East conference, Srijith Nair, CISO of Careem, will lead a panel on GenAI in the region: "Defense Against the Dark Arts: Generative AI and Enterprise Risk."
Dark Reading sat down with Nair to discuss the security elements of the introduction of GenAI, from both the attack and defense perspectives.
Dark Reading: How much do you think generative AI is a business issue, or is it something happening in society that is slowly "invading" business and ultimately cybersecurity?
Srijith Nair: Generative AI is a wider societal phenomenon and, as such, is impacting several aspects of our life. Business, and ultimately cybersecurity, is being affected as an extension of that societal impact. You can see disruptions already across various fields (arts, coding), and cybersecurity is not exempt from the impact of this shift. The jury is out on whether this is an evolution or revolution — time will tell.
DR: How well do you think cybersecurity is keeping up with the trend of generative AI?
SN: It's going to impact the cybersecurity landscape in multiple ways, from enabling fraud to making it easier to conduct phishing attacks against specific individuals. On the flip side, the technology enables wider tooling capabilities for security services. Writing secure code is getting easier through the judicious use of the AI-based capabilities of coding platforms.
DR: We’ve heard that attackers can benefit from it and use it to better craft attacks and, specifically, phishing messages. Can the defense side keep up?
SN: CSOs have to find ways to enable and adapt to new kinds of technology innovation. One needs to be able to come up with an approach that allows people to use these tools but use them securely — and that's a very interesting challenge at this point in time.
Generative AI brings with it a lot of new vectors and threats, but it also gives us a lot more tools. These tools will not only enable us to counter the new risks but also let us shift left more aggressively — this makes it interesting for security practitioners, because now you're able to show your engineering teams how to write code securely, enable your SOC teams to be more proactive and scale better, and so on. People won't have to go out of their way to do things securely; it becomes part of their ready-to-use arsenal.
DR: Talk of machine learning and AI has been around for the best part of the last decade, so is generative AI just adding a lot of complexity?
SN: That is indeed true. Machine learning and its models are not new at all. The models, typically categorized as supervised, unsupervised, semi-supervised, or reinforcement learning, have unique characteristics and applications. However, these techniques have traditionally focused on recognizing patterns and making predictions rather than on generating new, original content.
Generative AI goes one step further. These systems not only recognize patterns but can then generate new content that mimics the data they were trained on. The biggest shift, though, is probably that generative AI has democratized the use of AI. With use cases closer to the casual user, generative AI has found a strong foothold in our day-to-day lives.
DR: Is there enough capability to learn about how to use these technologies from a security perspective, how they can be used and what can be done with them?
SN: You need to train your data and AI teams to do things securely, but at the same time, as a security team, you need to upskill as well, because as a CSO you are the controlling function. You are expected to spot whether teams are doing the right thing — so you need to know enough to challenge them and say, "Hey, is this right?"
A lot of the time, it ends up being about upskilling your security team, unless they have already been genuinely hands-on with the technology, which would surprise me. The last two years have been so fast-moving when it comes to generative AI that I would be very surprised if any security team out there could claim to be completely on top of it.
DR: Could AI be the savior for the security staffing issue we’ve been talking about for so many years?
SN: AI would definitely be a great help in scaling and automating security controls to the level necessitated by the increasing complexity of the systems being protected, the heterogeneous environments involved, and the automation and scale used by threat actors. However, calling it a "savior" or a silver bullet would be a step too far, at least at this point in time.