ChatGPT Could Create Polymorphic Malware Wave, Researchers Warn
The powerful AI bot can produce malware without malicious code, making it tough to mitigate.
![ChatGPT displayed on a mobile phone with OpenAI logo in background](https://eu-images.contentstack.com/v3/assets/blt6d90778a997de1cd/blt5b95180e6b4674b8/64f156de90b8a54284e7f61e/chatGPT_Greg_guy_Alamy.jpg?width=1280&auto=webp&quality=95&format=jpg&disable=upscale)
The newly released ChatGPT artificial intelligence bot from OpenAI could be used to usher in a new dangerous wave of polymorphic malware, security researchers warn.
One of the many spectacular tricks ChatGPT has been able to pull off is writing highly advanced malware that contains no overtly malicious code, making it difficult to detect and mitigate, researchers at CyberArk explained in their recent threat research report.
The CyberArk team also detailed how the chatbot can be used both to generate injection code and to mutate it.
This emerging wave of cheap, easily produced polymorphic malware is something cybersecurity professionals should pay close attention to, the analysis added.
"As we have seen, the use of ChatGPT's API within malware can present significant challenges for security professionals," the report said. "It's important to remember, this is not just a hypothetical scenario but a very real concern."