Beyond ChatGPT: Organizations Must Protect Themselves Against the Power of AI

Artificial intelligence-powered threats are already affecting businesses, schools, hospitals, and individuals, and the problem will only get worse as AI advances.

Fred Kwong, Vice President & Chief Information Security Officer, DeVry University

July 25, 2023


Society is on the frontier of new possibilities with artificial intelligence (AI). However, we are also on the brink of AI capabilities that will far outpace the cybersecurity defenses of organizations and governments around the globe.

According to Thales' "2023 Data Threat Report," more than half of enterprises lack a formal plan for handling ransomware attacks. This highlights an alarming fact: Most organizations, including public and private businesses, universities, and government agencies, and the data they hold, are vulnerable at a time that is revolutionary for end users and, unfortunately, for threat actors as well.

The Clock Is Ticking

Technology continues to reach new heights. ChatGPT was introduced as a user-friendly advance in generative AI just eight months ago. The public has responded with enthusiasm, adoption, experimentation, and varying opinions about AI's ethics and potential impact. Nevertheless, AI will inevitably keep improving on its already impressive ability to mimic reality, and that is the profound technological shift still to come.

Therefore, the clock is ticking. Defenses equal to what's coming cannot arrive soon enough. Vast amounts of confidential data and information stored electronically by organizations and governments lie vulnerable to new threats powered by AI engines. These engines can scrape the Dark Web far faster than a human threat actor can, increasing the speed at which data found there can be exploited.

AI-powered ransomware is coming, and the automation behind it could produce a terrifying aftermath. Along with wondering how far AI's capabilities will ultimately go, we are left to ponder how much AI-powered threats will undermine cybersecurity defense efforts.

AI Spells the End of Human Constraints on Cyber Threats

Shifts in the cyber-threat landscape are certain now that AI is here to stay. An AI-powered cyber threat will be capable of finding 20,000 ways to exploit a single vulnerability in an organization's system, learning from each attempt and constantly modifying its attack vector until it succeeds. This kind of capability will dramatically speed cybercriminals' ability to create or weaponize vulnerabilities. Additionally, AI-driven threats will not be halted by the things that pause human threat actors; they do not need to stop or sleep.

Moreover, AI algorithms that can convincingly imitate a person's voice, appearance, and behavior will enable phishing attempts that are more believable, realistic, and personalized. Facial recognition is one broadly discussed application of deep learning, and the same class of techniques produces strikingly realistic deepfakes, which can be created from digital footprints such as virtual meetings, online videos, and podcasts.

But AI's power to make the most technologically difficult tasks far easier for anyone is what will drive an increase in threat actors. The impending arrival of new types of cyber threats that can attempt to breach vulnerabilities around the clock from every possible angle, combined with more threat actors who can execute nefarious actions without being tech-savvy, will ensure ransomware remains a billion-dollar industry.

AI-Powered Threats Require AI-Powered Defenses

Whatever form they take, AI-powered threats must be met with AI-powered defenses. Organizations and governments must be open to new technologies to defend their data; legacy cyber-detection capabilities will not suffice. The current threat landscape is already seeing an increase in the volume and severity of ransomware attacks, according to 47% of the IT professionals surveyed in Thales' recent report. Consider the dire results: a hospital in rural Illinois shuttered in late June, a closure attributed in part to a multiweek ransomware attack more than two years earlier; or the recent Cl0p campaign, in which Russian cybercriminals exploited a vulnerability in the widely used MOVEit file transfer software, affecting businesses; two divisions within the Department of Energy; state governments in Minnesota, Illinois, and Oregon; and over a dozen universities.

In fact, higher education is a frequent target of cyberattacks because of its stockpiles of personal and financial data. In its "2023 State of Ransomware" report, Sophos found that the education sector had the highest rate of ransomware attacks of any industry over the past year, with 79% of higher education institutions reporting they had been hit. In the age of AI, higher education would do well to stay aware of evolving cyberattack methods, increase cybersecurity resources, and leverage new opportunities to educate people about threats.

Prepare Now for a Crisis Later

The worst time to plan for a crisis is when the crisis is already happening. Organizations should continuously evaluate and test their plans for handling and responding to breaches. Even non-AI-powered attacks are already having a domino effect in the physical world: University of California San Diego researchers concluded that "cyberattacks on hospitals should be considered a regional disaster" after their hospital experienced resource constraints following a breach at a nearby facility. Against the speed of AI-driven threats, organizations will need to double down on data resiliency so they can defeat threats, restore systems, and keep functioning.

It's hard for many organizations to recognize what's necessary in the face of change, especially in the cyber world, as evidenced by the 51% of enterprises that lack a formal plan for ransomware. AI will transform cybercriminal activity, creating the need for AI-powered defenses, and organizations will need to test extensively to get AI-led cybersecurity right. Remember, much is at stake. We can't afford to hinder our ability to defend when the time comes.

About the Author(s)

Fred Kwong

Vice President & Chief Information Security Officer, DeVry University

Dr. Fred Kwong has been in the information security and technology field for more than 20 years, working in the education, financial, telecommunication, healthcare, and insurance sectors. He is an award-winning thought leader in security and currently serves as VP and Chief Information Security Officer at DeVry University. He is a member of several advisory boards, is a frequent speaker at national security forums on cybersecurity and information technology, and is often asked to consult on matters of security and leadership.

