How Businesses Can Get Ready for AI-Powered Security Threats
Organizations need to take steps now to strengthen their cyber defenses.
Anxiety over the artificial intelligence tool ChatGPT is spreading across sectors, from education to business to cybersecurity. Numerous articles have demonstrated how readily ChatGPT can draft phishing emails and how well it performs on medical and business school exams. Its ability to write, converse, and answer questions across a wide range of subjects as competently as many humans, combined with its ability to find vulnerabilities in computer systems, has raised legitimate concerns that it could be used to run effective phishing campaigns at scale.
Today it may look like a toy, a parlor trick people pull out to show how far AI has come, but businesses and government institutions should be worried about what happens in two to five years, as AI models keep improving and bad actors take advantage of what they can do. Organizations need to take steps now to strengthen their cyber defenses against both current threats and what's lurking around the corner.
AI's Versatility Creates Risks
ChatGPT, created by OpenAI, has been available for queries since November 2022 in an open-ended beta testing period. OpenAI, a research and deployment company that pursues innovations in AI, says it created the chatbot to interact in a conversational way, study user feedback, and learn its own strengths and weaknesses. It's been used to explore scientific subjects, help write a poem or a song, and even apply for a job. ChatGPT does make mistakes. The programming Q&A site Stack Overflow temporarily banned ChatGPT-generated answers because they were often incorrect, deciding that posting them would be "substantially harmful" to Stack Overflow users. But the tool is learning and improving.
The Next Stage of AI Threats
The most immediate cybersecurity concerns over ChatGPT are that it can give neophyte cyberattackers the ability to write phishing emails, exploit buffer overflows, and carry out other basic cyberattacks. But in a few years, these threats will become much more serious.
AI tools will make it easier for malicious insiders, or cybercriminals who have gained brokered access, to engineer and manipulate intracompany dialogue, sending precisely targeted phishing emails that look like legitimate requests from a person inside the company.
What Businesses Can Do to Protect Themselves
There are several steps businesses can take to adopt a security-first culture and protect themselves from the kind of threats AI poses, now and in the future:
Make sure the business leans toward skepticism. People at every level of a company should question what they see in email or any other communication channel. Phishing is so pervasive because it so often works, accounting for 73% of social engineering attacks in North America, according to Verizon's "2022 Data Breach Investigations Report." Employees should be trained to look at any email, Slack invitation, or other communication with a critical eye and to recognize the signs that a message is fraudulent.
Deliver continuous, real-time cybersecurity training. Almost every organization has a cybersecurity training program that employees must take annually. Given the number of breaches that stem from phishing attacks, it's clear this is not enough. Organizations need to help employees identify phishing attacks in real time, flagging it the moment they click on a fraudulent link or download privileged information onto a thumb drive. For the sake of productivity, employees will look for workarounds, and cybersecurity training needs to happen in the moment to remind them why protocols exist in the first place.
Establish some Internet borders to reduce unnecessary use. Workplaces already do this to some extent, such as by blocking offensive websites or forbidding Internet use that could put company data in jeopardy. If they have not done so already, businesses can establish a written policy detailing acceptable and forbidden Internet use. Programs are available that will limit Internet use to approved websites, and routers can be configured to block sites (a simple illustration of an allowlist check appears after this list). Tracking and logging Internet use can also act as a deterrent.
Improve corporate security policies and actually enforce them. Security transformation does not happen in days. It happens over months and years, and it requires a cultural change in how everyone in the organization thinks about cybersecurity. Today's best security practices can be effective, but only if fully implemented and followed. As with other security steps, businesses should communicate consistently about security, reminding staff of what's expected of them.
Question current standard practices. One of the most common justifications in IT is, "We've always done it that way." That is the worst possible rationale for any security practice. An essential component of a security-minded culture is a willingness to change processes and implement new tools to keep up with the ever-changing cyber threat landscape. Be ready to consider more secure and efficient approaches to existing protocols.
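To make the idea of limiting Internet use to approved websites more concrete, here is a minimal sketch, in Python, of the kind of allowlist check a web-filtering proxy or endpoint agent might perform before letting a request through. The domain names and the is_request_allowed function are hypothetical placeholders for illustration, not a reference to any specific product.

from urllib.parse import urlparse

# Hypothetical list of company-approved sites; real deployments would
# manage this centrally rather than hard-coding it.
APPROVED_DOMAINS = {
    "intranet.example.com",
    "docs.example.com",
    "github.com",
}

def is_request_allowed(url: str) -> bool:
    """Return True only if the URL's host is an approved domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in APPROVED_DOMAINS)

if __name__ == "__main__":
    for url in ("https://docs.example.com/policy", "http://unknown-site.io/login"):
        verdict = "allowed" if is_request_allowed(url) else "blocked and logged"
        print(f"{url} -> {verdict}")

In practice, most organizations will rely on commercial web-filtering or DNS-filtering products rather than a hand-rolled script, but the underlying allow-or-block decision, paired with logging of what was blocked, looks much like this.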
Building a Culture Around Security
Organizations see greater success against advanced AI threats when they empower their workforce, which starts with strengthening communication among IT, HR, security teams, and employees about anything concerning risk, data privacy, Internet use, and more. In today's threat environment, security is everyone's responsibility.