
ChatGPT promises to transform all sorts of corporate business functions, but your business needs to be prepared to address the new risks that come with it.

August 7, 2023

4 Min Read

By Matt Kelly, Writer and Contributor, Hyperproof

The risks around ChatGPT are already here — and compliance officers need to prepare for battle against them immediately.

In many ways the technology itself is great: ChatGPT could be a godsend for tasks such as translation, content writing, coding, and more. But the security, compliance, and business risks that come along with such a powerful technology are just as far-reaching. Compliance officers have an enormous amount of work to do if their companies want to avoid being overwhelmed.

What Risks Does ChatGPT Create for Companies?

The most immediate challenge for compliance officers will simply be to understand what all those ChatGPT risks are. They'll come in different forms and from all directions. For example, you're likely to encounter:

  • Internal security risks as employees use ChatGPT or similar applications to do tasks like writing software code.

  • External security risks, because attackers will use ChatGPT to write malware, fraudulent business emails, more convincing and grammatically correct phishing lures, and similar threats.

  • Compliance risks if employees use ChatGPT in ways that might violate regulatory standards.

  • Operational risks because, as wondrous as ChatGPT is, it still gets many basic facts wrong.

  • Strategic risks as your company and your competitors also search for the opportunities that ChatGPT brings.

To harness the power of ChatGPT as fully and wisely as possible, you could assemble a cross-enterprise group to work its way through the categories above — defining new risks along the way and logging those risks in a risk register.
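The risk-register exercise above can be sketched in a few lines of code. This is a hypothetical, minimal structure for illustration only; the category names, risk entries, and scoring scale below are assumptions, not the schema of any particular GRC product:

```python
from dataclasses import dataclass

# Risk categories drawn from the bullet list above (illustrative labels).
CATEGORIES = {"internal-security", "external-security",
              "compliance", "operational", "strategic"}

@dataclass
class Risk:
    title: str
    category: str    # one of CATEGORIES
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (severe)
    owner: str       # team accountable for mitigation

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring, a common convention.
        return self.likelihood * self.impact

# Example entries a cross-enterprise group might log (made up for illustration).
register: list[Risk] = [
    Risk("Employees paste proprietary code into ChatGPT",
         "internal-security", 4, 4, "AppSec"),
    Risk("AI-polished phishing emails evade awareness training",
         "external-security", 5, 3, "SecOps"),
    Risk("Customer data sent to ChatGPT breaches privacy rules",
         "compliance", 3, 5, "Privacy"),
    Risk("Factually wrong ChatGPT output reaches customers",
         "operational", 4, 3, "Ops"),
]

# Review the register highest-score first, as a risk committee might.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  [{risk.category}] {risk.title} -> {risk.owner}")
```

Even a toy register like this makes the point: each risk gets a category, a score, and a named owner, so nothing identified by the working group falls through the cracks.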

What Policies Should You Have in Place to Fight ChatGPT Risk?

Once you develop a list of risks that ChatGPT could bring to your business, the next step is to reduce those risks by either adopting entirely new policies or (more likely) updating existing ones so that they still work in the era of ChatGPT.

You'll need to understand your existing compliance and security risks, along with the policies you currently use to manage them. You'll also need to consider the privacy obligations you have around customer data and tailor your ChatGPT usage policies to maintain that compliance. Then adopt new ChatGPT-oriented policies to address the new risks you've identified, and document all that effort so you have an audit trail at the ready if auditors, regulators, business partners, or the public ever ask for it.

Generative AI Is Here to Stay. What's Next for CISOs?

Generative artificial intelligence (AI) is here to stay, and eventually it will become a tool used across the enterprise. This means that assessing its risks and responding with new policies is only the beginning. Ultimately, chief information security officers (CISOs) will need to work with senior management and the board to govern how this technology is woven into everyday operations.

Several implications flow from that point. First and most practically, CISOs will need to deploy governance frameworks for artificial intelligence. The good news is that several such frameworks already exist, including one published by NIST earlier this year and another released by COSO in 2021. Neither is geared toward ChatGPT specifically, but they do help CISOs and other risk managers understand how to start building processes to govern ChatGPT or any other generative AI app that comes along.

More good news is that governance, risk, and compliance (GRC) tools already exist to help you put those frameworks to use. The basic exercise here is to map the AI frameworks' principles and controls to those of other risk-management frameworks you might already use and to controls that already exist within your enterprise. Then you can get on with the work of implementing new controls as necessary and creating an audit trail to show your work.
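The mapping exercise described above can be sketched as a simple crosswalk. Note that everything here is illustrative: the function names echo the style of NIST's AI Risk Management Framework, but the IDs and descriptions are made up for this example, not real NIST or COSO control identifiers:

```python
# Hypothetical AI-framework principles (IDs invented for illustration).
ai_framework = {
    "GOVERN-1": "Policies for acceptable generative-AI use exist and are enforced",
    "MAP-2": "AI use cases and their data flows are inventoried",
    "MEASURE-3": "AI output accuracy is monitored for high-stakes tasks",
}

# Hypothetical controls already in the enterprise's control library.
existing_controls = {
    "POL-07": "Acceptable-use policy covers third-party SaaS tools",
    "INV-12": "Data-flow inventory maintained for all processing systems",
}

# The crosswalk: map each AI principle to existing controls where one fits.
crosswalk = {
    "GOVERN-1": ["POL-07"],
    "MAP-2": ["INV-12"],
    "MEASURE-3": [],  # nothing fits: this is a gap needing a new control
}

# Principles with no mapped control are the gaps to remediate (and to
# document, so the audit trail shows the analysis was done).
gaps = [principle for principle, controls in crosswalk.items() if not controls]
print("Control gaps requiring new controls:", gaps)
```

The output of a crosswalk like this is exactly what feeds the next step: the gaps become the list of new controls to implement, and the mapping itself becomes part of the audit trail.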

In the final analysis, senior management and the board will be the ones who decide how to use ChatGPT within your enterprise because ChatGPT remains only a tool to help people achieve their objectives. The CISO's role is more about how to use those tools in a risk-aware manner and meet regulatory obligations at the same time.

Then again, with all the risks and rewards generative AI promises, that's plenty enough work already.

About the Author

Matt Kelly

Matt Kelly is a writer and contributor to Hyperproof.io, a SaaS platform that empowers compliance, risk, and security teams to scale their workflows. Kelly was named a "Rising Star of Corporate Governance" by the Millstein Center for Corporate Governance in its inaugural class of 2008, and was named to Ethisphere's "Most Influential in Business Ethics" list in 2011 (no. 91) and 2013 (no. 77). In 2018, he won a Readers' Choice award from JD Supra as one of the top 10 authors on corporate compliance.


