The Need to Secure AI Use Is Real. Are Organizations Prepared?

Generative AI brings both excitement and anxiety. Learn how to build a multifaceted approach to enable the secure use of AI in your organization.

April 15, 2024

By Rob Lefferts, Corporate Vice President, Threat Protection, Microsoft

Just over a year into the generative AI (GenAI) era, the technology is already making waves across industries, with many organizations actively deploying or experimenting with it. A May 2023 internal Microsoft study indicates that 93% of businesses are either implementing or developing an AI strategy. But excitement is mixed with anxiety: the same technological advances that drive GenAI innovation and opportunity also bring new security and governance concerns.

The road to AI adoption has leaders navigating uncharted territory, with concerns about sensitive data leakage, harmful or biased outputs, regulatory uncertainty, and the challenge of protecting custom-built GenAI apps.

Following are some key considerations in developing a strategic approach for embracing AI securely and responsibly.

Block or Adopt AI?

In response to these concerns, many security leaders anticipate continued restrictions on AI use in the workplace. Some would prefer to delay GenAI adoption altogether, since every new technology they must secure and manage adds to an already heavy load. But this wave of change will be hard to resist, and it is a mistake to stifle the innovation and productivity that GenAI brings.

Inevitably, if we fail to address GenAI security concerns effectively and simply try to ban it in the workplace, users will seek alternative, less secure methods to access and use GenAI apps. For example, users might use GenAI apps through unmanaged devices, over unsecured networks, or by logging in with personal credentials while handling sensitive data.

Instead of trying to block the use of AI, organizations can take a more proactive approach to mitigate and manage risks by gaining visibility into AI use within their environments and implementing corresponding controls. It is about finding the sweet spot of leveraging AI's potential while implementing security and compliance controls to enable secure and responsible adoption.

Securing GenAI Usage: A Multifaceted Approach

While some GenAI apps such as Copilot for Microsoft 365 are designed with built-in controls to adhere to existing privacy, security, and compliance commitments, such as the General Data Protection Regulation (GDPR), it is beneficial for organizations to deploy additional capabilities to further strengthen security and governance for the use of GenAI apps. Specifically, organizations should adopt a strategic approach that addresses three key aspects of securing and governing the use of AI:

  • Discover AI app risk exposure: Robust protection begins with visibility. To help secure data in the era of AI, it is essential to gain visibility into the potential risks associated with AI use. This means identifying and understanding the sensitive data involved in AI interactions and how users engage with GenAI apps. Security teams should consider tools that offer comprehensive insights into GenAI use in their environment, including the types of sensitive data in prompts and responses, aggregated insights into user activities in AI apps, and detection capabilities for risky activities associated with GenAI apps. For instance, if a GenAI app is used over a risky IP address to access confidential files, an effective security tool can detect the suspicious interaction and alert security teams. This gives organizations better visibility into ongoing use and the ability to mitigate risk; a minimal sketch of this kind of detection logic follows this list.

  • Protect AI apps and sensitive data: Security teams should also design and implement controls that include a proactive data security strategy to identify, classify, and protect sensitive data used with GenAI, backed by effective encryption and data loss prevention (DLP) controls. These controls should be persistent, protecting the data throughout its AI journey. Once they are in place, it is critical that security teams be alerted when any sensitive data used in a GenAI app might be part of a cyberattack. Tools such as extended detection and response (XDR) deliver a unified threat investigation and response experience that helps security teams quickly understand the full scope of an attack. Security teams should consider providers that integrate data insights and alerts around GenAI app use into the XDR experience so that they can prioritize effectively and achieve full visibility across affected asset domains. A simplified DLP-style prompt check is also sketched after this list.

  • Govern the use of AI apps: With GenAI being such a new technology, regulations and risk management practices are changing rapidly. Noncompliant AI use, including the generation of harmful, fraudulent, or unethical content, may violate an organization's code of conduct or regulatory requirements, so organizations should deploy tools to detect and investigate potentially noncompliant usage. Additionally, data life cycle controls are crucial both for removing inactive content from GenAI processing, which reduces the chance of generating obsolete insights, and for managing retention and deletion policies for GenAI app interactions in line with specific organizational needs. A minimal retention-window sketch rounds out the examples after this list.
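
To make the discovery bullet concrete, here is a minimal sketch of detection logic that correlates the app, the source network, and the data sensitivity of a GenAI interaction. The log schema, app names, and risky-IP list are hypothetical placeholders, not any specific vendor's API:

```python
# Minimal sketch: flag GenAI sessions that come from a risky IP address
# while touching files labeled confidential. The log schema, app list,
# and risky-IP feed below are assumed for illustration.

GENAI_APPS = {"copilot", "chatgpt"}           # GenAI apps to monitor (assumed names)
RISKY_IPS = {"203.0.113.7", "198.51.100.23"}  # e.g., sourced from a threat-intel feed

def find_risky_genai_sessions(access_logs):
    """Return log entries that warrant a security alert."""
    return [
        entry for entry in access_logs
        if entry["app"] in GENAI_APPS
        and entry["source_ip"] in RISKY_IPS
        and entry.get("file_label") == "confidential"
    ]

# Example: one risky session among normal traffic
logs = [
    {"app": "copilot", "source_ip": "10.0.0.5",    "file_label": "public"},
    {"app": "chatgpt", "source_ip": "203.0.113.7", "file_label": "confidential"},
]
for alert in find_risky_genai_sessions(logs):
    print(f"ALERT: {alert['app']} used from risky IP {alert['source_ip']}")
```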
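
For the protection bullet, a DLP control can sit in front of a GenAI app and inspect prompts before they leave the organization. The sketch below uses two illustrative regular expressions (US Social Security numbers and 16-digit card numbers); production DLP engines rely on far richer classifiers and exact-data matching:

```python
import re

# Minimal DLP sketch: scan an outbound GenAI prompt for sensitive
# patterns and redact matches before the prompt is sent. The two
# patterns here are illustrative only.

SENSITIVE_PATTERNS = {
    "ssn":  re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){16}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return the redacted prompt and the names of the rules that fired."""
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, hits

clean, hits = redact_prompt("Summarize the account for SSN 123-45-6789.")
print(clean)   # Summarize the account for SSN [REDACTED-SSN].
print(hits)    # ['ssn']
```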
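
Finally, the governance bullet calls for life cycle controls over stored GenAI interactions. The sketch below applies an assumed 90-day retention window to a hypothetical record shape; real retention and deletion policies vary by data class and jurisdiction:

```python
from datetime import datetime, timedelta, timezone

# Minimal retention sketch: keep only GenAI interactions newer than the
# retention window, so stale content is neither held beyond policy nor
# fed back into future GenAI processing. The 90-day window and record
# shape are assumptions for illustration.

RETENTION = timedelta(days=90)

def apply_retention(interactions, now=None):
    """Return only the interactions still inside the retention window."""
    now = now or datetime.now(timezone.utc)
    return [i for i in interactions if now - i["timestamp"] <= RETENTION]

records = [
    {"id": 1, "timestamp": datetime.now(timezone.utc) - timedelta(days=10)},
    {"id": 2, "timestamp": datetime.now(timezone.utc) - timedelta(days=200)},
]
print([r["id"] for r in apply_retention(records)])  # [1]
```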

By adopting a holistic approach to discover, protect, and govern the use of AI, organizations can benefit from the advantages of this transformative technology, including heightened productivity, enhanced creativity, and increased opportunities, all while effectively managing the associated security and compliance risks. It is about striking that delicate balance between innovation and security.

About the Author

Rob Lefferts is corporate vice president of the Threat Protection organization at Microsoft. He leads the team responsible for the Microsoft Defender XDR and Microsoft Sentinel products, which provide end-to-end, comprehensive, and cohesive security experiences and technology for Microsoft's customers. Lefferts holds a BS and an MS from Carnegie Mellon University in Pittsburgh, PA.
