Managing Your GenAI Einstein Risks Intelligently

Organizations that can mitigate the risks inherent in Salesforce's powerful new Einstein Copilot have a lot to gain.

May 20, 2024

By Hananel Livneh, Head of Product Marketing, Adaptive Shield

The generative artificial intelligence (GenAI) race touched off by OpenAI's launch of ChatGPT is continuing at full speed, as tech leaders rush to implement these revolutionary AI capabilities in their software-as-a-service (SaaS) applications.

Salesforce recently released Einstein Copilot, a GenAI virtual assistant. Built on a large language model (LLM) that uses natural language processing, Einstein automates tasks across multiple Salesforce clouds and applications, enabling cross-department processes and far more cohesive operations.

Einstein can scour data to determine which resources would be best received by specific customers, or summarize a doctor's notes and update medical records. Like other AI tools, it can recognize patterns and analyze large pools of data to help team members spot trends and make decisions. It can even compose blog posts, emails, or marketing content tailored to a specific customer or market segment.

Salesforce built the groundbreaking application around its "Einstein Trust Layer." Sensitive data, including personally identifiable information (PII), payment card industry (PCI) data, and protected health information (PHI), is masked, and all data flowing through Einstein is encrypted within the Trust Layer. Salesforce has also stated that it will not use customer data to train the LLM, nor will it sell that data. While no one questions Einstein's power, some organizations are understandably concerned about new risks and SaaS security issues stemming from its GenAI capabilities.
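
To make the masking idea concrete, here is a minimal, purely illustrative sketch of the kind of PII/PCI masking a trust layer can perform before a prompt ever reaches an LLM. This is not Salesforce's implementation; the patterns and placeholder tokens below are assumptions for demonstration only.

```python
import re

# Illustrative only: a toy stand-in for the kind of masking a trust layer
# applies before a prompt reaches the LLM. The patterns and placeholder
# tokens are assumptions, not Salesforce's proprietary implementation.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(prompt: str) -> str:
    """Replace detected sensitive values with typed placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"<{label}_MASKED>", prompt)
    return prompt

print(mask_pii("Email jane.doe@example.com about card 4111 1111 1111 1111"))
# -> Email <EMAIL_MASKED> about card <CARD_MASKED>
```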

What Are Einstein Copilot Security Risks?

Protecting data within Salesforce is always a shared responsibility, and GenAI is no different. Salesforce is responsible for securing the infrastructure, platform, and services that enable AI, while customers are responsible for securing their data.

Overly broad access is one risk that companies must mitigate. Einstein will use any piece of data the user's access rights allow, so overpermissioned users could surface data they should never be able to see. This is especially concerning when external users have access to Salesforce and Einstein.
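
Because Einstein acts with the querying user's rights, a quick way to see what it could reach is to check record-level access directly. Below is a minimal sketch using Salesforce's standard UserRecordAccess object and the simple_salesforce Python library; the credentials and the user/record IDs are placeholders.

```python
from simple_salesforce import Salesforce

# A minimal sketch: UserRecordAccess reports the record-level access a
# given user has -- the same access Einstein inherits when acting on
# that user's behalf. Credentials and IDs below are placeholders.
sf = Salesforce(username="admin@example.com",
                password="...", security_token="...")

soql = (
    "SELECT RecordId, HasReadAccess, HasEditAccess "
    "FROM UserRecordAccess "
    "WHERE UserId = '005XXXXXXXXXXXX' "
    "AND RecordId = '001XXXXXXXXXXXX'"
)
for row in sf.query(soql)["records"]:
    print(row["RecordId"],
          "read:", row["HasReadAccess"],
          "edit:", row["HasEditAccess"])
```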

A second risk comes from the materials Einstein creates. LLMs like GPT-4 are remarkably good at producing content, often rivaling human output. However, as users grow to rely on and trust Einstein, they are likely to hit "send" on proposals or other materials after only a cursory glance, never noticing that the content leaks confidential information to one of their clients' competitors.

Mitigating Einstein's Security Risks

The benefits inherent in Einstein Copilot are significant, and organizations that mitigate these risks have much to gain.

Enforcing the principle of least privilege (POLP) is a good place to start for both internal and external users. Einstein Copilot inherits the same access and permissions as the user, so limiting user entitlements within Salesforce automatically limits Einstein's access. Permission sets and profiles should be regularly reviewed and refined to ensure they grant the right level of access.
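
Those reviews can be scripted. A minimal sketch, again assuming simple_salesforce and placeholder admin credentials, that lists every active user's permission set assignments so reviewers can spot overpermissioned accounts:

```python
from simple_salesforce import Salesforce

# A minimal sketch: dump active users' permission set assignments for a
# least-privilege review. Credentials are placeholders; the objects and
# relationships queried are standard Salesforce metadata.
sf = Salesforce(username="admin@example.com",
                password="...", security_token="...")

soql = (
    "SELECT Assignee.Username, PermissionSet.Name "
    "FROM PermissionSetAssignment "
    "WHERE Assignee.IsActive = true "
    "ORDER BY Assignee.Username"
)
for row in sf.query_all(soql)["records"]:
    print(row["Assignee"]["Username"], "->", row["PermissionSet"]["Name"])
```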

Salesforce administrators should also limit access to Einstein itself, granting Einstein permissions only to authorized individuals. Administrators should then monitor activity through the Einstein Copilot detail page and event logs to identify and protect against misuse.
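
That monitoring can likewise be automated. A minimal sketch, assuming simple_salesforce, an org with API access to event log files (typically an Event Monitoring add-on), and placeholder credentials; the Einstein/Copilot event-type naming used in the filter is an assumption to verify against the event types your org actually emits.

```python
from simple_salesforce import Salesforce

# A minimal sketch: pull the last week's event log file records and
# flag anything Einstein/Copilot-related for review. Credentials are
# placeholders, and the name-based filter is an assumption, not a
# documented contract.
sf = Salesforce(username="admin@example.com",
                password="...", security_token="...")

soql = (
    "SELECT Id, EventType, LogDate, LogFileLength "
    "FROM EventLogFile "
    "WHERE LogDate = LAST_N_DAYS:7 "
    "ORDER BY LogDate DESC"
)
for log in sf.query(soql)["records"]:
    if "Einstein" in log["EventType"] or "Copilot" in log["EventType"]:
        print(log["EventType"], log["LogDate"], log["Id"])
```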

Employee training is a vital component of protecting against data leakage. Employees should be educated on proper Einstein usage and best practices, and, perhaps most importantly, they must understand the risks of blindly sharing content produced by Einstein.

Overseeing GenAI Activities to Mitigate Data Leakage Risks

Salesforce is just one example of a SaaS provider that has introduced a GenAI assistant into its platform. Microsoft, GitHub, Zendesk, and others have integrated GenAI tools into their applications. Companies may be wary of GenAI and the risks it introduces, but that won't stop employees from experimenting with tools that make their jobs easier. A recent Salesforce study found that more than half of GenAI adopters use unapproved tools at work.

To keep up, organizations must develop a set of AI policies and strategies. Investing in monitoring tools, such as SaaS security posture management (SSPM), is essential for overseeing GenAI activity and mitigating potential data leaks, ensuring these technologies deliver value to the company while keeping risk in check.

About the Author

Hananel Livneh is Head of Product Marketing at Adaptive Shield. He joined Adaptive Shield from Vdoo, an embedded cybersecurity company, where he was a Senior Product Analyst. Hananel completed an MBA with honors from the OUI and holds a BA from Hebrew University in Economics, Political Science, and Philosophy (PPE). Oh, and he loves mountain climbing.
