Zoom's experience highlights the tightrope tech organizations walk when integrating AI into their products and services.


Zoom says it will walk back a recent change to its terms of service that allowed the company to use some customer content to train its machine learning and artificial intelligence models.

The move comes after recent criticism on social media from customers who are concerned about the privacy implications of Zoom using data in such a manner.

Backing Down on Data Use Plans

"Following feedback, Zoom made the decision to update its Terms of Service to reflect Zoom does not use any of your audio, video, chat, screen sharing, attachments or other communications-like Customer Content (such as poll results, whiteboard and reactions) to train Zoom or third-party artificial intelligence models," a spokeswoman said in an emailed statement. "Zoom has accordingly updated its Terms of Service and product to make this policy clear."

Zoom's decision — and the reason for it — is sure to add to the growing debate about the privacy and security implications of technology companies using customer data to train AI models.

In Zoom's case, the company recently introduced two generative AI features — Zoom IQ Meeting Summary and Zoom IQ Team Chat Compose — that offer AI-powered chat composition and automated meeting summaries. The terms of an updated service policy that the company announced earlier this year gave Zoom the right to use some customer data behind these services for training the AI models — without needing customer consent.

Specifically, Zoom's policy gave the company a "perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable" right to use customer data for a wide range of purposes, including machine learning, artificial intelligence, training, and testing. It also granted Zoom a virtually unrestricted right to do almost anything with the data, including to "redistribute, publish, import, access, use, store, transmit, disclose" it.

After customers pushed back on social media, Zoom initially revised its policy earlier this month to give customers the right to opt out of having their data used for AI training. "Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent," the company said.

Delicate Balance

On August 11, the company again revised its terms of service, this time to scrub virtually all references to the use of artificial intelligence. The newly revised policy still gives Zoom all "rights, title, and interest" in a wide range of service-generated data, including telemetry data, product usage data, and diagnostic data. But the company will not use customer content to train AI models.

Zoom's experience highlights the delicate balance tech companies must strike between innovation and user trust when integrating AI into their products and services. Numerous technology companies have been using customer data for years to improve user experiences and introduce new features and functions, says Shomron Jacob, head of machine learning. "Data is often called the 'new oil' in the digital age because of its invaluable role in training and refining AI models to improve user experiences, functionalities, and new features," Jacob says. "Companies like Google, Facebook, and Amazon have long used user data to tailor their services and improve their AI algorithms."

However, given the increasing scrutiny of the privacy, security, and ethical implications surrounding AI, there's a rising expectation for transparency and user consent, he says. While companies will likely continue to use customer data as they have been, there is going to be increased pressure on them to provide clear user opt-outs, to anonymize data, and to ensure that personal and sensitive information remains protected.

"Moreover, regulatory frameworks like [the] GDPR in Europe and CCPA in California set data collection and usage standards," Jacob says. "As these regulations become more stringent and widespread, tech companies must navigate the dual challenges of leveraging user data for AI improvements while ensuring strict compliance and safeguarding user trust."

About the Author(s)

Jai Vijayan, Contributing Writer

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year career at Computerworld, Jai also covered a variety of other technology topics, including big data, Hadoop, Internet of Things, e-voting, and data analytics. Prior to Computerworld, Jai covered technology issues for The Economic Times in Bangalore, India. Jai has a Master's degree in Statistics and lives in Naperville, Ill.
