Many online users said they were halting their use of Zoom over terms that seemingly allowed the platform to scrape user data to train AI models.
Video-conferencing platform Zoom has updated its terms of service after widespread backlash over a section concerning AI data scraping, clarifying that it won’t use user content to train AI without consent.
In an Aug. 7 post, Zoom said its terms of service were updated to further confirm it would not use chat, audio, or video content from its customers to train AI without their express approval.
Over the weekend, a number of Zoom users threatened to stop using the platform after discovering terms that purportedly meant the firm would use a wide array of customer content to train AI models.
In the most recent post, Zoom said the AI-related terms were added in March, and reiterated it will not use any customer data for AI training without consent. The terms have now been updated to include a similar clarification:
“Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.”
Zoom’s post explains that its AI offerings, a meeting summary tool and a message composer, are opt-in, with account owners or administrators able to control whether the tools are enabled.
Before Zoom added the clarification to its terms, X (Twitter) users posted concerns about the AI terms, with many calling for a boycott of Zoom until the terms were changed.
Concern arose over terms under which users consented to Zoom’s use, collection, distribution and storage of “Service Generated Data” for any purpose, including training AI and machine learning models.
Further terms granted Zoom the right to use customer-generated content for, among other purposes, machine learning and AI training and testing.
Other tech companies have also recently updated their privacy policies to make room for data scraping to train AI. Google updated its policies in July to allow the use of publicly available data for AI training.
Meanwhile, there is growing concern over tech firms’ use of AI and possible privacy implications. In June, European Union consumer protection groups urged regulators to investigate AI models used in chatbots such as OpenAI’s ChatGPT or Google’s Bard.
The groups were concerned about disinformation, data harvesting and manipulation by the bots. The EU passed the AI Act on June 14; it is set to take effect within the next two to three years and provides a framework for AI development and deployment.