Zoom clarifies user consent requirement when training its AI
Changes to the terms of service (TOS) of the Zoom video-conferencing software have caused some turmoil. Since the pandemic, Zoom has become a household name, emerging as the big winner in the video-conferencing race that enabled us to work from home. Now that things are more or less returning to a new normal, that shift has also taken a toll on Zoom's success. But the recent uproar about its TOS could turn out to be a bigger blow.
The strange thing is that the offending changes took effect in March of 2023, but nobody noticed until August, when people started posting about and discussing a portion of Zoom's TOS. They found that Zoom claimed the right to access, use, collect, create, modify, distribute, process, share, maintain, and store Service Generated Data, including for the purposes of product and service development, marketing, analytics, quality assurance, machine learning, and artificial intelligence (AI).
For context: in May, Zoom announced a collaboration with Anthropic, an artificial intelligence company that conducts research into AI safety and develops tools based on that work. Anthropic's AI assistant, Claude, is intended to be integrated into the Zoom platform.
After all the uproar, Zoom changed its Terms of Service to reflect that it will require user consent before using content to train artificial intelligence.
“Notwithstanding the above, Zoom will not use audio, video, or chat Customer Content to train our artificial intelligence models without your consent.”
In a blog post, Zoom explained that it updated the TOS (in section 10.4) to confirm that it will not use audio, video, or chat customer content to train its AI models without consent, and that the section about training artificial intelligence only concerned certain aggregate information about how customers use the product. Zoom claimed it only does this to improve the product, not to spy on users.
The explanation makes a lot of sense, but wouldn't it have been easier if they'd said that in the first place? Given how the TOS was worded, I would have guessed this was what they wanted us to think rather than what they actually meant.
Unfortunately, they are not alone. Many software companies have their legal documents and agreements drawn up by professionals who do not care whether the result can be read by ordinary people. As long as the legal content is correct and covers all the angles, it's all good from their point of view.
For that reason, TOS, EULAs (End User License Agreements), privacy policies, and privacy agreements often don't get read in full. And even when we do read them, some look like they are designed not to be understood, no matter how much trouble we take.
If you don't believe me, have a look at the Zoom TOS. If you have no trouble understanding what it says there, you are probably a lawyer specializing in corporate law. Even now that we have summarized it for you, it still looks like a major case of alphabet soup designed to make your eyes glaze over.
What most of these documents have in common is:
- You are considered to have read and accepted them once you use the software, whether by ticking a checkbox at the bottom of an endless text or simply by proceeding to use it
- They protect the rights of the issuer
- They restrict your usage
- They explain what the issuer can do with your information and content
- They are written by and for lawyers
- They often favor length and complexity
But most of the time we view them as something that stands between us and our goal, which is to play the game, use the software, or start working. This has been a known problem for many years, so much so that a friendly programmer set out to work on a solution. If you're interested in what a EULA has to say but have neither the time nor the inclination to read through all of it to find the important parts, give Eulalyzer a try. It's free for personal use.