How WeTransfer sparked fears about AI training using user data

Dutch file-sharing service WeTransfer has been under fire since users noticed sweeping changes to its terms of service that appeared to allow the company to train AI models on their uploaded files.

Although the company has since removed the controversial language from its terms of service, users remain outraged. Here's what happened and why it matters.

What changed on WeTransfer?

WeTransfer customers discovered this week that the company's policy had been updated with a clause granting it a perpetual license to use user-uploaded material, including for "improving machine-learning models that enhance content moderating." The changes were set to take effect on August 8, according to WeTransfer. The language was so vague that many users, including illustrator Sarah McIntyre and comedian Matt Lieb, felt it allowed WeTransfer to use or sell their files to train AI.

What is the acceptable level of conduct? @WeTransfer? You’re not free, I *pay* for you to move my large artwork files.

You DON’T get to use my artwork to train AI, print, sell, and distribute it, or to set yourself up as my commercial rival, using my work. pic.twitter.com/OHPIjRGGOM

— Sarah McIntyre (@jabberworks) July 15, 2025

How did WeTransfer react?

WeTransfer scrambled on Tuesday afternoon to douse the fires. In a press release, it said it does not use user content for AI training, nor does it share or sell files to third parties. WeTransfer says it has considered using AI in the future to "improve content moderation," but that this feature "hasn't been built or implemented in practice."

WeTransfer also amended its terms of service, removing all mentions of machine learning. The revised version states that users grant the company a "royalty-free license" to "operate, develop, and improve the service."

Why are users so worried?

WeTransfer has joined a growing number of companies criticised for appearing to train machine-learning systems on user data. After public outcry, Adobe, Slack, and Dropbox have all recently clarified or retracted similar AI-related policies. These incidents reflect a wider frustration with copyright, consent, and privacy in the AI era, and they point to a deeper trust problem between users and tech companies.

WeTransfer, founded in 2009, has long marketed itself as a privacy-conscious, creator-friendly file-sharing service. It's not surprising that the vague language around AI and the sweeping licensing rights felt like a betrayal, especially to artists and freelancers worried their work could quietly be fed into machine-learning models without consent.

Although WeTransfer clarified its terms, for many users the damage was already done. Some responded to WeTransfer's official announcement on X by suggesting the service had tested the waters with broader AI rights, received swift public backlash, and quickly walked back the decision.

WeTransfer won't be the last tech company to face this kind of controversy. As AI fever spreads, user data is the new fuel.
