Judge rules Anthropic is allowed to train AI on copyrighted materials

Image: Anthropic

In the rapidly growing generative AI field, one of the gray areas is whether training AI models on copyrighted materials without the permission of the copyright holders constitutes copyright infringement. Anthropic, the company behind the Claude AI chatbot, was sued by a group of writers. A US federal judge has now ruled that AI training is covered by "fair use" and is therefore legal, Engadget reports.

Under US law, fair use allows copyrighted materials to be used when the result is "transformative" — that is, when the resulting work is not a derivative of or a replacement for the original, but something new. This is the first judicial ruling of its kind, and the decision may serve as a precedent for future cases.

The judgment notes that the plaintiff authors can still sue Anthropic if they believe the company has committed piracy. The ruling states that Anthropic illegally downloaded (pirated) over 7 million books without paying, and that it kept these books in its internal library even after deciding not to use them to train future versions of its AI model.

According to the judge: "Authors argue Anthropic should have paid for these pirated library copies. This order agrees." This article was translated from Swedish and localized from PC för Alla.

Viktor Eriksson, Contributor to PCWorld

Viktor writes for our sister sites M3 and PC för Alla. He is passionate about tech and is always up to date with the latest product launches and hot topics in the consumer technology industry.