
Is Sora 2 or other AI video tools dangerous to use? Here’s what one legal scholar has to say

Samuel Boivin/NurPhoto via Getty Images



ZDNET’s key takeaways.

  • AI video tools now pose real legal and ownership risks. OpenAI claims Sora promotes creativity, but critics disagree.
  • Generative video could achieve the democratization of art, or it could destroy art completely.

    OpenAI's Sora 2, a generative AI video maker, has been available for two weeks and is already causing a stir.

    SpongeBob cooking meth.

    Ronald McDonald escaping from Batman as police cars chase him.

    I think you get the idea. When you give people the freedom to create whatever they want, with little effort, this is what happens. We are twisted, easily amused people.

    I also tried the new Sora 2 for generating AI videos – the results were pure magic

    Human nature works like this. First, the slightly less mature start to think: "Hmm. What can I do with that? Let's make something odd or weird for some LOLs." That leads to inappropriate themes and videos that are wrong in so many ways.

    Next, the unscrupulous begin to think: "Hmm. I think I can get some mileage out of that. I wonder what I can do with it?" They might churn out AI slop to make money, or use a well-known spokesperson to fabricate an endorsement.

    It is a natural evolution in human nature. When a new technology is introduced to a large population, it will be abused for profit, amusement, and perversion. It’s not surprising.

    Let me show you. I found a video of OpenAI CEO Sam Altman on the Sora 2 Explore page. In the video, he says that "PAI3 gives you the AI experience that OpenAI cannot." PAI3 is a privacy-oriented, decentralized AI network company.

    So I clicked on the remix button and created a video. Here’s a screenshot showing both videos side-by-side.

    Videos made by Sora 2, with Sam on the right. Altman's "approval" was easy to get. I just had to feed Sora this prompt:

    The guy saying “My name is Sam and I need to tell you. ZDNET is the place to go for the latest AI news and analysis. I love those folks!” Now he’s wearing an electric-green T-shirt with bright blue hair.

    After about five minutes, the CEO of OpenAI was praising ZDNET. Let's be clear: this video is presented only as a demonstration of the technology. We do not claim that Mr. Altman has blue hair or wears a green shirt, and it's not fair to assume he likes ZDNET. But, hey, there's nothing to dislike!

    I am an AI expert, and I pay for 4 of them (plus 2 that I’m eyeing).

    We will examine three key issues regarding Sora 2 in this article: legal and rights concerns, the impact on creative thinking, and the latest challenge in separating reality from deepfakes.

    Oh, and stay with us: we conclude this article with an interesting observation from an OpenAI rep about what the company really believes about human creativity.

    Legal issues and rights

    Sora 2's first release had no guardrails. Users could ask the AI to create anything. In less than five days, the app had over a hundred thousand downloads and topped the iPhone App Store rankings. Nearly everyone who downloaded Sora started making videos instantly. This led to the branding and likeness Armageddon I described above.

    In September, the Wall Street Journal reported that OpenAI had begun contacting Hollywood IP holders to inform them of the impending Sora 2 release and let them know that they could opt out of having their IP represented. As you can imagine, brand owners were not happy with this. Altman responded to the dust-up in an October 3 blog post, stating, "We will give rights holders more granular control over generation of characters."

    Even after Altman's apology, rights holders weren't satisfied. On October 6, the Motion Picture Association (MPA) issued a short but firm statement.

    Also: Stop using AI for these nine work tasks – here's why

    Charles Rivkin, chairman and CEO of the MPA, said: "Since Sora 2's release, videos that infringe our members' films, shows, and characters have proliferated on OpenAI's service and across social media."

    Rivkin goes on to say, “While OpenAI clarified it will ‘soon’ offer rightsholders more control over character generation, they must acknowledge it remains their responsibility — not rightsholders’ — to prevent infringement on the Sora 2 service. OpenAI needs to take immediate and decisive action to address this issue. Well-established copyright law safeguards the rights of creators and applies here.”

    OpenAI responded to complaints by actor Bryan Cranston and SAG-AFTRA this week. It's not clear whether the company will simply keep responding to individual flags like these forever or build a guardrail that addresses them.

    I can confirm that there are guardrails now. I asked Sora for Patrick Stewart fighting Darth Vader, and for an X-wing starfighter destroying the Death Star. Both prompts were rejected with the note: "This content may violate our guardrails concerning third-party likeness."

    David Gewirtz/ZDNET.

    After I contacted the MPA for comment on my experiences, the MPA's John Mercurio told ZDNET by email that OpenAI was aware of the issues and concerns. When I contacted OpenAI's PR department, the response I received was the Sora 2 System Card, a six-page document outlining Sora 2's capabilities and limits. The company also provided two additional resources that are worth reading:

    • Sora Feed Philosophy: This document explains the reasoning behind what the Sora 2 feed shows users as they explore Sora.
    • Launching Sora responsibly: This document describes the company's safety intentions for the Sora 2 launch.

    In these documents, OpenAI describes five themes regarding safety and rights:

  • Consent-based likeness control: Sora features a "cameo" system that lets users upload their own likeness and control how it is used. The system is also supposed to be able to block the use of public figures' likenesses.
  • Audio and intellectual property safeguards: According to the company, it will honor takedown requests and block music and audio copycats.
  • Provenance and transparency initiatives: The company embeds C2PA (Coalition for Content Provenance and Authenticity) metadata, a standard that helps verify the source of content.
  • Usage policies that prohibit misuse: Users who violate privacy, commit fraud, harass, or threaten others will be banned.
  • Enforcement and recourse: Users can report abuse to have content removed and penalties applied.

    So who is responsible for what? The first person I contacted with these questions was Sean O'Brien, founder of the Yale Privacy Lab at Yale Law School. O'Brien said, "When a human uses an AI system to produce content, that person, and often their organization, assumes liability for how the resulting output is used. If the output infringes on someone else's work, the human operator, not the AI system, is culpable."
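The C2PA provenance idea mentioned above boils down to cryptographically binding a claim about a piece of content (who or what generated it) to the content itself, so that tampering is detectable. The following is a toy sketch of that idea only; real C2PA uses X.509 certificate chains and JUMBF containers embedded in the media file, not HMAC, and all names and keys here are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the tool vendor (real C2PA uses
# certificate-based signatures, not a shared secret).
SECRET = b"publisher-signing-key"

def make_manifest(content: bytes, generator: str) -> dict:
    """Build a provenance claim bound to the content's hash, then sign it."""
    claim = {
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check both that the claim is untampered and that it matches the content."""
    claim = manifest["claim"]
    payload = json.dumps(claim, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SECRET, payload, hashlib.sha256).hexdigest(),
    )
    hash_ok = claim["content_sha256"] == hashlib.sha256(content).hexdigest()
    return sig_ok and hash_ok

video = b"fake video bytes"
m = make_manifest(video, "Sora 2")
print(verify_manifest(video, m))         # True: content matches the signed claim
print(verify_manifest(video + b"x", m))  # False: content was altered after signing
```

The point of the sketch is the failure mode: strip or alter either the content or the claim and verification fails, which is why provenance metadata helps flag AI-generated media only as long as platforms preserve it.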

    Unchecked AI agents could be disastrous for all of us – but the OpenID Foundation has a solution.

    O'Brien continued, "This principle was reinforced recently in the Perplexity case, where the company trained its models on copyrighted material without authorization. The precedent there is distinct from the authorship question, but it underlines that training on copyrighted data without permission constitutes a legally cognizable act of infringement."

    Here's what should concern OpenAI, regardless of its guardrails and system card philosophy.

    Yale’s O’Brien summarized it with devastating clarity: “What’s forming now is a four-part doctrine in US law. First, only human-created works are copyrightable. Second, generative AI outputs are broadly considered uncopyrightable and ‘Public Domain by default.’ Third, the human or organization utilizing AI systems is responsible for any infringement in the generated content. And, finally, training on copyrighted data without permission is legally actionable and not protected by ambiguity.”

    The impact on creativity

    What's interesting about creativity is that it's not just about imagination. Webster's first definition of "create" is "to bring into existence." A second definition is "to produce or bring about by a course of action or behavior," and a third is "to produce through imaginative skill."

    All of these definitions are applicable to any medium, such as oil paints or film cameras. All of them are about manifesting new things.

    The US Copyright Office has released a new ruling on AI art that could change everything.

    This is something I think about a lot, because when I used to take nature photos on film, my images were just OK. I spent a lot of money on chemical processing and enlarging and was never satisfied. As soon as I had Photoshop and a photo printer, my pictures were worthy of hanging on the wall. My creative ability wasn't limited to the photography itself. It was the combination of pointing the lens, capturing 1/250th of a second onto film, and then bringing the image to life digitally.

    In the world of generative AI, the question of creativity poses a particular challenge. The US Copyright Office claims that only works created by humans can be protected. Where is the line drawn between the tool, medium, and human?

    Take Oblivious, a painting I "made" using Midjourney's generative AI and Photoshop's retouching abilities. The composition was entirely my own, but the tools were digital.

    Bert Monroy wrote the first Photoshop book. He uses Photoshop to create amazing photorealistic pictures. He doesn't retouch photos; pixel by pixel, he creates new images that look like photographs. He uses the medium as a way to explore his incredible skill and creativity. Is his work human-made, or is it unworthy of copyright simply because Photoshop controls the pixels?

    I asked Monroy for his thoughts on generative AI and creativity. He said:

    "I have been a commercial illustrator and art director for most of my life. My clients had to pay for my work, a photographer, models, stylists, and, before computers, retouchers, typesetters, and mechanical artists to put it all together. Now AI has come into play. The first thought that comes to my mind is how glad I am that I gave up commercial art years ago.

    "With AI, the client can think about what they want, write a prompt, and the computer will create a variety of options in minutes, at no cost other than the electricity used to run the computer. AI is taking over many jobs. It appears that the creative fields will be affected."

    Sora 2 is the harbinger of the next step in the merging of imagination and digital creativity. Yes, it can reproduce people, voices, and objects with disturbing, amazing fidelity. But once we accepted that the way we use tools and media is part of artistic expression, we agreed as a society that art and creativity extend beyond manual dexterity.

    Also: There’s a new OpenAI app in town – here’s what to know about Sora for iOS

    There is an issue here related to both skill and exclusivity. AI tools democratize access to creative output, allowing those with less or no skills to produce creative works rivaling those who have spent years honing their craft.

    In some ways, this upheaval isn’t about cramping creativity. It’s about democratizing skills that some people spent lifetimes developing and that they use to make their living. That is of serious concern. I make my living mostly as a writer and programmer. Both of these fields are enormously threatened by generative AI.

    But do we limit new tools to protect old trades? Monroy’s work is incredible, but until you realize all his artwork is hand-painted in Photoshop, you’d be hard-pressed not to think it was a photograph by a talented photographer. Work that takes Bert months might take a random user with a smartphone minutes to capture. But it’s the fact that Monroy uses the medium in a creative way that makes all his work so incredibly impressive.

    Maly Ly has served as chief marketing officer at GoFundMe, global head of growth and engagement at Eventbrite, promotions manager at Nintendo, and product marketing manager at Lucasfilm. She has held similar positions at legendary game developers Square Enix and Ubisoft, and she is the founder and CEO of Wondr, a consumer AI startup. In this context, her perspective is particularly insightful.

    According to her, "AI video forces us to confront a question with new stakes – who owns the output if the inputs are all we've made? Copyright was designed for a world with few creators and fewer copies. AI creates by remixing, and it creates in abundance. We're not seeing creativity being stolen; we're seeing its multiplicity."

    Also: How to get Perplexity Pro free for a year – you have 3 options

    The fact that generative AI is eliminating the scarcity of skills is terrifying to those of us who have built our identities around having those skills. But where Sora and generative AI go wrong is when they train on the works of creatives and then serve up the results as if they were new works, effectively stealing the work of others. This is a huge problem for Sora.

    Ly has an innovative suggestion: “The opportunity is not protection, but participation. Each artist, voice, or visual style that inspires, trains, or influences a model, should be traceable, and rewarded, through transparent value flows. The next copyright system is going to look less like paperwork, and more like a living code – dynamic, fair, built for collaboration.”

    Unfortunately, she’s pinning her hopes for an updated and relevant copyright system on politicians.

    But still, she does see an overall upside to AI, which is refreshing among all the scary talk we’ve been having. She says, “AI video, if we get it right, could become the most democratizing story-telling medium in history. It would create a shared, accountable creative economy, where inspiration pays its debts.”

    What is real?

    Another societal challenge arising from the introduction of new technologies is how they change our perception of reality. Heck, there’s an entire category of tech oriented around augmented, mixed, and virtual reality.

    Probably the single most famous example of reality distortion due to technology occurred at 8 p.m. New York time on Oct. 30, 1938.

    Also: We tested the best AR and MR glasses: Here’s how the Meta Ray-Bans stack up

    World War II hadn’t yet officially begun, but Europe was in crisis. In March, Germany annexed Austria without firing a shot. In September, Britain and France signed the Munich Agreement, which allowed Hitler to take part of what was then Czechoslovakia. Japan had invaded China the previous year. Italy, under Mussolini, had invaded Ethiopia in 1935.

    The idea of invasion was on everyone's mind. Into that atmosphere, a 23-year-old Orson Welles broadcast a modernized version of H.G. Wells' War of the Worlds on CBS Radio in New York City. The show began with disclaimers (think of these as the Sora watermarks in the videos), but those who tuned in after the first few minutes thought they were listening to the news and that an actual Martian invasion had taken place in Grovers Mill, New Jersey.

    Everyone knows that Star Wars and Star Trek are fiction. Deepfakes are different: images, audio, or video manipulated to misrepresent the truth, especially for political or nefarious purposes.

    Admittedly, I make this look good. In reality, I'm wearing a yellow T-shirt with a flannel jacket. I created the image using Google's Nano Banana. David Gewirtz/ZDNET

    When deepfakes are used to promote an agenda or damage someone's reputation, they become harder to accept. As The Washington Post reported via MSN, twisted fakes of deceased celebrities are painful for their families.

    Robin Williams' daughter Zelda was quoted in the article as saying: "Stop sending AI videos of Dad… To watch the legacy of real people being condensed to… horrible TikTok slop puppeteering is maddening."

    Many AI tools prevent users from uploading images and clips of real people, although there are fairly easy ways to get around those limitations. The companies are embedding provenance hints in the digital media to flag images and video as AI-created.

    Loti AI’s deepfake detection service is now available to all users for free.

    Will these efforts stop deepfakes, though? This is not a brand-new problem. Irish photo restoration artist Neil White documents faked photos that date back long before Photoshop and Sora 2. One is an 1864 photograph of General Ulysses S. Grant riding a horse in front of troops; another is a 1930 photograph of Stalin that was airbrushed.

    The most bizarre is a 1939 photo of the Canadian Prime Minister with Queen Elizabeth (the mother of Elizabeth II). The PM decided it would be better to appear on a poster with just the queen, so King George VI was airbrushed out.

    The problem is not going away. We'll have to use our judgment, and our highly tuned BS detectors, to flag the images and videos most likely to be fabricated. Still, it was fun to make OpenAI's CEO sing ZDNET's praises with blue hair.

    What this all means for the future

    Attorney Richard Santalesa, a founding partner of the SmartEdgeLaw Group, focuses on data security and intellectual property issues.

    According to him, "Sora 2, most notably, highlights the push-pull between creation and protecting existing IP and copyright laws. The opt-out/opt-in issue is fascinating, because it applies the privacy notice-and-consent framework to AI creation. This is unique, which is why I think OpenAI was caught off guard."

    He explains why the company, with its very deep pockets, may well be the target of a flood of litigation. "Copyright gives the owner a variety of exclusive rights under US copyright law, which includes the creation of derivative works (but not necessarily transformative works). All of these are legal terms that can be important in the real world, but not always. Fair use is a hot topic, but I believe that the only way to avoid copyright liability for Sora 2's output would be through parody or news-style uses."

    Santalesa did point out one factor in OpenAI's favor. "The Sora 2 app's Terms of Service expressly prohibit users from 'using our Services in a manner that infringes or violates anyone else's rights.' This prohibition is fairly standard in online terms of use and acceptable use policies. However, it highlights that the actual user also has responsibilities with regard to copyright."

    As Santalesa says, "It's too late to put the genie back in the bottle. The question is how to control and manage the genie."

    And that promised observation about human creativity? An OpenAI representative told us, "OpenAI's video creation tools are designed to support creativity, not replace it. They help anyone explore new ideas and express themselves."

    What about you? Have you experimented with Sora 2 or other AI video tools? Do you think creators should be held responsible for what the AI generates, or should the companies behind these tools share that liability? How do you feel about AI systems using existing creative works to train new ones? Does that feel like theft or evolution? And do you believe generative video is expanding creativity or eroding authenticity? Let us know in the comments below.

    Want more stories about AI? Sign up for Innovation, our weekly newsletter.


    You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.


