The Future of Marketing briefing covers the latest marketing news for Digiday+ subscribers and is sent out via email every Friday morning at 10 a.m. ET.
The latest thing keeping marketers awake at night when it comes to AI isn’t deepfakes or disinformation. It’s SpongeBob.
SpongeBob has become the mascot of the AI-generated videos flooding feeds, videos that blur the lines between parody and copyright violation, harmless fun and misinformation. In the panic that followed, it became clear that AI-generated videos, like news and user-generated content before them, exist on a sliding scale from harmless to harmful, and people watch them.
Where do marketers draw the line? While it is fashionable to label everything “AI slop,” marketing professionals are learning that it’s not quite that simple. AI slop does exist, but using it as a blanket term for all AI-generated material misses the point. What one person considers slop, another finds must-watch.
It’s all about discernment: knowing when automation is noise and when it still serves the story.
“From the conversations we are having with agencies and marketers, the general opinion is that this topic is on their radar, but they do not have answers yet,” said Steven Filler, U.K. country manager at digital ad company ShowHeroes. “It has reached a point where marketers are realizing that they must act quickly given the amount of content that is being created.”
The same old brand safety debate is resurfacing, and as before, it’s sending marketers back to the cottage industry of measurement and verification companies built to help them make those calls.
Zefr hosted a series of workshops with agency leaders and marketers to help them make the most of the situation. It broke down the types of AI-generated content driving views across platforms, and worked with these executives to determine what they would be comfortable appearing alongside and what they would rather avoid.
These decisions are not permanent, however. What seems safe today could be problematic tomorrow, with AI-driven trends emerging by the hour. Marketers must keep pace with the speed of content production.
This is why Zefr created a tool that tracks AI-generated content appearing in ad campaigns, much as traditional brand safety systems flagged risky content across platforms. It lets marketers see where their ads appear and whether that placement feels like a benefit or a liability.
Andrew Serby, chief executive officer at Zefr, said that this will be the next big problem in brand safety.
It won’t be just a safety issue for long. It will be a test of how much chaos, creativity, and algorithmic weirdness brands are willing to tolerate. Anudit Vikram, chief product officer of digital video optimization company Channel Factory, said that whatever signals content emits, from credibility to virality or contention, should be judged independently of the fact that it was generated by an AI. “As long as these signals align with a marketer’s brand, it’s up to them to decide whether or not they wish to be associated with this content.”
His team is in the early stages of helping marketers do just that.
The first step is to show them whether a video was made by AI. It sounds simple enough: AI videos often have telltale seams. But the fidelity is improving quickly, so the monitoring stack must keep up, detecting everything from lip-sync anomalies and frame-level artifacts to metadata patterns, watermarks, and audio cues.
Channel Factory then folds that AI classification into a broader analysis of the content, the channel, and factors such as age, language, and gender. The goal is to give marketers a clearer picture of which AI-generated videos they would want their ads to run against and which to avoid, before SpongeBob, or whatever comes after it, becomes the next brand safety disaster. It may not be today or tomorrow, but that moment will come. It always does.
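To make the detection step concrete, here is a minimal sketch of how multiple weak signals (lip-sync anomalies, frame artifacts, watermarks, and so on) could be combined into a single AI-likelihood score. Everything here, including the signal names, weights, and the 0.6 threshold, is an illustrative assumption, not Channel Factory’s or Zefr’s actual method.

```python
# Toy multi-signal classifier: each detector emits a confidence in [0, 1]
# that a video is AI-generated; we take a weighted average of whatever
# signals are available. Weights and threshold are hypothetical.

SIGNAL_WEIGHTS = {
    "lip_sync_anomaly": 0.25,   # mismatch between mouth motion and audio
    "frame_artifacts": 0.25,    # frame-level rendering glitches
    "metadata_pattern": 0.20,   # generator fingerprints in file metadata
    "watermark": 0.20,          # visible or invisible provenance marks
    "audio_cues": 0.10,         # synthetic-voice tells
}

def ai_likelihood(signals: dict) -> float:
    """Weighted average over the detection signals that are present."""
    present = {k: v for k, v in signals.items() if k in SIGNAL_WEIGHTS}
    if not present:
        return 0.0
    total_weight = sum(SIGNAL_WEIGHTS[k] for k in present)
    return sum(SIGNAL_WEIGHTS[k] * v for k, v in present.items()) / total_weight

def classify(signals: dict, threshold: float = 0.6) -> str:
    """Flag a video for downstream suitability review."""
    return "likely AI" if ai_likelihood(signals) >= threshold else "likely human"

# Example: strong watermark and lip-sync evidence, weak artifact evidence.
video = {"watermark": 0.9, "lip_sync_anomaly": 0.8, "frame_artifacts": 0.2}
print(classify(video))  # -> likely AI
```

In a real pipeline this score would be only the first gate; the suitability decision described above (channel, audience, brand alignment) happens afterward, and the weights would be learned rather than hand-set.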
“Most are just trying to understand the future of AI-generated video,” said Lindsey Gamble, who is having these conversations with brands now. There are too many risks to take right now: brand safety, where content could appear, and possible copyright violations. So brands are waiting to see what their peers do, and waiting for platforms and tools to develop solutions that address those risks.
The industry that has sprung up around this does more than manage risk; it monetizes it. Businesses now exist to help brands navigate a world built on anxiety, where one viral post or the wrong ad adjacency can sink a reputation overnight. That’s not cynicism; it’s the cost of doing business in an environment where technology is evolving faster than the safeguards designed to contain it.
“We use a framework to help clients determine how far they are willing to push the technology and what their guardrails will be,” said Salazar Llewellyn, editorial director at the ad agency DEPT. “Our approach is always human-led, craft-first and augmented with AI. You can’t automate good judgment. You can use data as an informant, but it is essential to understand which content is good and worthy of your brand’s association, and which is just flooding platforms and feeds with ‘slop.’”
In reality, AI use isn’t always apparent. Creators do not always adhere to disclosure rules, which are inconsistent where they exist at all. That opacity makes defining what is acceptable and what is not even harder for marketers, especially as AI content becomes harder to detect and cheaper to produce.
YouTube, as usual, provides the clearest view of this tension. It’s a place where faceless channel creators build legitimate direct-to-consumer businesses alongside those who use the same tools to farm views within YouTube’s accepted boundaries.
This has led to a flood of AI-assisted channels of wildly varying quality, from faceless creators such as Kurzgesagt, whose videos blend precision and craft, to a sea of others utterly detached from any editorial judgment, or even the intent to tell the truth.
The spectrum will only get wider. Platforms will ensure it. The more tools that are released, the more people can create content. And the more content is created, the more engagement and revenue platforms can capture. The machine feeds itself.
The moment of reckoning is now
But for the time being, most marketers remain in watch-and-wait mode. Many were caught off guard by the launch of OpenAI’s Sora application, which helped socialize AI video creation. They saw its risks and rewards in that moment, especially as those videos spread across the internet and were monetized elsewhere.
Instead of reacting immediately, they are taking a step back: building frameworks, refining their theses, and drafting policies that will shape their advertising strategies in the coming year.
Serby said, “I’d be surprised if brands don’t implement these policies into their ad campaigns from Q1 of next year.”
— Reporting by Seb Joseph and Krystal Scanlon
YouTube moves to rein in AI after OpenAI’s Sora sparks a backlash
AI has divided creators, particularly since the launch of OpenAI’s standalone Sora application. On one side are those who use the tools to scale and create richer content. On the other are creators who feel AI content runs against everything they stand for: authenticity. YouTube, for its part, is trying to strike a balance. Its latest moves suggest a platform eager to differentiate itself from the chaos surrounding OpenAI, and to reassure its core constituency: creators.
“We believe there’s some merit to taking the responsible path,” said Sarah Jardine, a senior strategist at SEEN Connects. “If we don’t protect creator IP, then we run the risk of homogenising creativity and culture.”
Jardine is referring to YouTube policy updates including its crackdown on low-effort AI-generated slop and the roll-out of a likeness-detection tool for creators in its partner program. The tool flags videos that appear to use a creator’s image, whether through altered or synthetic versions, and allows them to request removal. It is a significant first step toward giving creators control over how their likenesses are used in an age of generative video.
OpenAI, on the other hand, has taken a different approach. Sora launched allowing users to create videos of real people, living and dead, without consent. The choice quickly backfired when users began creating disrespectful depictions of Martin Luther King Jr. The company’s belated decision to “pause” such generations, at the request of King’s estate, highlighted a larger issue: its policies were built in real time in response to PR crises rather than on principle.
Varun Shetty, vp of media partnerships at OpenAI, explained its stance in an emailed statement: “We’re engaging with studios and rightsholders, listening and learning from the way people are using Sora 2. We see this as a great opportunity for rightsholders and fans to connect and share creativity. We’re removing generated characters from Sora’s public feed and we’ll be rolling out updates to give rightsholders greater control over their characters and the way fans can create with them.”
YouTube’s play, on the other hand, looks less like moral grandstanding than it does like pragmatic ecosystem management.
By building safeguards around creators’ likenesses and tightening the monetisation of low-effort AI work, YouTube is protecting the quality of its ecosystem and, most importantly, the relationships between creators, audiences and brands, said Billion Dollar Boy co-founder Thomas Walters. The approach contrasts sharply with OpenAI’s recent struggles to define a coherent IP and consent policy.
– Krystal Scanlon.
Numbers you need to know
- $150 billion: Amount wiped off Google’s market value on Wednesday after the launch of OpenAI’s ChatGPT Atlas
- 17.2%: Netflix’s quarter-to-quarter revenue increase, even as it missed Wall Street’s earnings expectations ($11.51 billion), sending its stock price down 8%
- 61%: Percentage of TikTok users who have purchased via TikTok Shop
- 36%: Percentage of marketers who say UGC is very important to their social media strategies, compared with just 2% who feel the same about AI content
From CEO Sam Altman voicing his dislike of advertisements to the company hiring an ad platform engineer, Digiday explains the steps that made OpenAI’s U-turn on ads inevitable.
TikTok’s continued uncertainty in the U.S. has marketers rethinking their budgets for next year
Although TikTok’s future in the U.S. is somewhat secure (China still needs to sign off on the deal), some marketers are already taking a cautious approach to 2026 until they have a clearer picture of the situation.
Amazon’s next frontier for advertising: The cloud infrastructure that it runs on
Securing ad dollars is always a nice bonus, but Amazon is betting big on its latest launch: a managed cloud platform built to handle the high-speed, data-intensive transactions that make programmatic advertising feasible.
Google AdX has begun striking deals
Despite AdX’s reputation as a tough nut to crack, Google’s Ad Exchange unit has been offering media agencies post-auction discounts since January of this year.
The Wall Street Journal reported on how OpenAI CEO Sam Altman played Silicon Valley’s tech giants off against each other to fuel his own agenda and growth plan.
Reality check on agents
This year was initially dubbed “the year of agents,” which sparked concerns about AI taking over jobs. The Information reported that, with the promise of full autonomy unfulfilled, industry leaders are lowering their expectations for how quickly and how much it will impact business capabilities.
Five different ways to think about OpenAI’s browser
Casey Newton of Platformer gave a realistic assessment of the browser OpenAI launched on Wednesday, comparing it with rivals such as Google Chrome.
Paramount is looking to compete with Amazon and Google for digital advertising dollars, but must first navigate a difficult sales situation.
According to Variety, under David Ellison, Paramount’s chief revenue officer Jay Askinasi (a former Roku senior sales executive) will be responsible for ad sales operations.
