Google’s Will Smith double is better at eating AI spaghetti … but it’s crunchy?

Google’s Will Smith double is better than earlier AI models at eating fake spaghetti, though the results sound oddly crunchy. On Tuesday, Google launched Veo 3, a new AI model for video synthesis that can create a synchronized audio track, something no other major AI video generator could do before. From 2022 to 2024, we saw the first steps in AI video creation, but each video was silent, and most were very short. Now you can hear voices and sound effects in eight-second high-definition video clips.

Shortly after the launch of Veo 3, people began asking the obvious benchmarking question: How well does Veo 3 fake Oscar-winning actor Will Smith eating spaghetti?

A brief recap: The spaghetti benchmark for AI video dates back to March 2023, when we covered an early example of horrifying AI-generated video created with an open-source video synthesis model named ModelScope. Smith parodied the spaghetti example almost a full year later, in February 2024.

Here's what the original viral video looked like:

People forget that the ModelScope Smith example did not come from the best AI video generator of its time: a video synthesis system called Gen-2, from Runway, had already achieved superior results, though it was not yet publicly accessible. Still, the ModelScope output was funny and bizarre enough to stick in people's minds as an early example of video synthesis, which makes it a useful point of comparison as AI models improve.

AI developer Javi Lopez came to the rescue of curious spaghetti fans this week, running the Smith test with Veo 3 and posting the results to X. As you'll see below, the soundtrack is a bit odd: the fake Smith sounds like he's crunching the spaghetti.

The crunching is likely a glitch in Veo 3's experimental sound effects feature, probably because the data used to train Google's AI models included many examples of chewing paired with crunching sounds. Generative AI models work as pattern-matching prediction engines, and they need exposure to a wide variety of media in order to produce convincing new outputs. If a concept is under-represented or over-represented in the training data, you may see bizarre or nonsensical results.
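The over-representation effect described above can be sketched with a toy frequency model. This is purely illustrative and bears no resemblance to Veo 3's actual architecture; all names and data here are invented. The idea is simply that a prediction engine trained on skewed examples reproduces the skew:

```python
from collections import Counter, defaultdict

# Hypothetical training pairs: an action and the sound that accompanied it.
# "Crunch" is over-represented alongside "chew" in this made-up dataset.
training = [
    ("chew", "crunch"), ("chew", "crunch"), ("chew", "crunch"),
    ("chew", "slurp"),
    ("sip", "gulp"),
]

# Count how often each sound follows each action.
counts = defaultdict(Counter)
for action, sound in training:
    counts[action][sound] += 1

def predict_sound(action):
    """Return the most frequently observed sound for an action."""
    return counts[action].most_common(1)[0][0]

print(predict_sound("chew"))  # -> crunch
```

Because crunching dominates the invented "chew" examples, the toy model predicts a crunch even for soft foods like spaghetti, which is roughly the kind of statistical bias the article is describing.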

