
Luma Labs' Modify Video tool lets you reimagine scenes and reshoot them without actually reshooting anything

(Image credit: Luma Labs)
The possibilities are endless, from subtle wardrobe changes to complete magical scene overhauls.

The Modify Video feature does for footage what the best Photoshop tools do for still images: it can change the setting, style, and even the characters in a scene without you having to re-shoot, reanimate, or even stand up.

According to the company, the AI video editing preserves the details that matter most, including actor movement, framing, and timing, while letting you change just about everything else.

Decide the outfit you're wearing isn't really you, and it becomes a completely different set of clothes. A blanket fort turns into a ship on a stormy ocean, and the friend flailing around on the ground becomes an astronaut in outer space, all without green screens or editing rooms.

Luma combines advanced motion and performance recording, AI styling and what it calls “structured presets” to create a wide range of reimagined video.

To get started, you just upload a video up to 10 seconds long and select from the Adhere, Flex, or Reimagine presets.

The Adhere preset is the most subtle; it focuses on small changes, such as clothing adjustments or different furniture textures. Flex can do that too, but it can also change the video's style and lighting. Reimagine can transform the video into another world entirely, turn people into cartoons, or send someone standing on a board into a hoverboard race.


Flexible AI video

You can also choose to use reference images or frame selections from your video, making the process more flexible and user-friendly.

While AI video modification is not unique to Luma’s software, the company claims that it outperforms competitors like Runway and Pika because of its performance fidelity. The altered videos maintain the actor’s facial expressions, body language, and lip sync. The final result is a seamless whole, not a collection of pieces.

Modify Video has its limitations. For now, clips are capped at 10 seconds each, which keeps wait times manageable, but if you want to make a longer film you will need to plan ahead and figure out how to weave separate shots into one piece.

But features like the ability to isolate elements within a shot are still a big deal. You may have a great performance, but the character is supposed to be in a completely different setting. You can keep the performance and simply swap the garage for the sea, and the actor's legs for a fish tail.

Dreams to Reality

These AI tools can rework footage quickly and thoroughly, and they aren't a gimmick. The models track performances and timing in a way I haven't seen before. They don't truly understand pacing, structure, or continuity, but they are excellent at mimicking them.

Although technical and ethical limitations keep Luma Labs from recreating cinema wholesale, these tools will appeal to plenty of amateur video producers. While I don't think Modify Video will be as popular as photo filters, Luma's demos show off some fun ideas that you may want to try.

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of technology and the world. He spent five years as head writer for Voicebot.ai, where he was at the forefront of reporting on large language models and generative AI. Since then, he has become an expert on generative AI products, including OpenAI's ChatGPT, Anthropic's Claude, Google Gemini, and other synthetic media tools. His experience spans print, digital, and broadcast media, as well as live events. He continues to tell the stories people want and need to hear about the rapidly changing AI space and its impact on their lives. Eric is based in New York City.
