Adobe Brings Generative AI Video Editing Tools For Premiere Pro Users: All Details

admin | 04-18 16:20

Adobe has good news for video editors: it has announced plans to bring new tools powered by generative AI (artificial intelligence) to its popular video editing platform, Premiere Pro. These tools, powered by Adobe's Firefly AI and including a new video generation model, will allow editors to add or remove objects in a video using a simple text prompt. The new model is expected to become available to Premiere Pro users starting in May 2024.

Adobe is also partnering with OpenAI and Runway, and it plans to add more AI-backed video editing capabilities to Premiere Pro in the coming months. These include a Generative Extend tool, which adds a few extra seconds to footage, and the ability to generate B-roll.

The new object addition and removal tool in Premiere Pro lets editors quickly select an object, track it, and remove or alter it with a single text input, such as "quickly add set dressings such as photorealistic flowers or a painting on a desk" or "change an actor's wardrobe." The separate Generative Extend feature can add frames to a video to make it longer, which is useful for tasks such as fine-tuning edits or building a smoother transition between shots.

These capabilities are driven both by in-house Firefly models and by third-party video models such as OpenAI's Sora, Pika, and Runway's model. In Adobe's preview, Pika's model can extend shots, while Sora or Runway can be used when generating B-roll. Lastly, Adobe demonstrated a new text-to-video model that can produce entirely new footage within Premiere Pro from a typed text prompt or an uploaded reference image.

Moreover, Adobe will soon introduce a feature called Content Credentials, which uses a watermark-style label to make it easy to identify whether content is original or created by AI.

Adobe is framing the initiative as a way to foster partnerships with third-party AI companies, describing the third-party integrations shown in its video preview as an "early exploration" of what these might look like in the future.

"The Content Credentials labels can be applied to the generated clips to identify which AI models have been used to generate them," Adobe emphasised. In addition, Adobe demonstrated new audio workflows, including effect badges, interactive fade handles, essential sound badges with audio categories, and a redesigned waveform in the timeline.


