This is the video editing tool for the age of generative AI.
Adobe is about to have the most advanced generative AI video creation capabilities.
Today, Adobe announced an update roadmap for Premiere Pro that includes plug-ins for third-party AI video generation models. Whether it is OpenAI's Sora, Runway's Gen-2, or Pika, these models will soon appear inside Adobe's tools, ready for everyday use.
Meanwhile, powered by Adobe's own generative model Firefly, users will be able to add or remove content directly within video footage.
A shot where the background feels underwhelming during a transition? Just have OpenAI's Sora generate a segment automatically.
## Sora can generate up to three video variations per natural-language prompt in a single API call
The new tools can also be used to "extend" the length of existing footage out of thin air, although this part does not appear to be controlled by a text prompt.
Since OpenAI unveiled Sora in February this year, the public has only seen official demos of AI-generated videos on social accounts such as TikTok; the actual tool is still in "coming soon" status. The preview of the new Premiere Pro may be a sign that the technology is about to become broadly available.
For Adobe's 33 million paying Creative Cloud users, this is an even bigger deal, and it may bring about the most radical and far-reaching changes the design field has seen.
With this feature, Premiere Pro users will be able to edit, process, and mix AI footage alongside live-action video captured with traditional cameras. Imagine filming an actor performing a scene in which they flee from a monster, then using AI to generate the monster itself: no props, costumes, or extra actors required, and the two clips can be opened in the same editor and combined into the same video file.
The same is true for animation created with more established processes, from computer animation to hand-drawn frames, which can be blended with matching AI footage in the same Premiere Pro project.
It is worth mentioning that Adobe's demonstration once again highlighted the generation gap between OpenAI's Sora and similar products: the video it produces looks far better than what other currently available tools can manage.
Unlike many of Adobe's previous Firefly-related announcements, the new video generation tools do not yet have a release date; Adobe says only that they will launch this year.
## Bringing in advanced third-party AI models is a forward-looking bet on video processing

According to Adobe, the idea is to give Premiere Pro users more options. Adobe also says its Content Credentials labels can be applied to these generated clips to identify which AI models were used to create them.
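Content Credentials are built on the open C2PA provenance standard, which embeds signed "assertions" about how a piece of media was made. Adobe has not published the exact schema it will attach to AI-generated clips in Premiere Pro, so the following Python sketch only mirrors the general shape of a C2PA actions assertion; the field values are illustrative assumptions, not Adobe's actual payload.

```python
# Illustrative only: the rough shape of a C2PA-style actions assertion recording
# that a clip was AI-generated and which model produced it. Adobe's actual
# Content Credentials payload for Premiere Pro has not been published.
import json

ai_generation_assertion = {
    "label": "c2pa.actions",
    "data": {
        "actions": [
            {
                "action": "c2pa.created",
                # IPTC digital source type used for fully AI-generated media
                "digitalSourceType": (
                    "http://cv.iptc.org/newscodes/digitalsourcetype/"
                    "trainedAlgorithmicMedia"
                ),
                # Hypothetical value: the tool/model that generated the clip
                "softwareAgent": "OpenAI Sora (via Premiere Pro plug-in)",
            }
        ]
    },
}

print(json.dumps(ai_generation_assertion, indent=2))
```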
Adobe Premiere Pro (often shortened to "PR") has been one of the world's most popular video editing programs since it was first released for the Mac in late 1991, used by major Hollywood editors and independent filmmakers alike. It is now about to undergo a change unprecedented in its 33-year history.
It should be noted that Adobe has not said when these third-party AI video generators will actually be integrated into Premiere Pro, and the details do not appear to be fully settled; many of the third-party tools will also require paid subscriptions once they launch.
In addition, Adobe continues to promote its own in-house generative AI products, such as Firefly and Generative Fill, emphasizing that its models are trained only on data it owns or has the rights to use, such as content contributed by Adobe Stock creators (much to the chagrin of some Adobe Stock photographers and artists).
Adobe's business centers on multimedia creation and creative software, and once generative AI took off, the company moved quickly to join the fray rather than be left behind.
Last week, Bloomberg reported that Adobe had trained Firefly in part on images generated by competitor Midjourney, a tool that itself builds on the open-source Stable Diffusion model and was trained on scraped, copyrighted web data.
Today, Adobe announced that a version of the Firefly text-to-image generation model will be integrated into Premiere Pro "later this year," providing a new set of "generative AI workflows" and features.
For example, the Generative Extend feature will let video editors and filmmakers "seamlessly add frames to make video clips longer" without having to shoot any new footage, which can be a very useful and money-saving capability. Adobe also says it will allow for smoother transitions, for instance by extending a clip that ends too abruptly so that it lingers on a moment or action a little longer.
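Adobe has not described how Generative Extend works internally. As a hedged sketch of the general idea only, continuing a clip from its final frame with an open image-to-video model, here is what that could look like with Stable Video Diffusion via the Hugging Face diffusers library; the file names are placeholders, and this is not Adobe's implementation.

```python
# Rough, unofficial illustration of "extending" a clip: feed the clip's last
# frame to an open image-to-video model and append the generated frames.
# This is NOT how Adobe's Generative Extend works; it only sketches the idea.
# Requires: pip install diffusers transformers accelerate torch
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# "last_frame.png" is a placeholder: the final frame exported from the clip
# you want to lengthen. SVD expects roughly 1024x576 input.
last_frame = load_image("last_frame.png").resize((1024, 576))

result = pipe(last_frame, decode_chunk_size=8, num_frames=25)
export_to_video(result.frames[0], "clip_extension.mp4", fps=7)
```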
Firefly for Video will also let Premiere Pro users perform intelligent object detection and removal: essentially highlighting objects in a video (props, characters, clothing, scenery, and so on) and letting the AI model track them across frames. Users can also use generative AI to swap these objects for new ones, quickly change a character's clothing or props, or remove objects entirely across multiple clips and camera angles.
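Firefly's object removal is generative and works across clips and angles, and Adobe has not published its internals. For intuition only, here is a minimal classical version of the same track-and-remove loop using OpenCV tracking plus inpainting; the input path and bounding box are placeholder assumptions, and the result is nowhere near what a generative model can produce.

```python
# Minimal sketch of "track an object and paint it out" with classical OpenCV
# tracking plus Telea inpainting. Requires opencv-contrib-python.
# "input.mp4" and the initial bounding box are placeholder assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")
ok, frame = cap.read()
assert ok, "could not read video"

bbox = (200, 150, 80, 120)  # x, y, w, h of the object to remove (assumed)
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, bbox)

fps = cap.get(cv2.CAP_PROP_FPS) or 30
h, w = frame.shape[:2]
out = cv2.VideoWriter("removed.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while ok:
    found, bbox = tracker.update(frame)
    if found:
        x, y, bw, bh = (int(v) for v in bbox)
        mask = np.zeros((h, w), dtype=np.uint8)
        mask[y : y + bh, x : x + bw] = 255
        # Fill the masked region from surrounding pixels.
        frame = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
    out.write(frame)
    ok, frame = cap.read()

cap.release()
out.release()
```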
Finally, Firefly for Video will also include a text-to-video generator, putting it in direct competition with Sora, Runway, Pika, and Stable Video Diffusion.
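Neither Firefly's text-to-video model nor Sora is publicly available yet, so as an illustration of what this category looks like in code, here is a short sketch using the openly released ModelScope text-to-video pipeline in Hugging Face diffusers; the model ID and prompt are examples, not anything Adobe ships.

```python
# Illustrative only: an open text-to-video pipeline (ModelScope via diffusers),
# standing in for the closed models discussed above.
# Requires: pip install diffusers transformers accelerate torch
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
)
pipe.to("cuda")

prompt = "an astronaut riding a horse on the beach, cinematic lighting"  # example
# Recent diffusers versions return a batch of videos, hence frames[0].
frames = pipe(prompt, num_inference_steps=25, num_frames=24).frames[0]
export_to_video(frames, "generated.mp4", fps=8)
```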
Although still only in preview, Adobe’s next-generation AI integration for Premiere Pro is already winning applause from filmmakers and social media creatives.
"This has been very helpful for my work on live-action films," said director Kevin K. Shah.
https://venturebeat.com/ai/adobe-to-add-ai-video-generators-sora-runway-pika-to-premiere-pro/
https://www.theverge.com/2024/4/15/24130804/adobe-premiere-pro-firefly-video-generative-ai-openai-sora
https://www.adobe.com/products/premiere/ai-video-editing.html