After Sora, yet another AI video model has arrived, and it is so impressive that practically everyone is praising it!
With it, Gao Qiqiang, the villain of "The Knockout", can transform into legal scholar Luo Xiang and lecture everyone on the law (tongue firmly in cheek).
This is Alibaba’s latest audio-driven portrait video generation framework, EMO (Emote Portrait Alive).
With it, you can generate an AI video with vivid expressions from just a single reference image and a piece of audio (speech, singing, or rap all work). The length of the resulting video is determined by the length of the input audio.
You can have the Mona Lisa, that veteran test subject of AI effects, recite a monologue:
Here is a young, handsome Leonardo DiCaprio ("Little Leo", as Chinese netizens call him); even in a fast-paced rap performance, the lip movements keep up without a problem:
Even Cantonese lip-sync holds up, letting Leslie Cheung sing Eason Chan's "Unconditional":
In short, whether it is making portraits sing (different styles of portraits and songs), making portraits speak (in different languages), or pulling off all kinds of cross-actor performances, EMO's results left us stunned for a moment.
Netizens lamented: "We are entering a new reality!"
The 2019 "Joker" delivering lines from the 2008 "The Dark Knight"
Some netizens have even started pulling EMO-generated videos apart and analyzing the results frame by frame.
As shown in the video below, the protagonist is the AI lady generated by Sora; this time she sings "Don't Start Now" for you.
Commenters analyzed:
The consistency of this video is even better than before!
In the more than one minute video, the sunglasses on Ms. Sora’s face barely moved, and her ears and eyebrows moved independently.
The most exciting thing is that Ms. Sora’s throat seems to be really breathing! Her body trembled and moved slightly while singing, which shocked me!
Having said that, with EMO being such a hot new technology, comparisons with similar products were inevitable. Just yesterday, the AI video generation company Pika also launched a lip-sync feature that dubs video characters and matches their lip movements at the same time, and it made quite a splash.

How do the two compare? We will put the results side by side here. After the comparison, netizens in the comment section concluded that Pika got beaten by Alibaba.
EMO's team released the paper and announced that the code will be open source. However, although it is billed as open source, the GitHub repository is still empty. Then again, even as an empty repo it has already collected more than 2.1k stars.
That has left netizens genuinely anxious, about as anxious as the "anxious king" meme.

Different architecture from Sora

As soon as the EMO paper came out, many people in the field breathed a sigh of relief: its technical route differs from Sora's, which shows that copying Sora is not the only way forward. EMO is not built on a DiT-style architecture; that is, it does not replace the traditional UNet with a Transformer. Its backbone network is instead modified from Stable Diffusion 1.5.
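To make concrete what "backbone modified from Stable Diffusion 1.5" means, here is a minimal sketch (ours, not the authors' code) that loads the stock SD 1.5 UNet with the diffusers library and runs a single denoising step; EMO's actual backbone would add reference attention, audio attention, and temporal layers on top of something like this. The checkpoint id is the commonly used community one and is an assumption, not confirmed by the paper.

```python
# Sketch only: load the vanilla Stable Diffusion 1.5 UNet that a framework
# like EMO reportedly starts from, then run a single denoising step.
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # commonly used SD 1.5 checkpoint (assumption)
    subfolder="unet",
)

# One denoising step takes a noisy latent, a timestep, and a conditioning
# sequence (text embeddings in vanilla SD; audio-derived features in EMO).
latents = torch.randn(1, 4, 64, 64)    # 64x64 latent, roughly a 512x512 image
timestep = torch.tensor([999])         # a late, very noisy diffusion step
cond = torch.randn(1, 77, 768)         # placeholder conditioning sequence
with torch.no_grad():
    noise_pred = unet(latents, timestep, encoder_hidden_states=cond).sample
print(noise_pred.shape)                # torch.Size([1, 4, 64, 64])
```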
Specifically, EMO is an expressive audio-driven portrait video generation framework that can generate video of any duration, determined by the length of the input audio.
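As a tiny illustration of how the output length follows the audio (this is not EMO's code; the file name and frame rate are made up for the example), the frame count is simply the audio duration times the target frame rate:

```python
# Tiny illustration (not EMO code): the clip length is dictated by the audio.
import wave

with wave.open("speech.wav", "rb") as f:   # hypothetical input file
    audio_seconds = f.getnframes() / f.getframerate()

fps = 25                                   # assumed target frame rate
num_frames = round(audio_seconds * fps)
print(f"{audio_seconds:.1f}s of audio -> {num_frames} frames at {fps} fps")
```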
The framework consists of two main stages (a toy sketch of the data flow follows the description below):
In the first stage, frame encoding, a UNet network called ReferenceNet extracts features from the reference image and from motion frames.
In the second stage, the diffusion process, a pre-trained audio encoder processes the audio embedding, while a facial region mask combined with multi-frame noise governs the generation of the facial imagery.
The backbone network then drives the denoising operation. Two forms of attention are applied inside it: reference attention, which maintains the character's identity consistency, and audio attention, which modulates the character's movements.
In addition, temporal modules operate along the time dimension and adjust the speed of motion.
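To tie the two stages together, here is the toy, illustrative-only sketch promised above. Every class below is a small stand-in of a couple of layers, not the paper's ReferenceNet, audio encoder, or backbone; it exists only to show how reference features, audio features, the two attention paths, and a temporal module connect.

```python
# Illustrative-only sketch of the two-stage data flow described above.
# Every module here is a toy stand-in, not the paper's actual components.
import torch
import torch.nn as nn


class ReferenceNet(nn.Module):
    """Stage 1 stand-in: extract appearance features from the reference image."""
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = nn.Conv2d(3, dim, kernel_size=3, padding=1)

    def forward(self, ref_image):
        return self.encoder(ref_image)


class AudioEncoder(nn.Module):
    """Stand-in for a pre-trained audio encoder producing per-frame embeddings."""
    def __init__(self, dim=64):
        super().__init__()
        self.proj = nn.Linear(80, dim)   # e.g. from 80 mel-spectrogram bins

    def forward(self, mel):              # mel: (frames, 80)
        return self.proj(mel)


class Backbone(nn.Module):
    """Denoising stand-in: fuses noisy latents with reference and audio
    features via two cross-attention paths, then a toy temporal module."""
    def __init__(self, dim=64):
        super().__init__()
        self.ref_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.audio_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.temporal = nn.GRU(dim, dim, batch_first=True)

    def forward(self, noisy_latents, ref_feats, audio_feats):
        x, _ = self.ref_attn(noisy_latents, ref_feats, ref_feats)    # identity consistency
        x, _ = self.audio_attn(x, audio_feats, audio_feats)          # audio-driven motion
        x, _ = self.temporal(x)                                      # temporal smoothing
        return x                                                     # "denoised" latents


# Toy forward pass: reference image + audio -> per-frame latents.
dim, num_frames = 64, 8
ref_image = torch.randn(1, 3, 32, 32)
mel = torch.randn(num_frames, 80)
noisy_latents = torch.randn(1, num_frames, dim)

ref_feats = ReferenceNet(dim)(ref_image).flatten(2).transpose(1, 2)  # (1, H*W, dim)
audio_feats = AudioEncoder(dim)(mel).unsqueeze(0)                    # (1, T, dim)
denoised = Backbone(dim)(noisy_latents, ref_feats, audio_feats)
print(denoised.shape)                                                # torch.Size([1, 8, 64])
```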
In terms of training data, the team built a large and diverse audio-video dataset containing more than 250 hours of footage and more than 15 million images.
[Figure: specific features of the final implementation]
In quantitative comparisons, EMO also improves substantially over previous methods and achieves SOTA; it lags only slightly on SyncNet, the metric that measures lip-sync quality.
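For context, SyncNet-style metrics judge lip sync by comparing an embedding of the mouth region against an embedding of the corresponding audio window. The sketch below only illustrates that idea with random tensors standing in for real encoder outputs; it is not the actual SyncNet model or its scoring code.

```python
# Conceptual sketch of a SyncNet-style lip-sync score (hypothetical encoders,
# not the real SyncNet): embed mouth crops and the matching audio window,
# then use their similarity as a synchronization confidence.
import torch
import torch.nn.functional as F

def lip_sync_confidence(mouth_embed: torch.Tensor, audio_embed: torch.Tensor) -> float:
    """Cosine similarity between a mouth-region embedding and the audio
    embedding of the same time window; higher means better sync."""
    return F.cosine_similarity(mouth_embed, audio_embed, dim=-1).mean().item()

# Toy usage with random embeddings standing in for real encoder outputs.
mouth_embed = torch.randn(1, 512)
audio_embed = torch.randn(1, 512)
print(f"sync confidence: {lip_sync_confidence(mouth_embed, audio_embed):.3f}")
```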
Compared with other methods that do not rely on diffusion models, EMO is more time-consuming.
Also, since no explicit control signals are used, other body parts such as hands may be generated inadvertently; a potential fix is to introduce control signals dedicated to those body parts.
Finally, let’s take a look at the people on the team behind EMO.
The paper shows that the EMO team comes from Alibaba Intelligent Computing Research Institute.
There are four authors, namely Linrui Tian, Qi Wang, Bang Zhang and Liefeng Bo.
Among them, Liefeng Bo is the current head of the XR laboratory of Alibaba Tongyi Laboratory.
Dr. Liefeng Bo graduated from Xidian University and did postdoctoral research at the Toyota Technological Institute at Chicago and at the University of Washington. His research focuses mainly on machine learning, computer vision, and robotics, and his Google Scholar citations exceed 13,000.
Before joining Alibaba, he first served as chief scientist at Amazon’s Seattle headquarters, and then joined JD Digital Technology Group’s AI laboratory as chief scientist.
In September 2022, Bo Liefeng joined Alibaba.
EMO is not the first time Alibaba has achieved success in the AIGC field.
There is OutfitAnyone, which does one-click AI outfit swapping.
There is also AnimateAnyone, which set cats and dogs all over the world dancing the viral "bath dance".
This is the one below:
Now that EMO has launched, many netizens are marveling that Alibaba has quietly accumulated quite a bit of technology in this area.
If all these technologies are combined now, the effect will be...
I don’t dare to think about it, but I’m really looking forward to it.
In short, we are getting closer and closer to "give AI a script and get the whole movie out".
Sora represents a step-change breakthrough in text-driven video synthesis.
EMO likewise takes audio-driven video synthesis to a new level.
Although the two tasks are different and the specific architecture is different, they still have one important thing in common:
Neither has an explicit physical model in the middle, yet both simulate physical laws to a certain extent.
Therefore, some people believe this contradicts LeCun's insistence that "modeling the world for actions by generating pixels is wasteful and doomed to failure," and instead supports Jim Fan's idea of a "data-driven world model."
Various methods have failed in the past, and today's successes may really bear out "The Bitter Lesson" from Sutton, the father of reinforcement learning: brute-force scale works wonders.
Let AI discover as humans do, rather than hard-coding what humans have discovered.
Breakthrough progress ultimately comes from scaling up computation.
Paper: //m.sbmmt.com/link/a717f41c203cb970f96f706e4b12617b
GitHub: //m.sbmmt.com/link/e43a09ffc30b44cb1f0db46f87836f40
Reference Link:
[1]//m.sbmmt.com/link/0dd4f2526c7c874d06f19523264f6552