In a recent live-streamed event, NVIDIA CEO Jensen Huang announced DLSS 3. DLSS stands for Deep Learning Super Sampling, NVIDIA's deep-learning-based super sampling technology. DLSS 3 adds a groundbreaking Optical Multi Frame Generation capability that delivers three things games depend on: higher frame rates, improved responsiveness, and better image quality.
Across a range of games and engines, DLSS 3 roughly doubles the rendering performance of GeForce RTX 40 series GPUs.
Meanwhile, NVIDIA vice president Bryan Catanzaro declared on Twitter: "Neural rendering has taken a very important step with the Ada architecture and DLSS 3.0! In addition to DL-driven super resolution, it uses optical flow, motion vectors, and deep learning to generate entire frames. 7 out of 8 pixels displayed by DLSS 3 come from neural rendering." That is enough to speed up rendering by as much as five times.
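To see where the "7 out of 8" figure comes from, the sketch below is a rough back-of-the-envelope accounting. It assumes (these ratios are assumptions for illustration, not official NVIDIA figures) that super resolution traditionally renders about one quarter of each output frame's pixels, for example a 1080p internal render upscaled to 4K, and that frame generation synthesizes every second displayed frame entirely:

```python
# Illustrative accounting for the "7 out of 8 pixels are neural" claim.
# Assumptions (not official NVIDIA numbers): super resolution renders 1/4 of
# the output pixels of a frame, and frame generation synthesizes every
# second displayed frame entirely.

OUTPUT_PIXELS = 3840 * 2160                            # one 4K output frame
rendered_per_upscaled_frame = OUTPUT_PIXELS // 4       # 1/4 rendered, 3/4 upscaled

# Over a pair of displayed frames: one upscaled frame + one fully generated frame.
displayed_pixels = 2 * OUTPUT_PIXELS
traditionally_rendered = rendered_per_upscaled_frame   # only in the first frame

neural_fraction = 1 - traditionally_rendered / displayed_pixels
print(f"Neurally generated share of displayed pixels: {neural_fraction:.3f}")  # 0.875, i.e. 7 of 8
```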
Although this remarkable technology is currently limited to a few dozen 3D games, neural rendering is advancing quickly and stands to unlock new potential in everyday consumer electronics.
In most cases, DLSS 3 delivers two to three times the performance of traditional rendering at 4K resolution. And while Nvidia currently leads the pack, it has competitors, including Intel's AI-powered upscaler XeSS (Xe Super Sampling) and AMD's RDNA 3 graphics architecture.
Games are leading the wave of neural rendering because they are well suited to machine learning techniques. Jon Barron, a senior researcher at Google, said: "Looking at small patches of an image and trying to guess what is missing is very well suited to machine learning. Machines are good at identifying similarities between frames, and frame rates are high enough that minor errors in motion are masked."
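DLSS 3's actual frame generation runs in a neural network fed by the GPU's optical flow accelerator, which is not reproduced here. As a classical stand-in for the idea Barron describes, the sketch below (an assumption of this article's editing, using OpenCV rather than NVIDIA's pipeline) estimates per-pixel motion between two consecutive rendered frames and warps one of them halfway toward the other to synthesize an in-between frame:

```python
# A classical (non-neural) sketch of motion-based frame interpolation, to
# illustrate synthesizing an in-between frame from two rendered ones.
# This is NOT how DLSS 3 works internally; it is an OpenCV stand-in.
import cv2
import numpy as np

def interpolate_midframe(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Warp frame_a halfway along the optical flow toward frame_b."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Dense optical flow: per-pixel motion vectors from frame_a to frame_b.
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)

    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))

    # Sample frame_a at positions shifted by half the motion vector
    # (a standard approximation for the midpoint in time).
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# Usage (file names are hypothetical):
# a, b = cv2.imread("frame_000.png"), cv2.imread("frame_001.png")
# mid = interpolate_midframe(a, b)
# cv2.imwrite("frame_000_5.png", mid)
```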
DLSS 3 is not perfect, however; it can produce artifacts during scene transitions. Barron and Catanzaro believe that feeding more training data to the neural rendering model can remedy this shortcoming.
Anton Kaplanyan, vice president of graphics research at Intel, believes neural rendering will make 3D content creation far more accessible. Today's social networks have already made 2D content creation a commodity: people press a button, take a photo, and share it with family and friends. Bringing that experience into 3D means attracting people who have never touched professional tools to become content creators.
In 2023, how quickly 3D neural rendering improves will shape its future. Researchers simply have far less experience with it than with traditional rendering. As Barron pointed out: "Computer graphics is amazing. It works very well, we have many ways to solve problems, and those methods may always apply."
The next question is when the graphics industry will accept 3D neural rendering as an alternative. The transition could be fraught, because betting on the wrong technology or the wrong architecture is costly.
Despite this, Catanzaro believes 3D neural rendering is already unstoppable. He said: "We will see many more extraordinary neural rendering technologies. Some of them can do shadows, refraction, and reflection. In the future we will see neural rendering methods that go well beyond DLSS. I think the graphics field will have multiple approaches running in parallel."
The appeal of neural rendering lies not only in its potential performance but also in its potential efficiency. DLSS 3 can be used to push frame rates up, such as the 530% gain it delivers in Portal with RTX, or to cut power consumption by capping the frame rate at a target. Either way, DLSS 3 lowers the cost of rendering each frame.
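As a rough, hypothetical illustration of that trade-off (all figures below are assumptions for the sketch, not measured numbers), consider what happens to average rendering power when half the displayed frames are generated rather than fully rendered and the output is capped at a fixed frame rate:

```python
# Hypothetical illustration of trading frame-rate headroom for power savings.
# All numbers are assumptions for the sketch, not NVIDIA measurements.

target_fps = 120                   # display frame-rate cap
generated_ratio = 0.5              # with frame generation, half the frames are synthesized
joules_per_rendered_frame = 0.9    # assumed energy to fully render one frame
joules_per_generated_frame = 0.15  # assumed (much cheaper) energy to generate one frame

def average_watts(fps: float, gen_ratio: float) -> float:
    """Average rendering power at a given output fps and generated-frame ratio."""
    rendered = fps * (1 - gen_ratio)
    generated = fps * gen_ratio
    return rendered * joules_per_rendered_frame + generated * joules_per_generated_frame

print(f"Native rendering at {target_fps} fps:  {average_watts(target_fps, 0.0):.0f} W")   # 108 W
print(f"Frame generation at {target_fps} fps: {average_watts(target_fps, generated_ratio):.0f} W")  # 63 W
```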
This should not be underestimated, because consumer electronics faces a serious problem: Moore's Law is, if not dead, then barely hanging on. Catanzaro said: "As you know, Moore's Law has lost momentum, and I personally think post-Moore graphics is neural graphics. For Nvidia, neural rendering has become a way to keep delivering large gains without having to multiply the number of transistors."
Intel's Kaplanyan, however, does not think Moore's Law is dying, though he agrees that neural rendering can improve efficiency. He said: "Chip scaling can be solved, and I agree that we have a great opportunity to use that energy and that silicon area more effectively, through machine learning algorithms, to produce new visual effects."
AMD, Nvidia, and Intel all work with device manufacturers on new consumer laptops and tablets, so efficiency has become a battleground for all three companies. For device makers, better efficiency means thinner, lighter devices with longer battery life, and more capable devices for users.
Clearly, 2023 will be a foundational year for neural rendering in consumer devices. NVIDIA's RTX 40 series, with DLSS 3 support, will ship in a variety of desktops and laptops; Intel is expected to expand its Arc graphics line with the upcoming Battlemage architecture; and AMD will launch more variants built on the RDNA 3 architecture.
These launches lay the groundwork for a graphics revolution. It won't happen overnight and will take work, but as consumers demand richer visuals and more content creation from ever smaller and thinner devices, neural rendering may be the best way to deliver it.