Nvidia has released a new beta version of its RTX Kit that features "Neural Rendering" and thereby offers a glimpse into the potential future of gaming. DLSS already uses artificial intelligence to upscale games to higher resolutions and to generate up to three additional frames for every rendered frame, but Nvidia's upcoming AI shaders take things a step further.
One of the highlights of these new AI features is called RTX Neural Materials. This technology uses artificial intelligence to compress complex materials so that they can be rendered up to five times faster, which means games can rely on far more detailed assets. RTX Mega Geometry, meanwhile, updates the geometry of objects in real time, allowing path tracing to handle far more detailed scenes and thus produce more realistic lighting.
RTX Neural Texture Compression, on the other hand, is designed to compress thousands of textures in less than a minute. This can cut graphics memory usage to roughly one seventh of what conventionally compressed textures of the same quality require.
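To put that ratio into perspective, here is a rough back-of-the-envelope sketch; the texture count and average size are made-up placeholders rather than Nvidia's figures, and only the roughly 7x ratio comes from the claim above.

```python
# Back-of-the-envelope estimate of what a ~7x texture compression ratio
# means for VRAM. The texture count and average size are hypothetical
# placeholders, not figures published by Nvidia.

def texture_budget_mb(num_textures: int, avg_size_mb: float, ratio: float) -> float:
    """Total VRAM needed for a set of textures at a given compression ratio."""
    return num_textures * avg_size_mb / ratio

baseline = texture_budget_mb(num_textures=2000, avg_size_mb=4.0, ratio=1.0)  # conventional block compression
neural = texture_budget_mb(num_textures=2000, avg_size_mb=4.0, ratio=7.0)    # neural compression (~1/7th)

print(f"Conventional: {baseline:.0f} MB, neural: {neural:.0f} MB "
      f"({100 * (1 - neural / baseline):.0f}% less)")
# Conventional: 8000 MB, neural: 1143 MB (86% less)
```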
Up to 96% less VRAM usage at the cost of performance
The embedded video from Compusemble shows RTX Neural Texture Compression in action and demonstrates that AI-based compression can reduce the memory footprint of a model rendered at 1440p by up to 96% compared to the reference texture. However, the average frame rate drops by 5.6%, since decompressing the textures on the fly costs some computing performance. At higher resolutions, the performance gap widens further: with 4K textures, the average frame rate of an Nvidia GeForce RTX 4090 is almost cut in half.
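To translate those frame-rate drops into actual per-frame cost, here is a small sketch; the baseline FPS values are assumptions for illustration, while the percentage drops are the ones reported above.

```python
# Convert the reported frame-rate drops into per-frame time cost.
# The baseline FPS values (120 and 60) are assumptions for illustration;
# only the percentage drops come from Compusemble's measurements.

def frame_time_ms(fps: float) -> float:
    """Time budget per frame in milliseconds at a given frame rate."""
    return 1000.0 / fps

for label, base_fps, drop in [("1440p textures", 120.0, 0.056), ("4K textures", 60.0, 0.5)]:
    compressed_fps = base_fps * (1 - drop)
    overhead_ms = frame_time_ms(compressed_fps) - frame_time_ms(base_fps)
    print(f"{label}: {base_fps:.0f} -> {compressed_fps:.0f} FPS, "
          f"about +{overhead_ms:.2f} ms of work per frame")
```

The same relative drop costs far more frame time at a lower baseline frame rate, which is why the penalty is much more noticeable with 4K textures.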
Nevertheless, in games that noticeably suffer from the very limited VRAM of GPUs like the GeForce RTX 4060 (from $299 on Amazon), this texture compression could lead to significantly higher frame rates. Since Neural Texture Compression runs on the Tensor cores, performance should be better on GeForce RTX 5000 graphics cards than on older GPUs.
RTX Neural Faces could make digital human characters more realistic
Another goal of Nvidia's RTX Kit is to make human characters in games so realistic that they are almost indistinguishable from real people. To achieve this, Nvidia combines rasterized face data from photos or AI-generated images with 3D movement data to create the most realistic model possible. According to Nvidia, training such a model requires thousands of photos of a real person, taken from every angle and showing every emotion that is to be reproduced later.