CheckMag | Radeon cards and 3D rendering jobs do not mix, and hobbyists and artists aren't amused
Although graphics cards cater to gamers more often than not, many applications today can make use of the computing power of a good GPU to accelerate various workloads. AI may be the main driving force behind today’s GPU demand, but 3D content production was, and still is, a common use case too. This is precisely where Radeon graphics cards underdeliver: nearly all professional 3D rendering engines leverage Nvidia’s CUDA technology, while AMD’s competing HIP API is rarely implemented.
Blender, one of the favourites among hobbyists and experts alike for its open-source nature, lets Radeon users fall back on AMD ProRender, allowing them to keep up with their CUDA-toting competitors, albeit with some differences in the final output. Updates for the HIP-powered engine take time, however; as of this writing, the latest version is only compatible with Blender up to version 4.0. ZLUDA is a third-party effort to make CUDA rendering work on Radeon graphics cards by translating CUDA instructions to HIP/ROCm, but it isn't particularly fast, and professionals, who typically need top reliability and predictability, wouldn't want anything to do with emulation anyway.
In the gaming world, the Radeon RX 7900 XTX holds a notable performance lead over the RTX 4080/Super while packing a whopping 50% more VRAM than its Nvidia rival. These facts make the Radeon offering seem like a great choice for 3D content creators, but GeForce users have a trick up their sleeve: their cards' RT Cores can be put to work through the OptiX backend, boosting rendering speeds enough to let even a mid-range GPU like the desktop RTX 4070 beat Radeon's top dog in Blender's rendering benchmarks.
While desktop RTX 4080 owners probably don't mind making do with "just" 16 GB of VRAM as long as the card leaves the RX 7900 XTX in the dust, things are very different with Nvidia's lower-end offerings. Those on a budget end up severely constrained by much smaller VRAM pools. This is where Radeon's lower-tier cards could have shone with their ample VRAM buffers - if it weren't for the lacklustre HIP/ROCm adoption, that is.
A freelance artist like me would naturally want to build a PC that handles gaming and 3D content production equally well; unfortunately, a Radeon card falls short. Any aspiring hobbyist or artist asking the pros for the right graphics card will always get the same answer: go Nvidia, because processing speed is paramount for creatives making a living from 3D art. That puts any plans for an all-AMD build off the table. This isn't a problem for Jensen Huang's fans, but it does mean that hobbyists and commercial customers alike have no option but to pay up for an Nvidia GPU to get the rendering performance they need.
Nvidia has a lot of experience in the field, as well as longstanding partnerships with the industries and companies that use CUDA. AMD's software stacks, by contrast, are relatively new and need more time to become competitive. At least in the AI field, AMD is aware that it is on the back foot with ROCm and is making substantial acquisitions to remedy that, improvements that other workloads should benefit from as well. In the meantime, third parties are also working hard to bridge the gap between CUDA and HIP: in addition to the aforementioned ZLUDA, Spectral Compute is developing its SCALE toolchain, which aims to let non-Nvidia GPUs run CUDA code directly, without a translation layer.
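For readers wondering why such bridges are even plausible: HIP's runtime API was deliberately designed to mirror CUDA's almost call for call, so a typical kernel needs little more than renamed runtime functions (or AMD's hipify tools) to target Radeon hardware. The minimal, purely illustrative vector-add sketch below is not taken from either project; it simply shows how little separates the two APIs at the source level, which is what makes efforts like ZLUDA and SCALE feasible at all.

// Illustrative sketch only: the kernel body is identical under CUDA and HIP;
// only the host-side runtime calls differ (cudaMalloc -> hipMalloc, etc.).
#include <cuda_runtime.h>   // HIP equivalent: #include <hip/hip_runtime.h>
#include <cstdio>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // same built-ins exist in HIP
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);   // HIP: hipMallocManaged
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);   // HIP's compiler accepts the same launch syntax
    cudaDeviceSynchronize();                        // HIP: hipDeviceSynchronize

    printf("c[0] = %.1f\n", c[0]);                  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);          // HIP: hipFree
    return 0;
}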
Moves are being made in the right direction, but it will take a lot of time and effort for AMD to break the stranglehold Nvidia has on the 3D content industry. Winning over people who have been recommending GeForce for over a decade is no easy task.