The Nvidia RTX 3500 Ada Generation is a higher-end professional laptop graphics card that sports 5,120 CUDA cores and 12 GB of ECC GDDR6 VRAM. Introduced in 2023, this graphics adapter leverages TSMC's 5 nm process and Nvidia's Ada Lovelace architecture to combine above-average performance with moderate power consumption. The Nvidia-recommended TGP range for the card is very wide at 60 W to 140 W, leading to large performance differences between systems powered by what is nominally the same product.
Hardware-wise, the RTX 3500 appears to be a cut-down GeForce RTX 4070 Desktop. Consequently, both make use of the AD104 chip and have little difficulty running triple-A games at QHD 1440p.
Quadro series graphics cards ship with a different BIOS and drivers than GeForce cards and are targeted at professional users rather than gamers. Commercial product design, large-scale calculations, simulation, data mining, 24/7 operation, certified drivers - if any of this sounds familiar, then a Quadro card will make you happy.
Architecture and Features
Ada Lovelace brings a range of improvements over graphics cards based on the outgoing Ampere architecture. It's not just a better manufacturing process and a higher maximum number of CUDA cores (up to 16,384 versus 10,752); under-the-hood refinements are plentiful, including an immensely larger L2 cache, an optimized ray tracing routine (a different way of determining what is transparent and what isn't), and other changes. Naturally, these graphics cards can both encode and decode some of the most widely used video codecs, AVC, HEVC and AV1 included; they also support a host of Nvidia technologies, such as Optimus and DLSS 3, and they can certainly be used for various AI tasks.
The RTX 3500 Ada features 40 3rd generation RT cores, 160 4th generation Tensor cores and 5,120 CUDA cores. Multiply those numbers by 1.15 and you get exactly the configuration of a desktop RTX 4070: 46, 184 and 5,888, respectively. Elsewhere, the graphics card comes with 12 GB of 192-bit ECC GDDR6 memory for a very healthy throughput of ~432 GB/s; error correction can be turned off if desired. The presence of ECC underlines that the RTX 3500 Ada is indeed targeted at professional users.
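The spec-sheet figures above can be cross-checked with a bit of arithmetic. The sketch below assumes an 18 Gbps effective GDDR6 data rate, which is not stated in the spec sheet but is what the ~432 GB/s figure implies for a 192-bit bus:

```python
# Back-of-the-envelope check of the spec-sheet numbers above.
# Assumption: 18 Gbps effective GDDR6 data rate (implied by the ~432 GB/s figure).

def gddr6_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

bandwidth = gddr6_bandwidth_gbs(192, 18.0)
print(f"Bandwidth: {bandwidth:.0f} GB/s")  # 432 GB/s

# The desktop RTX 4070 keeps the same 1.15x ratio across all three unit types:
rtx_3500 = {"RT": 40, "Tensor": 160, "CUDA": 5120}
rtx_4070 = {"RT": 46, "Tensor": 184, "CUDA": 5888}
for unit in rtx_3500:
    print(unit, rtx_4070[unit] / rtx_3500[unit])  # 1.15 in each case
```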
Just like Ampere-based cards, the RTX 3500 makes use of the PCI-Express 4.0 interface. 8K UHD monitors are supported; however, the DP 1.4a video outputs may prove to be a bottleneck down the line.
Performance
While we have not tested a single system featuring an RTX 3500 Ada Generation as of February 2024, we have plenty of performance data for the RTX 4070 Desktop, a graphics card that is about 20% faster than the RTX 3500 Ada Generation. Based on that, we fully expect the RTX 3500 to deliver:
a Blender 3.3 Classroom (CUDA) render time of around 32 seconds
a 3DMark 11 GPU score of around 44,000
around 90 fps in GTA V (1440p - Highest settings possible, 16x AF, 4x MSAA, FXAA)
around 50 fps in Cyberpunk 2077 (1440p - High settings, Ultra RT, "Quality" DLSS)
Nvidia's marketing materials mention "up to 23 TFLOPS" of performance, a 15% improvement over the 20 TFLOPS delivered by the RTX 3000 Ada Generation.
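That TFLOPS figure can be sanity-checked against the usual FP32 formula (CUDA cores x 2 FLOPs per clock for fused multiply-add x clock speed). The boost clock below is not an official spec; it is back-calculated from the 23 TFLOPS claim, so treat it as an estimate:

```python
# FP32 throughput formula: TFLOPS = CUDA cores * 2 FLOPs/clock (FMA) * clock (GHz) / 1000.
# The clock is derived from Nvidia's "up to 23 TFLOPS" claim, not an official spec.
cuda_cores = 5120
claimed_tflops = 23.0

implied_clock_ghz = claimed_tflops * 1000 / (cuda_cores * 2)
print(f"Implied boost clock: {implied_clock_ghz:.2f} GHz")  # roughly 2.25 GHz

gain_over_rtx_3000 = claimed_tflops / 20.0 - 1
print(f"Gain over RTX 3000 Ada: {gain_over_rtx_3000:.0%}")  # 15%
```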
Your mileage may vary depending on how capable your laptop's cooling solution is and how high the TGP of the RTX 3500 is set. One other thing worth mentioning is that enabling error correction appears to reduce the amount of video memory available to applications and games by up to a gigabyte.
Power consumption
Nvidia no longer divides its laptop graphics cards into Max-Q and non-Max-Q models. Instead, laptop makers are free to set the TGP according to their needs, and the range can sometimes be shockingly wide. This is the case for the RTX 3500: the lowest recommended value sits at just 60 W, while the highest is more than twice as high at 140 W (a figure that most likely includes Dynamic Boost). The slowest system built around an RTX 3500 Ada can easily be 60% slower than the fastest one. This is the kind of delta we've been seeing on consumer-grade laptops featuring the latest GeForce RTX cards.
Last but not least, the improved 5 nm process (TSMC 4N) the RTX 3500 is built on makes for very decent energy efficiency, as of mid-2023.
The Nvidia T1200 Laptop GPU (or Quadro T1200 for laptops) is a professional mobile graphics card based on the Turing architecture (TU117 chip). Compared to the consumer GTX 1650 Ti, the T1200 features more CUDA cores / shaders (1024 versus 896). The Quadro T2000 uses the same TU117 chip with all 1024 cores enabled and is therefore significantly faster. The chip is manufactured in a 12 nm FinFET process at TSMC. The T1200 was introduced as a refresh of the Quadro T1000, together with the new Ampere RTX A workstation cards such as the faster Nvidia RTX A2000.
It is available in variants ranging from 35 to 95 W (TGP) with different clock speeds (and thus performance levels). The GPU supports DisplayPort 1.4 and HDMI 2.1 for external displays.
There is no longer a dedicated Max-Q variant (formerly used for the low-power models), but every OEM can choose to implement Max-Q 3.0 technologies (Dynamic Boost, WhisperMode).
The Turing generation not only introduced ray tracing on the RTX cards but also optimized the architecture of the cores and caches. According to Nvidia, the CUDA cores now offer concurrent execution of floating-point and integer operations for increased performance in the compute-heavy workloads of modern games.
Furthermore, the caches were reworked (a new unified memory architecture with twice the cache of Pascal). This leads to up to 50% more instructions per clock and 40% better power efficiency compared to Pascal. Unlike the faster Quadro RTX cards, the T1200 does not feature ray tracing or Tensor cores.
When configured as a slow 35 W variant, the T1200 is also suited for thin and light laptops.
The Nvidia Quadro RTX 6000 for laptops is a professional high-end graphics card for big and powerful laptops and mobile workstations. It is based on the same TU102 chip as the consumer GeForce RTX 2080 Ti. Compared to the desktop RTX 6000, the mobile variant offers lower clock speeds.
The Quadro GPUs come with certified drivers, which are optimized for stability and performance in professional applications (CAD, DCC, medical, prospecting, and visualization applications). Performance in these areas is therefore much better than with corresponding consumer GPUs.
Features
Nvidia manufactures the TU102 chip on a 12 nm FinFET process and includes features like Deep Learning Super Sampling (DLSS) and Real-Time Ray Tracing (RTRT), which should combine to create more realistic lighting effects than older GPUs based on the company's Pascal architecture (if the games support it). The Quadro RTX 6000 is also DisplayPort 1.4 ready, and there is support for HDMI 2.0b, HDR, Simultaneous Multi-Projection (SMP) and H.265 video encoding/decoding (PlayReady 3.0).
Performance
Due to the lower clock speeds, the mobile RTX 6000 lags slightly behind the desktop version of the same name. Nvidia states, for example, that a desktop system using the RTX 6000 is on average 13% faster in the SPECviewperf 13 4K benchmark.
Due to the extremely high power consumption of 200 W (TDP), the mobile Quadro RTX 6000 needs an excellent cooling solution and will only be found in big laptops.
Game Benchmarks
The following benchmarks stem from our reviews of laptops equipped with this GPU. Performance depends on the graphics memory used, clock rates, processor, system settings, drivers, and operating system, so the results are not necessarily representative of all laptops with this GPU. For detailed information on a benchmark result, click on the fps number.