Nvidia aims to revolutionize data center connectivity by harnessing light. In March 2025, the company introduced Spectrum-X Photonics Ethernet and Quantum-X Photonics InfiniBand switches, designed to connect large "AI factories" across sites and to support millions of GPUs while reducing energy use and cost. The core idea is to integrate the optical engines directly with the switch chips, eliminating the intermediate electrical components that conventional pluggable optics require.
Scaling up is the main challenge. At 800 gigabits per second and beyond, the electrical path between the switch silicon and the optical module becomes the bottleneck: signals lose strength as they travel through circuit boards and connectors before ever reaching the optics. Nvidia puts this loss at roughly 22 decibels on 200-gigabit channels. Compensating for it takes more power, around 30 watts per port, and the extra components add points of failure.
Co-packaged optics, or CPO, changes this layout. Placing the optical engine directly beside the switch silicon lets signals reach the fiber almost immediately, cutting electrical loss to roughly four decibels and per-port power to around nine watts. At scale, Nvidia claims about 3.5 times better power efficiency, over 60 times better signal integrity, 10 times higher resiliency thanks to fewer active components, and roughly 30 percent faster deployment, since there is less to build and maintain.
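Nvidia's figures can be sanity-checked with a quick calculation: the roughly 18-decibel reduction in electrical loss corresponds to about a 63x linear power ratio, which lines up with the "over 60 times" signal-integrity claim, and the per-port wattage ratio lands near the quoted efficiency gain. A minimal sketch (the input numbers are Nvidia's published figures; the helper function is my own):

```python
# Sanity-checking Nvidia's published CPO figures (illustrative only).

def db_to_power_ratio(db: float) -> float:
    """Convert a decibel difference to a linear power ratio."""
    return 10 ** (db / 10)

# Electrical loss: ~22 dB with pluggable optics vs ~4 dB with CPO.
loss_reduction_db = 22 - 4                      # 18 dB less loss
signal_ratio = db_to_power_ratio(loss_reduction_db)
print(f"Signal-integrity improvement: ~{signal_ratio:.0f}x")  # ~63x, matching "over 60x"

# Per-port power: ~30 W with pluggable optics vs ~9 W with CPO.
port_ratio = 30 / 9
print(f"Per-port power ratio: ~{port_ratio:.1f}x")  # ~3.3x, close to the quoted ~3.5x at system scale
```

The small gap between the raw 30/9 ratio and Nvidia's "about 3.5 times" suggests the efficiency claim is measured at system scale, not per port alone.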
For Ethernet, Spectrum-X Photonics targets large, multi-tenant networks, offering about 1.6 times the bandwidth density of conventional Ethernet switches, according to Nvidia. Configurations include 128 ports at 800 Gb/s or 512 ports at 200 Gb/s, for roughly 100 Tb/s in total; larger systems scale to 512 ports at 800 Gb/s or 2,048 ports at 200 Gb/s, for about 400 Tb/s.
For InfiniBand, Quantum-X Photonics focuses on 800 Gb/s connections and uses liquid cooling. Its top switch provides 144 ports and 115 Tb/s of aggregate throughput. It also offers in-network computing, which processes data inside the network fabric itself, rated at 14.4 trillion floating-point operations per second, and uses Nvidia's latest SHARP (Scalable Hierarchical Aggregation and Reduction Protocol) to accelerate collective operations across the network.
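The aggregate-bandwidth figures for both switch lines follow directly from ports multiplied by per-port speed; the quoted "100 Tb/s" and "400 Tb/s" are the rounded-down totals. A short sketch of the arithmetic, using the configurations listed above:

```python
# Aggregate switch bandwidth = ports x per-port speed (Gb/s), reported in Tb/s.
configs = [
    ("Spectrum-X, 128 x 800 Gb/s",   128,  800),  # quoted as ~100 Tb/s
    ("Spectrum-X, 512 x 200 Gb/s",   512,  200),  # same total as above
    ("Spectrum-X, 512 x 800 Gb/s",   512,  800),  # quoted as ~400 Tb/s
    ("Spectrum-X, 2048 x 200 Gb/s", 2048,  200),  # same total as above
    ("Quantum-X, 144 x 800 Gb/s",    144,  800),  # quoted as 115 Tb/s
]

for name, ports, gbps in configs:
    total_tbps = ports * gbps / 1000  # Gb/s -> Tb/s
    print(f"{name}: {total_tbps:.1f} Tb/s")
```

Note that the 200 Gb/s options trade per-port speed for radix: four times the ports at a quarter of the rate yields the same aggregate throughput.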
Nvidia says this generation is twice as fast and five times more scalable for AI networks than the previous one. The roadmap is closely tied to TSMC's COUPE platform and its advanced packaging methods, and unfolds in three phases: first, optical engines in OSFP pluggable modules reaching 1.6 Tb/s; second, co-packaged optics mounted on the motherboard at 6.4 Tb/s; third, optics integrated within the processor package at 12.8 Tb/s, further reducing latency and power consumption. Nvidia expects to launch CPO-based Quantum-X switches in early 2026 and Spectrum-X Photonics later that year, both liquid-cooled.
Nvidia is partnering with TSMC, Coherent, Corning, Fabrinet, Foxconn, Lumentum, SENKO, SPIL, Sumitomo Electric, and TFC, among others, to cover manufacturing, optics, and assembly. The goal is to remove thousands of discrete components from large clusters, speed up deployment, and make million-GPU networks feasible without excessive power draw.
Source(s)
Tom's Hardware (in English)