Nvidia has introduced NVLink Fusion, a chip-level interface that extends the company’s proprietary NVLink fabric beyond its processors. Announced at Computex 2025, the new silicon allows third-party CPUs and custom accelerators to share the same low-latency, high-bandwidth path that already links Nvidia GPUs inside rack-scale “AI factories.”
NVLink Fusion moves the technology from a board-to-board connection to a small chiplet that designers can place next to their own compute dies. The link still rides on familiar PCIe-style signaling, yet delivers up to 14 times the bandwidth of a comparable PCIe Gen 5 connection while preserving memory-semantic access between devices: one chip can read and write another's memory with ordinary loads and stores instead of exchanging I/O packets. This scale-up fabric complements Nvidia's existing Spectrum-X Ethernet and Quantum-X InfiniBand products, which handle scale-out traffic across racks.
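What "memory-semantic access" means in practice is easiest to see in code. The sketch below uses the stock CUDA peer-to-peer runtime API, through which Nvidia GPUs already expose this model today: once peer access is enabled, a kernel on one GPU writes another GPU's memory with plain stores, and the hardware carries the traffic over NVLink where it exists, over PCIe otherwise. It illustrates the programming model only, not an NVLink Fusion-specific interface; the buffer size and device indices are arbitrary.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Runs on GPU 0 but writes a buffer that physically lives in GPU 1's
// memory. With peer access enabled this is an ordinary store; the
// interconnect (NVLink if present, PCIe otherwise) routes it.
__global__ void fill(int *remote, int n, int value) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) remote[i] = value;
}

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    if (count < 2) { std::printf("needs at least two GPUs\n"); return 0; }

    int ok = 0;
    cudaDeviceCanAccessPeer(&ok, 0, 1);          // can GPU 0 reach GPU 1 directly?
    if (!ok) { std::printf("no peer path between GPU 0 and GPU 1\n"); return 0; }

    const int n = 1 << 20;
    int *buf = nullptr;

    cudaSetDevice(1);                            // allocate on GPU 1
    cudaMalloc(&buf, n * sizeof(int));

    cudaSetDevice(0);                            // compute on GPU 0
    cudaDeviceEnablePeerAccess(1, 0);            // map GPU 1's memory into GPU 0's address space
    fill<<<(n + 255) / 256, 256>>>(buf, n, 42);  // direct remote writes, no staged copies
    cudaDeviceSynchronize();

    int sample = 0;                              // spot-check one element from the host
    cudaMemcpy(&sample, buf, sizeof(int), cudaMemcpyDeviceToHost);
    std::printf("buf[0] = %d\n", sample);

    cudaSetDevice(1);
    cudaFree(buf);
    return 0;
}
```

The key property is the absence of explicit transfer calls on the hot path: the kernel treats remote memory as if it were local. NVLink Fusion extends that same model to third-party CPUs and accelerators sitting on the fabric.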
Several partners have already signed on. MediaTek, Marvell, Alchip, Astera Labs, Cadence, and Synopsys will supply custom ASICs, IP blocks, or design services that use the new protocol. On the CPU side, Fujitsu plans to pair its forthcoming 2 nm, 144-core Monaka processor with NVLink Fusion, while Qualcomm will attach the interface to its Arm-based server CPU. Both vendors aim to slot their chips into Nvidia’s rack-scale reference systems without giving up direct GPU access.
Hyperscale cloud operators can mix these NVLink Fusion-enabled parts with Nvidia's own Grace CPUs and Blackwell-class GPUs, then stitch the resulting GPU islands together with 800 Gb/s Spectrum-X Ethernet or Quantum-X InfiniBand links. The result is a modular path to clusters that span thousands, or even millions, of accelerators without the performance penalties typical of PCIe-only designs.
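Some back-of-the-envelope arithmetic shows why the two network tiers are kept distinct. Assuming the 14x comparison is against a PCIe Gen 5 x16 link at roughly 128 GB/s bidirectional (the baseline is not pinned down above, so this is an assumption), the NVLink side lands near the 1.8 TB/s Nvidia quotes per Blackwell GPU, while an 800 Gb/s network port carries an order of magnitude less:

\[
14 \times 128\ \mathrm{GB/s} \approx 1.8\ \mathrm{TB/s}
\qquad \text{vs.} \qquad
800\ \mathrm{Gb/s} \div 8\ \mathrm{bit/B} = 100\ \mathrm{GB/s}.
\]

That roughly 18-to-1 gap is why dense NVLink islands absorb the bandwidth-hungry scale-up traffic, with Ethernet or InfiniBand reserved for stitching islands together.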
By licensing a core piece of its stack, Nvidia positions itself as the connective tissue for heterogeneous AI hardware rather than a closed-box supplier. Competitors that struggled to match NVLink’s bandwidth can now ride it instead, but they must do so inside Nvidia’s broader software and networking framework.
Source(s)
Nvidia