Nvidia announces DGX-2H compute server with upgraded Tesla V100 GPUs
Besides gaming GPUs, Nvidia is also taking a serious interest in neural networks, AI, and other compute workloads. The green team presented the Tesla V100 compute GPUs last year, and the first compute server with multiple Tesla V100 GPUs was released earlier this year. Now, Nvidia is upgrading its initial DGX-2 servers with more powerful CPU and GPU variants.
The latest compute server model is known as the DGX-2H, and it comes with two Intel Xeon Platinum 8174 server-grade CPUs (upgraded from the Xeon Platinum 8168). These 24-core / 48-thread CPUs clocked at 3.1 GHz can be coupled with up to 1.5 TB of DDR4 RAM and up to 60 TB of NVMe SSD storage. As for the compute-oriented GPUs, Nvidia replaced the 350 W Tesla V100 models with higher-clocked 450 W versions. Sixteen of these GPUs, with a combined HBM2 memory capacity of 512 GB, are housed inside the new DGX-2H, and all of them are interconnected through dedicated NVLink connections, allowing them to act as a single large GPU.
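To illustrate what that GPU-to-GPU interconnect means in practice, the minimal CUDA sketch below queries which device pairs can address each other's memory directly and enables peer access where possible. This is a generic, hypothetical example using the standard CUDA runtime API, not Nvidia's DGX software stack, and the mechanism shown (peer-to-peer access) is just one way the pooled HBM2 memory can be exploited.

```cpp
// Minimal sketch: probe and enable direct GPU-to-GPU (peer-to-peer) access.
// On NVLink-connected systems like the DGX-2H, peer access lets one GPU
// read and write another GPU's HBM2 memory without staging through the host.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("GPUs visible: %d\n", count);

    for (int src = 0; src < count; ++src) {
        for (int dst = 0; dst < count; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            if (canAccess) {
                cudaSetDevice(src);
                // Allows kernels running on 'src' to dereference pointers
                // that point into 'dst' device memory.
                cudaDeviceEnablePeerAccess(dst, 0);
            }
            printf("GPU %d -> GPU %d peer access: %s\n",
                   src, dst, canAccess ? "yes" : "no");
        }
    }
    return 0;
}
```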
Nvidia claims that the new servers provide a compute throughput of 2.1 PetaFLOPS (up from 1.95 PetaFLOPS), but the upgrade also raises power consumption from 10 kW to 12 kW. Additionally, the total weight increased from 340 to 360 lbs, and the operating temperature was lowered from 35 °C to 25 °C. There is no information on pricing yet, but the DGX-2H will probably cost more than the original DGX-2, which currently sells for US$399,000.