Facebook is helping Intel design Cooper Lake for deep learning
At the Open Compute Project (OCP) Global Summit 2019, Intel's VP of Data Center Group and GM of the Cloud Platforms Group, Jason Waxman, took to the stage to share details about the upcoming Cascade Lake and Cooper Lake server platforms. The 14 nm Cooper Lake platform will feature support for the bfloat16 format for deep learning.
Facebook is helping Intel incorporate the bfloat16 format in Cooper Lake. According to Intel, bfloat16 offers the same dynamic range as a standard 32-bit floating-point representation and can help accelerate the training of AI models for speech recognition, image identification and classification, machine translation, and recommendation engines.
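The reason bfloat16 keeps float32's dynamic range is simple: it retains the sign bit and all 8 exponent bits of a float32, sacrificing only mantissa precision (7 bits instead of 23). A minimal Python sketch (not Intel's implementation; real hardware typically uses round-to-nearest-even rather than plain truncation) illustrates the conversion:

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to a bfloat16 bit pattern.

    bfloat16 keeps the float32 sign bit and all 8 exponent bits
    (hence the identical dynamic range), but only the top 7 of the
    23 mantissa bits.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]  # raw float32 bits
    return bits >> 16  # drop the low 16 mantissa bits

def bfloat16_bits_to_float32(b: int) -> float:
    """Expand a bfloat16 bit pattern back to float32 (zero-pad the mantissa)."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# Powers of two survive exactly; the loss shows up only in mantissa precision.
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(1.0)))        # 1.0
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.1415926)))  # 3.140625
```

For neural-network training, this trade-off works well because gradient updates tolerate low precision far better than they tolerate overflow or underflow, which is where the preserved exponent range matters.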
Intel is looking to scale these platforms from the current two-socket design into four- and eight-socket designs (developed in collaboration with Inspur). The next-generation Xeon Scalable processors will offer up to 112 cores and 224 threads in a four-socket server and will be able to address up to 12 TB of Intel's DC Persistent Memory Modules (DCPMM).
Cooper Lake will face some tough competition from the 7 nm AMD EPYC lineup. While Intel will be able to offer 112 cores on a four-socket platform sometime in H2 2019, AMD already offers EPYC CPUs that deliver 128 cores and 256 threads on a dual-socket motherboard.
The new Xeon Scalable cloud-optimized reference design will be implemented by Dell, HP, Hyve Solutions, Lenovo, Quanta, Supermicro, Wiwynn, and ZT Systems later this year.