Intel has unveiled its latest products for the data centre, including an answer to the growing threat from AMD's Zen-based Epyc processor family: a 56-core Xeon Platinum chip with a 12-channel memory controller.
AMD lost considerable ground to its rival during the Bulldozer architecture era, but the launch of its Zen architecture signalled a turnaround for the company. While it still holds only a small minority share of the lucrative data centre market, its Epyc processors have been steadily gaining ground on Intel's Xeon family - in particular in single-socket systems, where Epyc crams in more cores and simultaneous threads than its competition.
Intel's answer: a broad selection of new processors, headlined by the hefty Cascade Lake-based Xeon Platinum 9282, a chip offering 56 cores, 112 simultaneous threads, and a 12-channel DDR4-2933 memory controller. The chip runs at a 2.6GHz base and 3.8GHz Turbo Boost frequency, packs a whopping 77MB of L3 cache, and supports dual-socket systems for a total of 112 cores and 224 threads per motherboard. It also includes four Intel Ultra Path Interconnect (UPI) links, two AVX-512 fused multiply-add (FMA) units per core, and support for Intel's Virtualisation Technology (VT-x), Transactional Synchronisation Extensions New Instructions (TSX-NI), and Run Sure reliability, availability, and serviceability functionality.
All that performance comes at a cost, of course, and while Intel has yet to share pricing on the chip - likely to be at 'if you have to ask, you can't afford it' levels - it has stated that the 14nm part has a hefty thermal design power (TDP) of 400W. For those looking for something a little more manageable, the Xeon Platinum 9242 drops to a 350W TDP in exchange for "only" 48 cores and 96 threads per socket and a reduction to 71.5MB of L3 cache.
An interesting new feature across Intel's Xeon Scalable family, meanwhile, aims to address the growing trend for companies to rely on non-Intel accelerators for deep-learning workloads: Deep Learning Boost (DL Boost), a new processor capability designed to accelerate artificial intelligence (AI) inference workloads that would otherwise have required a general-purpose GPU (GPGPU) or dedicated accelerator to reach acceptable performance levels.
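At the heart of DL Boost is the AVX-512 Vector Neural Network Instructions (VNNI) extension, whose vpdpbusd instruction fuses the multiply and accumulate steps of the INT8 dot products that dominate quantised inference. As a rough illustration (plain Python, not Intel's implementation), one 32-bit lane of that instruction behaves something like this:

```python
def vpdpbusd_lane(acc, a_bytes, b_bytes):
    """Sketch of one 32-bit lane of vpdpbusd: multiply four unsigned
    8-bit values by four signed 8-bit values and accumulate the
    products into a 32-bit running sum - a fused step that would
    otherwise take several separate instructions."""
    assert len(a_bytes) == len(b_bytes) == 4
    for a, b in zip(a_bytes, b_bytes):
        assert 0 <= a <= 255        # treated as unsigned INT8 (e.g. activations)
        assert -128 <= b <= 127     # treated as signed INT8 (e.g. weights)
        acc += a * b
    return acc

# A quantised-inference inner loop reduces to repeated calls like this
# (values below are arbitrary example data):
activations = [12, 250, 3, 77]
weights = [-5, 2, 100, -1]
total = vpdpbusd_lane(0, activations, weights)
print(total)  # 12*-5 + 250*2 + 3*100 + 77*-1 = 663
```

The real instruction performs sixteen such lanes at once across a 512-bit register, which is where the inference speed-up over unassisted general-purpose cores comes from.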
'Today's announcements reflect Intel's new data-centric strategy,' claims Navin Shenoy, Intel executive vice president and general manager of the company's Data Centre Group. 'The portfolio of products announced today underscores our unmatched ability to move, store and process data across the most demanding workloads from the data center to the edge. Our 2nd-Generation Xeon Scalable processor with built-in AI acceleration and support for the revolutionary Intel Optane DC persistent memory will unleash the next wave of growth for our customers.'
Other announcements made by Intel at the event include network-optimised Xeon Scalable models, the Intel Xeon D-1600 system-on-chip (SoC) for edge processing tasks, a new 10nm Agilex FPGA family, a dual-port Intel Optane SSD DC D4800X card with built-in redundancy, new Optane DC persistent memory parts, and the company's first quad-level cell (QLC) flash SSD for the data centre. Some of these announcements, however, were paper launches: only the second-generation Xeon Scalable chips, the Xeon D-1600, the new Optane DC persistent memory, and the QLC SSD are available today. The Agilex FPGAs will begin sampling in the second half of the year with no word on mass availability, the Xeon Platinum 9200 family of many-core chips will begin sampling in the first half of the year with production ramping up in the second half, and the Optane SSD DC D4800X has no launch date yet set.
November 18 2019 | 09:00