Amazon isn’t Convinced by AMD’s Small Zen 4c Cores in Multicore Bergamo CPUs

Amazon Web Services is delighted to use AMD’s 96-core Epyc processors in its datacenters and even released an instance based on the chips this week, but the cloud giant isn’t so convinced about the Zen designer’s 128-core Bergamo parts.

“We don’t chase the core count as much with AWS,” said David Brown, VP of AWS Elastic Compute Cloud.

The comment contradicts common knowledge, which holds that greater core counts are valued by cloud operators because they allow more virtual machines and containers to run on a single server, increasing the revenue potential of each box in a rack.

This is the foundation on which Ampere Computing has built its business. In 2020, the company announced an Arm-compatible datacenter CPU with up to 80 cores, geared toward cloud-native workloads and integer performance. The processors rapidly attracted the attention of major cloud providers, including Oracle, Microsoft, Google, Tencent, Alibaba, and Baidu. The chipmaker then released a 128-core variant in 2021 and a 192-core part this spring.

AMD attempted a similar feat with the release of the Bergamo Epyc processors, which can be configured with up to 128 Zen 4c x86 cores, and up to 256 per node in a two-socket server. This density comes at the expense of a slightly slower clock speed.

AWS, the world’s largest public cloud provider, isn’t interested for the time being.

“What you have to consider is what else you have to put in the server,” Brown said. “Servers are built with a specific amount of RAM per core. With larger core counts, many other aspects of the server become prohibitively expensive, and the transition to DDR5 presents even more issues.”

According to Brown, AWS wants to standardize around the CPU, whether it’s Intel’s, AMD’s, or its homegrown Graviton hardware, which has specs similar to Ampere’s. As a result, the cloud colossus can concentrate on fine-tuning the server to its mission.

“When you see a general purpose, high performance, and memory-optimized [instances], it’s really the same chip across all of those,” Brown explained. “The only difference between those three is the amount of memory you get per CPU.”

AWS can customize the memory bandwidth and capacity to meet client expectations by changing the number or capacity of DIMMs.
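Brown’s point can be sketched as a toy model: the same chip backs several instance families, and only the memory-to-vCPU ratio changes. The family names and GiB-per-vCPU ratios below are illustrative assumptions for the sketch, not confirmed AWS specifications.

```python
# Toy model of the "same chip, different RAM" instance strategy.
# The ratios below are illustrative assumptions, not AWS specs.
FAMILIES = {
    "compute-optimized": 2,  # assumed GiB of RAM per vCPU
    "general-purpose": 4,
    "memory-optimized": 8,
}

def instance_memory(family: str, vcpus: int) -> int:
    """Total RAM (GiB) for a hypothetical instance size in a family."""
    return FAMILIES[family] * vcpus

for family in FAMILIES:
    print(f"{family}: 16 vCPUs -> {instance_memory(family, 16)} GiB RAM")
```

Under this model, a 16-vCPU slice of the same silicon ships with 32, 64, or 128 GiB depending only on how the server’s DIMMs are populated, which is why a fixed per-core memory budget makes very high core counts expensive to feed.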

Brown also noted that the number of cores in general-purpose processors has been increasing. Five years ago, he said, AWS was purchasing 24- and 32-core parts; it now buys AMD’s general-purpose Epyc silicon with 96 cores. While these processors have fewer cores than Bergamo, they offer higher clock speeds and more cache per core.

Brown, though, underlined that this position is not set in stone. “I don’t think we have a religious belief that says we don’t like those other ones,” he explained.

While AWS may be skeptical of AMD’s latest Epycs, its stance puts it in the minority among cloud providers when it comes to core-optimized CPUs.

It remains to be seen whether Bergamo will help AMD replicate Ampere’s success with its Altra series. AMD has been somewhat quiet about which cloud providers intend to use its latest-generation silicon, which is understandable given the time it takes operators to evaluate parts before putting them into production. At least one hyperscaler, Meta, intends to deploy Bergamo alongside AMD’s Genoa chips to improve service throughput.