Inside the cloud’s shift to Arm: Why hyperscalers and the industry are making the switch

How Arm is redefining cloud performance — and how developers are already building on it

If you thought that Arm was just for mobile applications, it’s time to take another look. The power-efficient, performant architecture that led the evolution of mobile devices is driving a strategic shift in the cloud, and processors built around the Arm Neoverse platform are taking on some seriously heavy workloads for all major cloud hyperscalers.

Arm Neoverse now sits at the center of modern cloud infrastructure, and industry projections suggest this trend is accelerating. Arm estimates that half of compute shipped to top hyperscalers in 2025 will be Arm-based, enabling hyperscalers to optimize everything from general-purpose computing to AI and data-intensive workloads while lowering total cost of ownership.

Take Google, whose systemic multi-architecture (multiarch) push is changing the economics of cloud computing. Recently, Google revealed that it has already ported more than 30,000 of its applications to Arm, including YouTube, Gmail, and BigQuery. Another 70,000 Google applications are currently in the conversion queue. This level of migration is not surprising when you consider that Google Axion CPUs, powered by Arm, deliver up to 65 percent better price-performance than x86 instances and can be up to 60 percent more energy efficient. Efficiency gains of that magnitude, combined with the scale of Google’s code migration project, suggest the web giant may rely on fewer x86 processors in the years ahead.

Google has also expanded its Arm-powered Axion lineup with the new N4A virtual machines, the most cost-effective N-series VMs to date, delivering up to 2x better price-performance and 80 percent better performance-per-watt than x86-based offerings.

The improvements keep coming. Independent analysis from consulting firm Signal65 shows that AWS’s Arm Neoverse-powered Graviton4 processors are not only leading the competition on price-performance, but also significantly outperforming comparable Amazon EC2 x86 offerings across general-purpose and AI workloads. For XGBoost machine learning training, Graviton4 achieved up to 53 percent faster training times than x86, while delivering up to 64 percent better price-performance. Database performance with Redis showed Graviton4 handling up to 93 percent more operations per second compared to x86-based instances.

Similarly, Microsoft Azure’s Arm Neoverse-based Cobalt 100 processors continue to deliver strong performance across both general-purpose and AI workloads. In general-purpose testing, Cobalt 100 achieved up to 48 percent higher performance and 91 percent better price-performance in database workloads, 53 percent and 99 percent gains in secure networking, and 47 percent and 89 percent improvements in financial modeling compared to x86-based Azure instances. For AI workloads, Arm benchmarks show that Cobalt 100 delivers up to 1.9 times higher performance and 2.8 times better price-performance when running large language models (LLMs) with ONNX Runtime.

At Microsoft Ignite, Microsoft introduced Azure Cobalt 200, its next-generation Arm-based CPU, and the most power-efficient platform in Azure to date. Built around real-world workload data, Cobalt 200 delivers up to 50 percent higher performance than Cobalt 100 and integrates the latest security, networking, and storage advancements. With these improvements, Azure customers gain even greater efficiency and performance headroom beyond the already impressive results of Cobalt 100.

Performance you can spend

These cloud-based improvements are creating notable gains for cloud-native companies. Cloud analytics leaders like Databricks and Snowflake have adopted Cobalt 100 to optimize their cloud footprint and performance. Spotify reported roughly 250 percent better performance on Axion-backed VMs and a 40 percent drop in compute costs, unlocking performance without cost penalties.

Paramount Global sped up a core workload: up to 33 percent faster content encoding on Axion-based C4A, according to benchmarks shared by Arm. And Pinterest saw 47 percent lower workload costs along with 62 percent lower carbon per API request after moving its core API tier to AWS Graviton instances.

It’s no longer about how to migrate to Arm, but when

Now let’s talk about some of the myths and misconceptions. “But the migration will hurt,” you hear. That is far less true today than it used to be, thanks to the world’s largest computing ecosystem with over 22 million software developers, toolchain maturity, Arm64 support in major frameworks, and detailed developer onboarding resources. Migration to Arm is straightforward: Uber started small and rolled out more services as it got comfortable with the Arm platform. Now it’s running thousands of services on Arm-based instances across various cloud providers to gain hardware diversity, price-performance, and efficiency.

Where to get started?

Begin by picking a well-bounded tier, such as stateless microservices, API backends, or stream processing. Compile for Arm64, stand up Arm64 runners, and run A/B tests at production traffic levels. Track p95 latency to catch tail regressions, analyze cost per request, and measure watts per request, not just CPU percent.
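
As a concrete (and purely illustrative) way to track those three numbers, the Python sketch below computes p95 latency, cost per request, and watt-hours per request for an x86 fleet and an Arm64 fleet over one hour of A/B traffic. Every value and field name is a placeholder; substitute your own latency samples, billing data, and power figures.

```python
from statistics import quantiles

def p95(latencies_ms):
    # 95th-percentile latency from sampled per-request latencies (milliseconds)
    return quantiles(latencies_ms, n=100)[94]

def per_request(total, requests):
    # Normalize a fleet-level total (dollars, watt-hours, ...) to a per-request figure
    return total / requests

# One hypothetical hour of A/B traffic at equal request volume (all numbers made up)
x86 = {"latencies_ms": [12.1, 14.8, 13.5, 21.0, 15.2] * 200,
       "hourly_cost_usd": 3.40, "avg_watts": 210, "requests_per_hour": 1_000_000}
arm64 = {"latencies_ms": [11.4, 13.2, 12.8, 18.7, 14.1] * 200,
         "hourly_cost_usd": 2.50, "avg_watts": 140, "requests_per_hour": 1_000_000}

for name, fleet in (("x86", x86), ("arm64", arm64)):
    # Average watts held for one hour equals watt-hours consumed in that hour
    print(f"{name}: p95={p95(fleet['latencies_ms']):.1f} ms, "
          f"cost/request=${per_request(fleet['hourly_cost_usd'], fleet['requests_per_hour']):.8f}, "
          f"Wh/request={per_request(fleet['avg_watts'], fleet['requests_per_hour']):.6f}")
```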

When you’re happy, roll the pattern to the next ten services. And the next ten. And so on.

If you want a lighter on-ramp, Arm’s Cloud Migration program and developer initiative bundle tools, learning paths, and expert guidance. Arm64 is broadly supported across Kubernetes and cloud-native tooling, while Arm’s migration resources cover compatibility and validation. GitHub integration is first-class through Arm’s GitHub Copilot extension for migration support, along with the GitHub migration agent and the Arm MCP server, which help streamline and validate Arm64 transitions at scale. Enterprises can also engage Arm experts directly to smooth the transition.
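
Before leaning on those resources, a quick self-serve compatibility pass can be as simple as checking whether the container base images you depend on already publish linux/arm64 variants. The sketch below is not part of Arm’s tooling; it assumes a local Docker CLI with manifest support, and the image names are examples only.

```python
import json
import subprocess

def has_arm64_variant(image: str) -> bool:
    # Ask the registry for the image's manifest; multi-arch images return a
    # manifest list with one entry per published platform.
    out = subprocess.run(["docker", "manifest", "inspect", image],
                         capture_output=True, text=True, check=True).stdout
    manifest = json.loads(out)
    return any(entry.get("platform", {}).get("architecture") == "arm64"
               for entry in manifest.get("manifests", []))

# Placeholder images; list the base images your services actually build on
for image in ["python:3.12-slim", "redis:7", "nginx:1.27"]:
    status = "linux/arm64 published" if has_arm64_variant(image) else "no arm64 variant found"
    print(f"{image}: {status}")
```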

Arm’s ecosystem partnerships, from hyperscalers to ISVs, continue to lower the migration barrier, making it easier to validate, optimize, and scale Arm deployments in production.

Efficiency is how the cloud moves forward

Arm in the cloud is a pragmatic way to buy back performance, power, and budget while keeping your options open. As AI inference, edge-cloud hybrids, and cost constraints define the next cloud era, Arm’s architecture offers a scalable, efficient foundation for what’s coming next.

Sponsored by Arm.