FRIDAY, APRIL 24, 2026 · VOL. XXVI · NO. 17
Tech

Meta Went Shopping for CPUs. Nvidia Should Notice.

When one of the biggest AI spenders starts routing around GPUs, the shortage narrative starts to look like a story we told ourselves.

By Chasing Seconds · APRIL 24, 2026 · 3 minute read

Photo · TechCrunch

There's a version of the last two years where Nvidia simply won. The chips were scarce, the waitlists were real, and anyone building anything serious in AI was burning through green silicon as fast as they could source it. That version isn't wrong, exactly. But a writer at TechCrunch has flagged something that complicates it: Meta has signed a deal for millions of Amazon's homegrown CPUs — not GPUs — specifically for AI agentic workloads.

Sit with that for a second. Not GPUs. CPUs. From Amazon's own silicon program. At scale.

The Shortage Was Also a Story

Every technology panic has a narrative layer — the version of events that gets repeated until it feels like physics. The GPU shortage had one: there weren't enough chips, Nvidia was the only game that mattered, and the future would be decided by who could stockpile more. That story served a lot of interests simultaneously. It justified enormous capital expenditure. It kept Nvidia's valuation stratospheric. It gave every AI startup a ready-made excuse for why nothing was shipping yet.

What it obscured was the engineering question underneath: do all AI workloads actually need a GPU? The answer, it turns out, is no. And the companies with enough scale to run serious experiments on that question are now publishing their results through procurement announcements instead of academic papers.

Meta routing agentic AI workloads to Amazon's CPUs isn't a cost-cutting gesture. It's a signal that someone at that level of compute consumption looked at what agentic tasks actually require — inference, orchestration, the kind of work that's more about moving information intelligently than about raw parallel computation — and decided a different chip fits better. That's not a retreat from AI ambition. That's what AI ambition looks like when it matures past the hype phase.

What Chip Diversity Actually Means

For most of the recent AI build-out, diversity in the chip stack was a talking point. Every quarter brought announcements about AMD alternatives, custom silicon programs, and startups promising to beat Nvidia on efficiency at a fraction of the cost. Most of it stayed theoretical, or lived at the margins. The serious money kept flowing to the same address.

A deal of this scale — millions of units, one of the highest-volume AI spenders in the world, Amazon's own architecture — is different in kind, not just degree. It suggests the diversity is becoming structural. Amazon built chips. Meta is buying them. The infrastructure is real enough now that procurement decisions can actually reflect workload requirements rather than just availability.

That's the shift worth watching. Not whether Nvidia loses its dominance overnight — it won't — but whether the assumption that one chip type handles all of AI starts to visibly crack. Once the largest buyers start segmenting their chip strategies by workload, the pressure on every vendor to specialize rather than generalize increases. The market stops rewarding whoever has the most chips and starts rewarding whoever has the right ones.

Nvidia built a remarkable position by being indispensable during a period when nobody had time to ask hard questions. The hard questions are starting.

The GPU shortage was real. So was the story built on top of it — and stories, unlike chip supplies, don't resolve themselves just because the situation changed.

Filed from the desk