THURSDAY, MAY 7, 2026 · VOL. XXVI · NO. 17
Tech

Anthropic Named It 'Dreaming.' That Word Is Doing a Lot of Work.

When AI companies reach for human metaphors, they're not explaining anything — they're admitting they can't.

By Chasing Seconds · MAY 6, 2026 · 3 minute read

Photo · Android Authority

There's a tell in the language.

Anthropic announced at its developer conference that Claude agents can now "dream" — a background process in which the system sorts through accumulated "memories" to consolidate and reorganize them. WIRED immediately filed a complaint. Ars Technica explained the mechanics. Android Authority tried to make it sound exciting. And somewhere underneath all three pieces is the same quiet admission, dressed up in different clothes: nobody actually knows what to call this, so they're borrowing the vocabulary of being human.

This is worth sitting with for a moment.

The Metaphor Is the Map

When you call a process "dreaming," you're not describing it. You're reaching for the nearest thing a person might recognize. Sleep, memory, consolidation — these are concepts with centuries of meaning attached to them, and AI companies are now using them as product names. The WIRED piece is essentially a formal objection to this: one writer there begged, with genuine exasperation, for the industry to stop. Android Authority took the opposite approach — lean into it, explain why it's actually a useful analogy. Ars Technica split the difference with a careful "sort of" in its headline, which is doing the most honest work of any phrase in any of these three pieces.

Sort of. That's the tell.

Because here's what's actually happening: Claude agents, when idle, can now process and reorganize information accumulated during work sessions. That's real functionality. According to the coverage, Anthropic also announced that usage limits for Claude Code would double for Pro and Max subscribers — going from five-hour windows to ten. Nobody needed a metaphor to understand that part. It's just more time, and they said so plainly. But the memory consolidation feature? That got a name from a dream journal.

What the Naming Costs

The WIRED writer's frustration isn't pedantry. It's pointing at something structural. When you name a feature after a human process, you're making an implicit claim about similarity — and then immediately walking it back when anyone presses you. "It's not really dreaming, of course, but..." That disclaimer is always in the room. It has to be. Because the actual mechanism is not dreaming. It's computation that happens to occur between active tasks.

But computation between active tasks doesn't move units. Doesn't generate headlines. Doesn't make someone feel like they're working with something alive.

And that's what this naming convention is actually about. It's marketing dressed as explanation. The companies aren't saying "our AI dreams" because they think it dreams. They're saying it because the alternative — describing what's actually happening in terms accurate enough to be useful — would require them to admit how much they still don't understand about what they've built. The metaphors are a bridge over that gap. They're comfortable. They're relatable. They hide the void.

Android Authority's piece tried to argue that the human framing makes the feature more accessible, and that's not wrong. Accessibility matters. But there's a difference between a helpful analogy and a product name, and Anthropic crossed it. An analogy invites you to approximate. A product name asserts.

The industry has been doing this for a while now — memory, reasoning, attention, hallucination (the one that accidentally told the truth) — and each term carries the same structural problem. It describes the output in human terms while leaving the mechanism unexamined. Which is fine when the mechanism is understood and you're just communicating to a broad audience. It's something else when the mechanism isn't fully understood by the people who built it.

That's where we are. And "dreaming" is where we've arrived.

The word they chose says more about the limits of their certainty than any technical document they've published.
