The anticipation gap: why most AI agents still feel reactive

Prio | May 5, 2026 | 5 min read
ai agent, proactive ai, consumer ai, ai assistant, anticipation gap

You can ask any AI to draft an email, summarise a meeting, compare flights, or fix a bug. The capability is there. So why does using an AI assistant still feel like managing one more thing?

The answer is what we call the anticipation gap. Capable AI is not the same as a useful AI assistant. The breakthrough product is the one that knows when to show up, when to ask, and when to shut up — without you having to remember it exists.

What the anticipation gap actually is

Today most AI agents are reactive. You open them, you tell them what you want, they try to do it. That sounds like agency. It is, technically. But it puts the hardest job on your shoulders.

You have to:

  • Notice the task in the first place
  • Remember the agent exists
  • Translate the task into a prompt
  • Decide how much permission to grant
  • Supervise the result

For a two-minute task, that is more work than just doing the thing yourself.

The anticipation gap is the distance between "I can ask the agent to do this" and "the agent shows up when this matters." Closing it is the difference between a tool and an assistant. A tool waits for you to remember it. An assistant reduces the number of things you have to remember.

Why coding agents got this right and consumer agents have not

Coding agents like Cursor, Claude Code and Codex feel different from consumer AI for a reason. Three reasons, actually.

Coding has clean verification. Tests pass or they fail. The compiler tells you when something is broken. The agent has a yes-or-no signal it can use to know whether it succeeded.

Coding has bounded scope. "Fix this bug" gives the agent a repo, an error, a task and a target. "Plan a trip" sounds simple but explodes into family preferences, calendar constraints, cancellation tolerance, hotel choice, car rental, ten more things.

Coding output is review-friendly. A diff is easy to glance at. A booked dinner reservation is harder to verify without going.

Consumer life has none of those. There is no compiler for taste. There is no test suite for life admin. And the user is rarely sitting at their desk thinking "this would be a great moment to invoke my AI."

What real anticipation looks like

When we say anticipation we do not mean magic. We do not mean the agent guesses everything correctly and starts running your life. We mean the product moves from "you ask me to do X" to "this is the moment when X matters, do you want me to handle it?"

A few examples of real anticipation:

  • Your flight gets delayed. The agent says "there is a later flight that still gets you there tonight, want me to switch?"
  • Your kid's school sends an email. The agent says "there is a permission slip due for Friday's field trip, I pulled the form up for you."
  • A board meeting is two weeks away. The agent says "want me to start prep — pull last quarter's metrics and draft an update?"
  • A SaaS tool is auto-renewing in 30 days. The agent says "you used it twice last quarter. Worth keeping?"

Notice what changes. It is not that the user remembers the agent and calls it in. The situation is calling the agent into existence.

The bar that has not been cleared yet

Plenty of products claim proactive AI. Most do not actually clear the bar. A few patterns we see:

Fake proactivity that triggers on bad data. The agent looks at your calendar, assumes every event is real, and starts nudging you about meetings you do not have. The bar is not "be proactive." The bar is proactivity grounded in real, lived context: understanding enough to know what actually matters.

Single-source detectors. A meeting prep agent only reads your calendar. An email triage agent only reads your inbox. Neither sees the school email AND the work thread AND the half-finished grocery list together — which is where the real moments live.

No memory across time. The agent has no idea that last year you sent flowers from Bloom & Wild for this same birthday and booked Le Sirenuse. So every year it asks from scratch, as if you had never met.

No calibration. Same nudges, same priority, regardless of whether you act on them or dismiss them. The agent never learns that you do not actually care about LinkedIn networking nudges and keeps sending them anyway.
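
To make the single-source failure concrete, here is a minimal sketch of the alternative: a detector that only surfaces a moment once more than one source corroborates it. Everything here, from the function name to the thresholds, is hypothetical illustration, not a description of any product's internals.

```python
# Hypothetical multi-source moment detection: a moment only surfaces
# when signals from more than one source line up behind the same topic.
from collections import defaultdict

def detect_moments(signals, min_sources=2):
    """signals: (source, topic, strength) tuples from calendar, email, lists..."""
    by_topic = defaultdict(dict)
    for source, topic, strength in signals:
        # Keep the strongest signal per source so one noisy feed
        # cannot vote twice for the same topic.
        by_topic[topic][source] = max(strength, by_topic[topic].get(source, 0.0))
    for topic, sources in by_topic.items():
        if len(sources) >= min_sources:  # corroborated across sources
            yield topic, sum(sources.values()) / len(sources)

signals = [
    ("calendar", "field trip friday", 0.5),  # an event exists...
    ("email", "field trip friday", 0.8),     # ...and the school emailed about it
    ("email", "linkedin digest", 0.9),       # single-source noise, never surfaces
]
for topic, score in detect_moments(signals):
    print(topic, round(score, 2))  # -> field trip friday 0.65
```

The point of the design is the `min_sources` check: a lone calendar entry or a lone newsletter never becomes a nudge on its own, while a school email that lines up with a calendar event does.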

What it takes to close the gap

There is no single fix. Closing the anticipation gap is a system design problem with at least five layers:

  1. A look-ahead detector that scans 1 to 30 days out for events that benefit from lead time.
  2. Action memory so the agent knows what you did last year for the same situation.
  3. A confidence ladder that picks whisper, push, or ring based on the priority of the moment.
  4. Outcome learning that calibrates per detector per user — if you keep dismissing networking nudges, those get demoted automatically.
  5. Intent calibration that captures how aggressive your planning style is, so a "month-ahead planner" gets different lead times than a "day-of executor."

Each one is small. Together they are the difference between yet another inbox to manage and an assistant that actually reduces what you carry.
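
To show how small each layer really is, here is a minimal sketch of all five working together. The names, weights, and thresholds are hypothetical, chosen to illustrate the shape of the system rather than any real implementation.

```python
# Hypothetical sketch of the five layers: look-ahead detection, action
# memory, a confidence ladder, outcome learning, and intent calibration.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Event:
    name: str
    when: date
    detector: str         # which detector produced it, e.g. "occasions"
    base_priority: float  # 0.0 .. 1.0

@dataclass
class UserModel:
    # 2. Action memory: what the user did for this situation before.
    history: dict = field(default_factory=dict)
    # 4. Outcome learning: per-detector weight, demoted on dismissals.
    detector_weight: dict = field(default_factory=dict)
    # 5. Intent calibration: 1.0 = day-of executor, 4.0 = month-ahead planner.
    planning_style: float = 1.0

    def weight(self, detector: str) -> float:
        return self.detector_weight.get(detector, 1.0)

    def record_outcome(self, detector: str, acted: bool) -> None:
        # Acting on a nudge promotes its detector; dismissing demotes it.
        w = self.weight(detector)
        self.detector_weight[detector] = min(1.0, w + 0.1) if acted else max(0.1, w - 0.2)

def channel(score: float) -> str:
    # 3. Confidence ladder: whisper, push, or ring by calibrated priority.
    if score >= 0.8:
        return "ring"
    if score >= 0.5:
        return "push"
    return "whisper"

def scan(events, user, today):
    # 1. Look-ahead detector: events 1 to 30 days out, with the lead
    # time stretched or shrunk by the user's planning style.
    for e in events:
        days_out = (e.when - today).days
        if not 1 <= days_out <= 30:
            continue
        if days_out > 7 * user.planning_style:  # too early for this user
            continue
        score = e.base_priority * user.weight(e.detector)
        yield e, channel(score), user.history.get(e.name)

# Usage: a month-ahead planner who keeps dismissing networking nudges.
user = UserModel(planning_style=4.0, history={"spouse birthday": "flowers + dinner"})
user.record_outcome("networking", acted=False)
user.record_outcome("networking", acted=False)
today = date.today()
events = [
    Event("spouse birthday", today + timedelta(days=21), "occasions", 0.9),
    Event("reconnect ping", today + timedelta(days=3), "networking", 0.6),
]
for e, ch, past in scan(events, user, today):
    print(f"{e.name}: {ch}" + (f" (last year: {past})" if past else ""))
# -> spouse birthday: ring (last year: flowers + dinner)
# -> reconnect ping: whisper
```

Note how the two dismissed networking nudges demote that detector to a whisper, while the month-ahead planning style stretches the birthday's lead time to three weeks out.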

The trust contract on top

There is one more layer that matters more than the technical pieces. Even when the agent gets the moment right, the user needs to trust it. That means visible decisions, a permission ladder, easy revoke, and zero silent expansion of scope.

We wrote a separate article about the five-step trust ladder for AI agents that goes deep on this. The short version: most products jump straight to "agent does the thing autonomously" and break user trust the first time it gets something wrong. The right path is to climb the ladder slowly — read, suggest, draft, act with confirmation, autonomous — and let the user grant each step explicitly.
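
As a sketch of what "grant each step explicitly" can look like in code, here is a minimal permission ladder with those five rungs. The class and its mechanics are hypothetical, an illustration of the contract rather than a real API.

```python
# Hypothetical permission ladder: five rungs, climbed one explicit
# grant at a time, with no silent expansion of scope.
from enum import IntEnum

class Rung(IntEnum):
    READ = 1        # agent may look
    SUGGEST = 2     # agent may surface a recommendation
    DRAFT = 3       # agent may prepare the action, unsent
    CONFIRMED = 4   # agent may act after an explicit yes
    AUTONOMOUS = 5  # agent may act on its own

class PermissionLadder:
    def __init__(self):
        self.granted = Rung.READ  # every agent starts at the bottom

    def grant_next(self) -> Rung:
        # The user climbs one rung at a time, never skipping steps.
        if self.granted < Rung.AUTONOMOUS:
            self.granted = Rung(self.granted + 1)
        return self.granted

    def revoke(self) -> Rung:
        # Easy revoke: one call drops the agent back a rung.
        if self.granted > Rung.READ:
            self.granted = Rung(self.granted - 1)
        return self.granted

    def allows(self, action_rung: Rung) -> bool:
        # Zero silent expansion: anything above the granted rung is refused.
        return action_rung <= self.granted

ladder = PermissionLadder()
print(ladder.allows(Rung.DRAFT))  # False: user has only granted READ
ladder.grant_next()               # user explicitly allows SUGGEST
ladder.grant_next()               # ...then DRAFT
print(ladder.allows(Rung.DRAFT))  # True: drafting is now permitted, acting is not
```

The useful property is that scope can never expand silently: the agent can only do what the current rung allows, and the only way up is one explicit grant at a time.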

Why this matters now

The frontier is no longer "can AI answer." It is not even "can AI act." The frontier is whether AI can do useful work without pulling you into a new management layer. Right now, almost no consumer agent meets that bar.

The product that does will feel less like ChatGPT and more like a chief of staff who has known you for ten years. It will know your travel patterns, your communication style, what you did last year for your spouse's birthday and what your reaction was when you tried something different. It will surface the right thing at the right time, with the right level of commitment, asking before consequential moments and getting out of the way otherwise.

That is the breakthrough. Closing the anticipation gap.

If you are building an AI assistant or evaluating one, this is the question to ask. Not "what can it do?" but "when does it show up, and is that the moment it would have helped most?"
