
The five-step trust ladder for AI agents

Prio | May 5, 2026 | 7 min read
ai trust · ai permissions · autonomous ai · ai agent · ai safety

A lot of AI agent products want to be autonomous. The pitch is always some version of "tell me what you want and I will handle it." It sounds magical. It is also where most products break their users' trust permanently.

The problem is not that autonomy is wrong. It is that autonomy is the top of a ladder, and most products try to start there. The right path is to climb the ladder one step at a time, with the user explicitly granting each step.

This is the model we use at Prio, and it is the model any breakthrough consumer AI assistant will need. Five steps. Every step is a real product question.

Step 1: Read

The lowest trust step. The agent can see something. It can read your file, your email, your calendar, your screen. It does not act, suggest, or write — it just understands.

This is where most products start, and it is the right place to start. Connecting Gmail or Google Calendar is a much smaller commitment than letting an agent send mail on your behalf. Read access is reversible — you can revoke a token. Read access creates almost no risk to the user beyond privacy concerns, which the product needs to address separately.

What makes this step work: clean, narrow scopes (read inbox, read all of Gmail, and send mail are three different things), transparency about what the agent reads and when, and no surprise widening of scope.
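To make that concrete, here is a minimal sketch of a scope ledger. The scope names are hypothetical, not real Gmail or Calendar OAuth scopes, and the code is an illustration rather than our implementation:

```typescript
// Illustrative scope model: each grant is narrow, explicit, and revocable.
// Scope names are hypothetical, not real Gmail or Google Calendar OAuth scopes.
type Scope = "email.read" | "email.send" | "calendar.read" | "calendar.write";

interface Grant {
  scope: Scope;
  grantedAt: Date;
  revokedAt?: Date; // revoked grants stay in the ledger for transparency
}

class ScopeLedger {
  private grants: Grant[] = [];

  grant(scope: Scope): void {
    this.grants.push({ scope, grantedAt: new Date() });
  }

  revoke(scope: Scope): void {
    // Revoking is always one step: no confirmation dialogs in the way.
    for (const g of this.grants) {
      if (g.scope === scope && !g.revokedAt) g.revokedAt = new Date();
    }
  }

  can(scope: Scope): boolean {
    // "email.read" never implies "email.send": no surprise widening.
    return this.grants.some((g) => g.scope === scope && !g.revokedAt);
  }
}
```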

Step 2: Suggest

The agent surfaces something proactively. "This email matters. You discussed this last time. You said you would follow up." The agent makes a proposal — but the user remains entirely in charge. No action has been taken in the world.

Suggestions are how anticipation enters the picture. A suggestion that lands well saves the user time even if they do nothing more than read it and act on it themselves. A suggestion that lands badly is mostly noise, but the user can ignore it.

The bar here is salience. A bad suggestion is not just useless — it slowly trains the user to ignore the channel. We have written before about the anticipation gap, and the core point applies: agents that surface the wrong thing at the wrong time are not proactive, they are noisy.

A useful pattern at step 2 is per-user calibration. If a particular kind of suggestion has a low acceptance rate for a specific user, demote it automatically. The system learns to be quieter.
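Here is what that calibration can look like in miniature. The sample floor and acceptance threshold are illustrative numbers, not values from our system:

```typescript
// Hypothetical per-user, per-type calibration: a suggestion type goes quiet
// when its observed acceptance rate falls below a floor.
interface SuggestionStats {
  shown: number;
  accepted: number;
}

const MIN_SAMPLE = 10;    // never judge a type on too few examples
const DEMOTE_BELOW = 0.2; // acceptance floor before the type is demoted

function shouldDemote(stats: SuggestionStats): boolean {
  if (stats.shown < MIN_SAMPLE) return false; // not enough signal yet
  return stats.accepted / stats.shown < DEMOTE_BELOW;
}

// Example: 2 accepts out of 15 shows means this type gets quieter.
console.log(shouldDemote({ shown: 15, accepted: 2 })); // true
```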

Step 3: Draft

The agent prepares the action. It writes the email. It builds the schedule. It fills the form. The work is done, but it is staged — the user approves before anything goes out.

Step 3 is where most consumer AI lives today, and for good reason. A drafted email removes the cognitive load of starting from a blank screen but never sends without your sign-off. A calendar event is queued, but no invites go out until you approve. The user feels the leverage of an assistant without giving up the final say.

The thing to watch at step 3 is review fatigue. If the agent drafts ten things and you have to approve each one individually, the cognitive cost adds up. Good drafting surfaces batch them: review four pending items in 30 seconds rather than one at a time across a fragmented day.

A nuance: drafts can be auto-discarded if the user does not approve within a window. That keeps the queue from accumulating noise from low-value suggestions.
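Both ideas fit in a small sketch. The 48-hour discard window here is an assumed value for illustration:

```typescript
// Sketch of a draft queue with both properties: one batched review surface,
// and auto-discard for drafts left unreviewed past a window.
interface Draft {
  id: string;
  summary: string;
  createdAt: Date;
}

const DISCARD_AFTER_MS = 48 * 60 * 60 * 1000; // assumed 48-hour window

class DraftQueue {
  private drafts: Draft[] = [];

  add(draft: Draft): void {
    this.drafts.push(draft);
  }

  // Everything still pending, in one place, with stale drafts dropped.
  pendingBatch(now: Date = new Date()): Draft[] {
    this.drafts = this.drafts.filter(
      (d) => now.getTime() - d.createdAt.getTime() < DISCARD_AFTER_MS
    );
    return this.drafts;
  }

  // Approving removes the draft from the queue and hands it back for sending.
  approve(id: string): Draft | undefined {
    const i = this.drafts.findIndex((d) => d.id === id);
    return i >= 0 ? this.drafts.splice(i, 1)[0] : undefined;
  }
}
```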

Step 4: Act with confirmation

The agent goes into the world. It can navigate, fill forms, assemble options, prepare a booking — but it asks before consequential moments. The dinner reservation gets all the way to "press to confirm." The flight rebooking is queued with the new times pulled. The user is the last click.

This is the rung where consumer agents start to feel like real assistants. The mental load shifts from "what should I do" to "approve or reject."

Step 4 is also where the trust contract gets concrete. Three things have to be true:

  • The agent NEVER acts without explicit confirmation on consequential things — sending email, booking travel, charging a card, replying to a thread that matters.
  • The user can SEE every prepared action in one place, easy to review, easy to reject.
  • The user can REVOKE the agent's preparation scope at any time, per category, with one click. No "are you sure" dialogs designed to prevent revocation.
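In code, the contract reduces to a small shape. The names are illustrative, not our actual API:

```typescript
// Minimal shape for the step-4 contract: every consequential action is staged,
// visible, and blocked until explicitly confirmed.
interface PreparedAction {
  id: string;
  category: "email" | "booking" | "payment";
  description: string; // what the user reads before confirming
  confirmed: boolean;
}

function execute(action: PreparedAction): void {
  if (!action.confirmed) {
    // Consequential actions never run without explicit sign-off.
    throw new Error(`action ${action.id} requires explicit confirmation`);
  }
  console.log(`executing: ${action.description}`);
}
```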

When step 4 works it feels like having an executive assistant who has known you for years. Things get done that you did not have to think about. You sign off because the work was done well.

When step 4 breaks — the agent prepared something dumb, or worse, executed something it should have asked about — recovery is hard. We are risk-averse animals; we do not give second chances to systems that burned us.

Step 5: Autonomous

The agent buys, books, sends, signs, replies — without you. You see it after.

A lot of people want to jump straight to step 5. We have learned the hard way that this is wrong. The downstream consequences of breaking trust at step 5 are huge. One bad email sent to a client. One wrongly charged renewal. One meeting scheduled on a date you would never have agreed to. These are not just embarrassing; they cost reputation, money, and relationships.

Step 5 should be reserved for narrow, well-validated patterns where the user has explicitly granted autonomy. Even then, every autonomous action should be highly visible — a clear log entry, an undo path within a window, the ability to revoke autonomy on that pattern with one click.
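A sketch of what one such log entry could carry:

```typescript
// Illustrative log entry for a step-5 action: visible after the fact,
// undoable within a window, and traceable to a revocable per-pattern grant.
interface AutonomousActionLog {
  pattern: string;      // the granted pattern that authorized this action
  description: string;
  executedAt: Date;
  undoDeadline: Date;   // the undo path stays open until this moment
  undone: boolean;
}

function canUndo(entry: AutonomousActionLog, now: Date = new Date()): boolean {
  return !entry.undone && now < entry.undoDeadline;
}
```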

A practical bar for graduating an action to step 5: at least 20 examples of the user accepting it at step 4, an acceptance rate of 80% or higher, and a per-pattern grant the user explicitly creates. We never auto-graduate. The user has to opt in.
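As a check, that bar is a few lines. This helper is a sketch, and eligibility only unlocks the proposal:

```typescript
// The graduation bar as a check: 20+ step-4 acceptances at an 80%+ rate make
// a pattern eligible. Eligibility never turns autonomy on by itself.
interface PatternHistory {
  accepted: number;
  rejected: number;
}

function eligibleForStep5(h: PatternHistory): boolean {
  const total = h.accepted + h.rejected;
  return h.accepted >= 20 && h.accepted / total >= 0.8;
}

console.log(eligibleForStep5({ accepted: 24, rejected: 4 })); // true (~86%)
console.log(eligibleForStep5({ accepted: 24, rejected: 8 })); // false (75%)
```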

What goes wrong when products skip steps

The classic failure pattern: a product launches at step 4 or 5 from day one. The pitch is great. The first month is exciting. Then the agent does something the user did not expect (sends an email with a typo, books the wrong restaurant, reschedules a call to a bad time) and the user pulls back permanently.

The trust the user extended is not just gone; it is harder to win back than if you had never asked for it. We humans are loss-averse: one negative experience outweighs ten positive ones.

The other failure pattern: a product never moves past step 2. It keeps suggesting things the user has to act on themselves. Eventually the user realizes they could have just used a calendar reminder. The product gets uninstalled.

The right path is to climb the ladder deliberately. Start at step 1 with read access. Move to step 2 with suggestions that are well-calibrated and prove their salience. Move to step 3 with drafts the user can quickly approve. Move to step 4 with the action-with-confirmation surface, on a per-category basis. Then, only after long-running data, propose step 5 to the user — and let them grant it.

What this looks like in practice

At Prio we run all five steps. Each new connected source starts at read. Each kind of suggestion runs at step 2 with per-user-per-type calibration so noisy detectors get demoted automatically. Drafted actions live in a queue the user can clear in seconds. Confirmation surfaces are explicit — never silent.

Step 5 is gated behind an explicit autonomy grant. After 20+ examples of the user accepting "Repeat last year" on, say, birthday-anticipation insights with 80%+ acceptance, the system surfaces a proposal: "Want me to auto-prepare these? You will still confirm before anything sends." If the user says yes, future birthday insights come pre-prepared with a one-click confirm. If they say no, the system never asks again about that pattern. If they grant and later regret it, one click in settings revokes — across the whole pattern.
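Under the hood, that flow is a small state machine per pattern. A sketch:

```typescript
// Sketch of the per-pattern grant lifecycle: propose once, remember a decline
// permanently, and keep revocation a single step.
type GrantState = "none" | "proposed" | "granted" | "declined" | "revoked";

class AutonomyGrant {
  state: GrantState = "none";

  // Returns whether the proposal may be shown to the user.
  propose(): boolean {
    if (this.state === "none") this.state = "proposed";
    return this.state === "proposed"; // declined or revoked patterns never re-ask
  }

  respond(accepted: boolean): void {
    if (this.state !== "proposed") return;
    this.state = accepted ? "granted" : "declined";
  }

  revoke(): void {
    if (this.state === "granted") this.state = "revoked"; // one click, whole pattern
  }
}
```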

This is what real autonomy looks like. Earned, granted, scoped, visible, revocable.

The takeaway

If you are building an AI agent, the question to ask is not "how autonomous can we make it?" The question is "what step are we on, what are we doing well at this step, and what would have to be true to graduate to the next?"

If you are evaluating an AI agent for your own use, the question to ask is "where does the product start, and how does it earn permission to do more?" If the answer is "it asks for everything up front," that is a red flag. If the answer is "it starts read-only and lets me grant more over time," you are looking at a product that takes the trust contract seriously.

We have written separately about why memory matters in proactive AI and how AI assistants should learn from user behavior. Together with this trust ladder, these are the three pieces that make autonomous AI feel like an assistant instead of a liability.
