Why your AI assistant should remember what you did last year
There is a quiet difference between an AI you talk to and an executive assistant who has worked for you for ten years. That assistant remembers that you sent flowers from Bloom & Wild for your spouse's birthday last year, and that the dinner at Le Sirenuse went well. They remember which restaurants you avoid. They know that you do not actually like Champagne even though everyone assumes you do. When your spouse's birthday comes up again, they do not start from scratch; they ask "same as last year, or something different?"
Almost no AI assistant works like that today. Most have the technical memory of a goldfish. Each conversation starts cold. Each year's birthday is a brand new prompt. The user does the remembering; the AI just responds to whatever it is prompted with.
That is the year-over-year recall gap. Closing it is one of the largest leverage points in AI assistant design.
What memory actually means in this context
There are three kinds of memory that matter, and most products only have one or two.
Conversation memory. What was discussed in the current chat, what decisions were made, who was mentioned. This is the easy one. ChatGPT, Claude, and Gemini all have versions of this.
User-level facts. Who your spouse is. What your role is. Your dietary preferences. Most modern assistants have a system for this — sometimes the user enters facts manually, sometimes the model infers them.
Action history. What you actually did, when, for whom, how it went. This is the missing piece. Almost no consumer AI tracks this. And it is the kind of memory that makes the difference between "tool you ask" and "assistant who knows."
Why action history is the critical layer
Consider a birthday two weeks out. Without action history, the agent can detect the calendar event and ask "want me to do something?" It can pull a list of memories ("Sara is your spouse"). It can suggest generic things — flowers, a card, dinner.
With action history, the same prompt looks completely different. "Sara's birthday is in 14 days. Last year you sent a tulip bouquet from Bloom & Wild and booked Le Sirenuse for dinner. Want me to do the same thing, or are you up for something different this year?"
The first version is generic. The second is grounded. The first asks the user to think; the second offers a one-click decision the user can either accept or override.
Action history is what turns proactivity from "saw a date on the calendar" to "remembers what works, what failed, what got noticed."
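To make the contrast concrete, here is a minimal sketch of how a grounded suggestion could be assembled from stored history. The record shape, field names, and `grounded_prompt` helper are illustrative assumptions, not a real product schema.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record shape; field names are illustrative.
@dataclass
class ActionRecord:
    event_kind: str      # "birthday", "board_meeting", ...
    subject: str         # who or what the event was for
    action: str          # what was actually done
    vendor: str | None   # service used, if any
    event_date: date
    accepted: bool       # did the user approve the suggestion?

def grounded_prompt(upcoming_kind: str, subject: str, days_out: int,
                    history: list[ActionRecord]) -> str:
    """Turn last year's accepted actions into a one-click suggestion."""
    past = [r for r in history
            if r.event_kind == upcoming_kind
            and r.subject == subject and r.accepted]
    if not past:
        # No action history: fall back to the generic ask.
        return (f"{subject}'s {upcoming_kind} is in {days_out} days. "
                f"Want me to do something?")
    last = max(past, key=lambda r: r.event_date)
    did = last.action + (f" from {last.vendor}" if last.vendor else "")
    return (f"{subject}'s {upcoming_kind} is in {days_out} days. "
            f"Last year you {did}. Same again, or something different?")
```

The fallback branch is the generic assistant; the other branch is the grounded one. Everything separating them is a lookup.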
What action history should capture
There is a long list of things that could go in an action history table. The useful subset is shorter than you might think.
The event. Type of event (birthday, anniversary, board meeting, contract renewal, conference). Who or what it was for. The date.
The action. What happened (email sent, gift ordered, reservation booked, calendar event created, message drafted). Vendor or service if relevant ("Bloom & Wild," "Le Sirenuse"). Amount if commercial.
The outcome. Did the user accept the suggestion? Was there positive feedback after? Was there a negative — the gift was returned, the restaurant was cancelled, the message got a bad reply?
The link to the action. Pointer back to the underlying email, calendar event, or pending action. So the user can drill in and the agent can use the context.
That is enough. With this, year-over-year recall becomes a database query. "What did I do for kind=birthday subject=Sara in the last 18 months?" returns the answer.
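As a sketch of what "a database query" can mean here, the following assumes a single SQLite table. The column names are just this article's four categories made concrete, not an established schema.

```python
import sqlite3

conn = sqlite3.connect("assistant.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS action_history (
    id          INTEGER PRIMARY KEY,
    event_kind  TEXT NOT NULL,    -- 'birthday', 'board_meeting', ...
    subject     TEXT NOT NULL,    -- who or what it was for
    event_date  TEXT NOT NULL,    -- ISO 8601 date
    action      TEXT NOT NULL,    -- 'gift_ordered', 'reservation_booked', ...
    vendor      TEXT,             -- 'Bloom & Wild', 'Le Sirenuse'
    amount      REAL,             -- spend, if commercial
    accepted    INTEGER NOT NULL, -- did the user approve it?
    outcome     TEXT,             -- 'positive', 'negative', NULL if unknown
    source_ref  TEXT              -- pointer to the underlying email/event/action
)""")

# Year-over-year recall is now a single query.
rows = conn.execute(
    """SELECT event_date, action, vendor, outcome
       FROM action_history
       WHERE event_kind = ? AND lower(subject) = lower(?)
         AND event_date >= date('now', '-18 months')
       ORDER BY event_date DESC""",
    ("birthday", "Sara"),
).fetchall()
```

Everything the birthday prompt above needed (action, vendor, outcome) comes back from that one query.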
What changes when the agent can remember
Three things change, and they compound.
Suggestions become specific. The agent stops asking "want me to plan something?" and starts asking "want me to repeat what worked last year?" Cognitive load drops dramatically. The user is no longer thinking from scratch; they are confirming or overriding.
The agent can detect patterns over time. If the same vendor was used three years in a row and then dropped, that is a signal. If a particular kind of action ("send flowers") fades over years, the agent can stop suggesting it. If outcomes were negative (returned, cancelled, complained about), the agent can adjust.
The agent can offer to graduate. Once a particular pattern shows high acceptance over enough samples — say, 20 birthday "repeat last year" decisions with 90% acceptance — the agent can ask: "Want me to auto-prepare future ones? You will still confirm before anything sends." This is the autonomy graduation path most products skip.
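A graduation check like this can be a few lines. The thresholds below simply restate the numbers in the example; in practice they would be tuned per pattern and per product.

```python
# Assumed thresholds, mirroring the example above.
MIN_SAMPLES = 20
MIN_ACCEPTANCE = 0.90

def ready_to_graduate(decisions: list[bool]) -> bool:
    """decisions: accept/reject history for one pattern, e.g. 'repeat
    last year's birthday plan'. True means it is safe to OFFER
    auto-preparation; the user still confirms before anything sends."""
    if len(decisions) < MIN_SAMPLES:
        return False
    return sum(decisions) / len(decisions) >= MIN_ACCEPTANCE
```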
The non-obvious data engineering
Building action history sounds simple. Add a table, write to it after actions complete. The non-obvious parts are subject keys and outcome tracking.
Subject keys. "Sara" and "Sara Smith" and "sara@example.com" all need to resolve to the same person across years. That requires a contact graph and stable identifiers. The simple version: store name + email + an optional contact_id pointing to your contacts table, then match year-to-year by email or by lowercased name. The more sophisticated version: build a person graph with aliases.
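A sketch of the simple version, checking the stable contact_id first when both records have one. Field names are assumptions.

```python
def same_subject(a: dict, b: dict) -> bool:
    """Resolve two year-apart action records to the same person.
    Stable identifier first, then email, then lowercased name."""
    if a.get("contact_id") and b.get("contact_id"):
        return a["contact_id"] == b["contact_id"]
    if a.get("email") and b.get("email"):
        return a["email"].lower() == b["email"].lower()
    name_a = a.get("name", "").strip().lower()
    name_b = b.get("name", "").strip().lower()
    return bool(name_a) and name_a == name_b
```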
Outcome tracking. Whether an action was "accepted" is the easy part: the user clicked approve. Whether the action actually went well is harder. The signals are usually indirect: the recipient replied positively, the reservation did not get cancelled, the follow-up went smoothly. Some of this can be detected automatically; some has to come from the user (a thumbs-up after the fact). At minimum, log the binary "accepted" signal so the agent can stop suggesting things you reject every time.
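Building on the table sketch above, the two write paths could look like this. The Outcome values and the timing of each call are assumptions about product behavior, not a fixed design.

```python
import sqlite3
from enum import Enum

class Outcome(Enum):
    POSITIVE = "positive"   # e.g. a warm reply, an explicit thumbs-up
    NEGATIVE = "negative"   # e.g. gift returned, reservation cancelled

def record_decision(conn: sqlite3.Connection,
                    row_id: int, accepted: bool) -> None:
    """The cheap, reliable signal: did the user approve the suggestion?
    Written at action time, for every row, without exception."""
    conn.execute("UPDATE action_history SET accepted = ? WHERE id = ?",
                 (int(accepted), row_id))

def record_outcome(conn: sqlite3.Connection,
                   row_id: int, outcome: Outcome) -> None:
    """The harder, indirect signal. Called later, when a reply is detected,
    a cancellation is seen, or the user gives explicit feedback. Most rows
    never get one, and that is fine."""
    conn.execute("UPDATE action_history SET outcome = ? WHERE id = ?",
                 (outcome.value, row_id))
```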
Privacy and scope. Action history is sensitive. Who you sent flowers to and what you spent is personal. Treat it like a personal CRM — never share, never train on, easy to export, easy to delete.
What it looks like for non-personal cases
The same pattern applies way beyond birthdays. Consider:
Board meetings. "Q3 board meeting in 14 days. Last quarter you sent the deck 7 days ahead, used these metrics in the update, and the open action items from last meeting were X, Y, Z. Want me to start the same prep?"
Contract renewals. "Salesforce renewal in 30 days. Last year you renewed at the same tier after negotiating a 10% discount. Want me to start that conversation again?"
Hiring loops. "Engineering manager candidate interview tomorrow. Last time you ran this loop you used these questions, and these debrief notes captured what mattered. Want the same structure?"
Investor updates. "Monthly investor update due Friday. Last month you covered MRR, headcount, top 3 wins, top 2 challenges. Pull the latest numbers and draft the same structure?"
In each case, the agent goes from "what would you like" to "do this again, with these specific updates." The cognitive cost of running these recurring high-stakes events drops dramatically.
Why almost no consumer AI does this yet
Three reasons. First, action history is hard to build. You need stable subject keys, you need to capture outcomes, you need to write to it from every flow that does something the user might want to recall. It is a cross-cutting concern, not a feature.
Second, it requires the rest of the stack to be in place. If the agent does not actually do actions on the user's behalf — drafts emails, books reservations, files renewals — there is nothing to log. Consumer AI is mostly chat, and chat does not generate logs of decisions in the world.
Third, it is invisible to users until it suddenly is not. The user does not see a "memory" they can interact with directly. They just experience the difference between an agent that knows them and one that does not. By the time the agent's memory is a feature you can advertise, you have already won.
The takeaway
If you are building an AI assistant, action history is one of the highest-leverage pieces of infrastructure you can build. Not because users will ask for it — they will not — but because it is what makes the difference between an AI that feels like a tool and one that feels like an assistant.
If you are using one, watch for it. The next time the agent surfaces a birthday or a board meeting, ask yourself: does it remember what I did last time? If the answer is no, the agent is starting from scratch on the same question every year. That is a signal you have a tool, not an assistant.
We covered the anticipation gap and the trust ladder in earlier articles. Action history sits underneath both. Without it, anticipation is generic and trust never compounds. With it, the system gets quietly smarter every year you use it.