Building an AI assistant that respects your privacy
To be useful, an AI assistant needs access to your most sensitive data. Your emails. Your calendar. Your contacts. Your documents. The entire surface area of your professional life.
This creates an obvious tension. The more access you give an AI, the more useful it becomes. But the more access it has, the higher the stakes if something goes wrong.
Most AI products resolve this tension by burying privacy policies in legal text that nobody reads. We think that's the wrong approach. Here's how we think about privacy at Prio, and why it matters more than most founders realize.
The data problem with AI assistants
When you connect an AI assistant to your Gmail, it can read every email in your account. When you connect it to your calendar, it sees every meeting, every attendee, every note. When you use it for tasks and research, it accumulates a detailed picture of your priorities, relationships, and business operations.
This data is extraordinarily valuable to three parties: the AI itself (it needs context to be helpful), advertisers (your professional network and interests are highly targetable), and anyone who might compromise the system (corporate espionage, credential theft, competitive intelligence).
The question isn't whether to give AI access to your data. If you want an AI assistant, you have to. The question is what happens to that data once the AI has it.
Where your data goes matters
Three things determine how safe your data is:
Where it's stored. Data held in EU data centers falls under GDPR, which gives you enforceable legal rights over your information. Data stored in the US falls under FISA Section 702, which lets US agencies compel access to the data of non-US persons without a warrant. If you're a European founder, this distinction is not academic.
Who can access it. Some AI products use your data to train their models. This means your emails, documents, and conversations become part of a training dataset that influences outputs for other users. Even if the data is "anonymized," re-identification attacks have repeatedly demonstrated that individual records can often be recovered.
How long it persists. Some services retain your data indefinitely, even after you delete your account. Others have clear retention policies with hard deletion guarantees.
The Prio approach
We built Prio with a few non-negotiable principles:
EU data residency. Your data is stored in European data centers, subject to GDPR protections. We don't transfer data to non-EU jurisdictions for processing.
No training on user data. We don't use your conversations, emails, documents, or any other data to train AI models. Your data exists to serve you, not to improve our product at your expense.
Encryption in transit and at rest. Data is encrypted both on the wire and on disk. Your OAuth tokens for Gmail and Google Calendar are stored encrypted, never in plaintext.
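To make that concrete, here's a minimal sketch of the token-storage pattern in Python, using the cryptography library. The names (TokenStore, save_token) are invented for this post, not Prio's actual code, and the key handling is simplified for brevity:

```python
# Illustrative pattern only: TokenStore and its methods are invented for
# this post, and key management is simplified for the example.
from cryptography.fernet import Fernet

class TokenStore:
    """Persists OAuth refresh tokens encrypted at rest."""

    def __init__(self, key: bytes, db: dict):
        # In production the key would come from a KMS or secrets manager,
        # never from source code or the same database as the ciphertext.
        self._fernet = Fernet(key)
        self._db = db  # stand-in for a real datastore

    def save_token(self, user_id: str, refresh_token: str) -> None:
        # Only ciphertext is persisted; the plaintext never touches disk.
        self._db[user_id] = self._fernet.encrypt(refresh_token.encode())

    def load_token(self, user_id: str) -> str:
        return self._fernet.decrypt(self._db[user_id]).decode()

store = TokenStore(Fernet.generate_key(), db={})
store.save_token("user-42", "example-refresh-token")
assert store.load_token("user-42") == "example-refresh-token"
```

The useful property: an attacker who dumps the database gets only ciphertext, and would separately need the key, which lives elsewhere.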
Approval gates on every action. When Prio drafts an email, it shows you the draft and waits for your explicit approval before sending. When it creates a calendar event, you see the details and confirm. Nothing leaves your account without your consent.
The one exception is email auto-archive, which moves newsletters, marketing emails, and social notifications out of your inbox automatically. But even this is conservative: invoices, bills, calendar invites, client emails, and security alerts always surface for your review.
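That conservatism is easy to express as a hard rule. Here's a toy sketch with invented category names; the classifier itself is beside the point, the invariant is that some categories can never be auto-archived, whatever the model says:

```python
# Toy sketch with invented category names. The invariant: certain categories
# can never be auto-archived, and unrecognized ones fail safe to the inbox.
ARCHIVABLE = {"newsletter", "marketing", "social_notification"}
ALWAYS_SURFACE = {"invoice", "bill", "calendar_invite", "client", "security_alert"}

def should_auto_archive(category: str) -> bool:
    if category in ALWAYS_SURFACE:
        return False               # hard rule, regardless of model confidence
    return category in ARCHIVABLE  # unrecognized categories also surface

assert should_auto_archive("newsletter") is True
assert should_auto_archive("invoice") is False
assert should_auto_archive("something_new") is False  # fail safe to the inbox
```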
Transparent data access. You can see exactly what data Prio has access to, and revoke that access at any time. Disconnecting Gmail or Google Calendar immediately stops all data flow. There's no retention period and no hidden caching.
Why approval gates matter
The most dangerous AI assistant is one that acts autonomously without oversight. OpenClaw, the viral open-source agent, has demonstrated this risk. Users give it broad access to email and messaging, and it executes actions without consistent approval flows. This is convenient until it sends the wrong email to the wrong person.
Our approval model works differently:
- You describe what you want: "Reply to Sarah's email about the project timeline"
- The AI drafts a response and shows it to you inline
- You read the draft and tap Approve, Edit, or Reject
- Only after approval does the email send
This adds maybe 3 seconds to each action. But those 3 seconds prevent the nightmare scenario where an AI sends a poorly worded email to your biggest client or creates a calendar invite with wrong details.
For batch operations like email triage, all categorized emails are visible in a queue where you can review and override any decision before it takes effect.
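In code, the gate reduces to a single invariant: the send path is only reachable after an explicit approval. A minimal sketch of the pattern, with hypothetical names (EmailDraft, approval_gate) rather than Prio's real interface:

```python
# Hypothetical names throughout; this is the shape of the approval-gate
# pattern, not Prio's actual implementation.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    EDIT = "edit"
    REJECT = "reject"

@dataclass
class EmailDraft:
    to: str
    subject: str
    body: str

def approval_gate(draft: EmailDraft, ask_user, send) -> bool:
    """Loops until an explicit decision; nothing sends without APPROVE."""
    while True:
        decision, revised = ask_user(draft)  # renders the draft inline
        if decision is Decision.APPROVE:
            send(draft)  # the only code path that reaches the network
            return True
        if decision is Decision.REJECT:
            return False
        draft = revised  # Decision.EDIT: loop again on the edited draft

# Wiring it up with stand-in callbacks:
approval_gate(
    EmailDraft("sarah@example.com", "Project timeline", "Hi Sarah, ..."),
    ask_user=lambda d: (Decision.APPROVE, d),
    send=lambda d: print(f"sending to {d.to}"),
)
```

The design choice worth noting is that approval is a blocking step in the control flow, not a setting the agent can route around.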
GDPR compliance isn't optional
If you're a European founder, GDPR compliance isn't a nice-to-have. It's a legal requirement that carries fines of up to 4% of global annual revenue or 20 million euros, whichever is higher.
Your AI assistant needs to support:
Right to access. You can request all data the system holds about you at any time.
Right to erasure. You can request deletion of your data, and the system must comply without undue delay, and within one month at the latest.
Data portability. You can export your data in a standard format to move to another service.
Lawful processing basis. The service needs a legal basis for processing your data (consent or contract performance, not "legitimate interest" stretched beyond recognition).
Data processing agreements. If the AI service uses sub-processors (cloud providers, API services), each one needs a DPA in place.
We handle all of this. But many AI assistant products, particularly those built in the US market, treat European privacy law as an afterthought.
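To make "right to erasure" concrete, here's a hedged sketch of what a compliant deletion path has to do. Every function below is a hypothetical stub, but the shape (hard delete in the primary store, purge from sub-processors, revoke OAuth grants, track the deadline) is the substance:

```python
# Every function here is a hypothetical stub; the structure is the point.
from datetime import date, timedelta

def delete_user_rows(user_id: str) -> None: ...         # hard delete in the primary store
def purge_from_subprocessors(user_id: str) -> None: ...  # fan out to DPA-covered vendors
def revoke_oauth_grants(user_id: str) -> None: ...      # cut off Gmail/Calendar access

def handle_erasure_request(user_id: str, received: date) -> date:
    """Art. 17 erasure: hard delete everywhere, then track the Art. 12(3) deadline."""
    delete_user_rows(user_id)
    purge_from_subprocessors(user_id)
    revoke_oauth_grants(user_id)
    return received + timedelta(days=30)  # approximation of the one-month deadline
```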
What to ask before connecting any AI to your data
Before giving any AI assistant access to your email, calendar, or documents, ask:
- Where is my data stored? (EU vs. US makes a real legal difference)
- Is my data used to train models? (If yes, your competitive intelligence is in the training set)
- What happens when I delete my account? (Hard delete vs. indefinite retention)
- Can the AI act without my approval? (Autonomous action = risk)
- What encryption is used? (At rest and in transit, separately)
- Who are the sub-processors? (Your data is only as secure as the weakest link)
If a product can't answer these clearly, that tells you something.
Privacy isn't a feature, it's a foundation
We didn't add privacy features to Prio as a marketing differentiator. We built the architecture around privacy from the start because we're European founders who use this product ourselves. Our emails, our calendars, and our business data run through the same system.
When your AI assistant has access to everything, "move fast and break things" is not an acceptable engineering philosophy. The cost of a privacy breach isn't a PR problem. It's a business-ending event.
That's why every architectural decision, from data storage to action execution to third-party integrations, starts with the question: what's the worst that could happen, and how do we prevent it?
Prio is built in Europe, for European founders. EU data residency, GDPR compliance, no training on user data, approval gates on every action. Try it free.