
Vault: How ainywhere Keeps Your Data Private

The goal with ainywhere has always been to give you an AI assistant that works like a real assistant — one that knows your schedule, your preferences, your context, and gets things done on your behalf. But a real assistant would never divulge your personal information, and we believe ainywhere should be no different.

Every day, people share deeply personal information with AI assistants — medical questions, financial details, relationship advice, business strategies. That level of trust demands real protection, not just promises. That’s why we built Vault, our encryption-at-rest system that ensures your conversation data is cryptographically locked away from everyone — including us.

The honest reality of AI encryption

Let’s address the elephant in the room: true end-to-end encryption isn’t possible with AI today. For the AI to understand your question and generate a response, it needs to read your message in plaintext. There’s no way around that.

The most promising solution — homomorphic encryption — would theoretically let an AI process encrypted text without decrypting it. But the current state of the art is far too slow for practical use. We’re talking hours or days per response, not seconds.

So what can we do? Quite a lot, actually.

Proton recently outlined a similar approach with Lumo, their AI assistant. Like us, they acknowledge that the LLM needs cleartext access during processing — but they go to great lengths to ensure data is protected everywhere else. We share that philosophy, and our Vault system follows a comparable architecture.

How Vault works

Vault operates on a simple principle: your data should be encrypted whenever it’s not actively being processed. Here’s how we achieve that.

Your personal key hierarchy

When you first use ainywhere, the system generates a User Master Key (UMK) — a unique 256-bit cryptographic key that belongs solely to you. This is the key that encrypts and decrypts all of your data: messages, conversation history, generated images, and more.

But here’s where it gets interesting. The UMK itself is never stored in plaintext. Instead, it’s wrapped (encrypted) using a Key Encryption Key (KEK) derived from your identity — your phone number, email address, or other channel credential. Without that identity, the UMK can’t be unwrapped, and without the UMK, your data can’t be decrypted.

This creates a layered key hierarchy:

  1. Your identity (phone, email, etc.) → derives a KEK
  2. The KEK → unwraps your UMK
  3. The UMK → decrypts your actual data

Every layer uses industry-standard cryptography: HKDF-SHA-256 for key derivation, and AES-256-GCM for both key wrapping and data encryption. These are the same battle-tested primitives used by banks, governments, and security-focused companies worldwide.
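
The hierarchy above can be sketched in a few lines. This is a minimal illustration, not our production code: the identifiers, labels, and server secret below are hypothetical, and it shows only the HKDF-SHA-256 step (RFC 5869) that turns an identity plus a server-side secret into a KEK.

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """RFC 5869 HKDF with SHA-256: extract a PRK, then expand to `length` bytes."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                            # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Hypothetical inputs -- the real secret and derivation labels are not public.
SERVER_SECRET = b"server-side-secret-that-never-leaves-infra"
identity = b"+1-555-123-4567"

# The KEK depends on BOTH ingredients: identity AND server secret.
kek = hkdf_sha256(ikm=identity, salt=SERVER_SECRET, info=b"kek-v1")
```

Because HKDF is deterministic, the same identity and secret always yield the same KEK, so nothing per-user needs to be stored to re-derive it; change either ingredient and you get an unrelated key.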

“But what if someone knows my phone number?”

This is the right question to ask. If the KEK is derived from your identity, couldn’t someone who knows your phone number or email derive the key and access your data?

No — and here’s why. Three layers of protection prevent this.

First, your identity is verified at the channel level. You can’t just tell ainywhere “I’m +1-555-123-4567” or “I’m alice@example.com” — your message has to arrive through a verified, authenticated webhook from the channel provider (Twilio, Mailgun, SendGrid, etc.). These providers cryptographically sign every incoming message using HMAC signatures, and our servers verify those signatures before processing anything. If the signature doesn’t match, the request is rejected. This means the only way to send a message as your phone number or email is to actually send it from that phone number or email, through the carrier or email provider.
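
The verification step looks roughly like this. It's a generic sketch: each provider defines its own exact signing scheme (Twilio, Mailgun, and SendGrid all differ in header names, hash choice, and what exactly gets signed), so the key, payload, and HMAC-SHA-256 construction here are illustrative only.

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature_hex: str, signing_key: bytes) -> bool:
    """Recompute the HMAC over the raw payload and compare in constant time."""
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# Hypothetical values standing in for a provider's signing key and webhook body.
key = b"provider-shared-signing-key"
body = b'{"from": "+15551234567", "text": "hello"}'
sig = hmac.new(key, body, hashlib.sha256).hexdigest()

assert verify_webhook(body, sig, key)              # genuine message accepted
assert not verify_webhook(body + b"!", sig, key)   # tampered payload rejected
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` check can leak how many leading bytes of the signature matched via timing.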

Second, your identity alone isn’t enough to derive the key. The KEK derivation doesn’t just use your phone number or email — it mixes in a server-side secret (via HKDF) that never leaves our infrastructure. Think of it like a two-ingredient recipe: you need both the identity and the server secret to produce the correct KEK. An attacker who knows your phone number or email but doesn’t have the server secret can’t compute your KEK. And a database administrator who has the encrypted data but not the server secret can’t derive the KEK either. Both ingredients are required, and they live in completely separate systems.

Third, we don’t actually know your phone number or email. When your identity enters the system, it’s immediately converted into a one-way HMAC hash — and that hash is all we store. We use it to route your messages, but we can’t reverse it back to your actual phone number or email address. So even if someone had access to both our database and the server secret, they’d still need to know which identity to derive a key for — and that information simply doesn’t exist in our system in a readable form.
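
A keyed one-way hash of this kind is straightforward to sketch. The pepper value and the lowercase normalization below are assumptions for illustration; the point is that the database stores only the digest, which supports exact-match routing lookups but cannot be reversed.

```python
import hashlib
import hmac

# Hypothetical pepper -- the real value is a server-side secret.
IDENTITY_HMAC_KEY = b"identity-hashing-pepper"

def identity_hash(identity: str) -> str:
    """One-way keyed hash of a channel identity, used only as a lookup key."""
    normalized = identity.strip().lower()  # normalization scheme is an assumption
    return hmac.new(IDENTITY_HMAC_KEY, normalized.encode(), hashlib.sha256).hexdigest()

# The routing table keys are digests, never raw identities.
routing_table = {identity_hash("alice@example.com"): "user-42"}
incoming = identity_hash("Alice@Example.com")  # same identity, different casing
assert routing_table[incoming] == "user-42"
```

Using HMAC with a secret key, rather than a plain SHA-256 hash, is what blocks offline guessing: without the pepper, an attacker can't hash candidate phone numbers and compare them against the stored digests.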

Multi-channel, one key

Because ainywhere works across multiple channels — email, SMS, WhatsApp, Slack, and more — you might have several identities linked to one account. Each identity gets its own KEK, but they all unwrap the same UMK. This means your data is consistently encrypted regardless of which channel you use, and adding a new channel doesn’t compromise existing encryption.
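
Structurally, that means the database holds one wrapped copy of the UMK per channel. The sketch below shows the shape of this; note that the XOR-mask "wrap" is a deliberately simplified stand-in so the example stays dependency-free, where the real system would use AES-256-GCM for key wrapping.

```python
import hashlib
import hmac
import secrets

def mask(kek: bytes) -> bytes:
    # Derive a 32-byte mask from the KEK. Stand-in only: real key wrapping
    # would use AES-256-GCM, which also authenticates the wrapped key.
    return hmac.new(kek, b"wrap-mask", hashlib.sha256).digest()

def wrap(kek: bytes, umk: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(mask(kek), umk))

def unwrap(kek: bytes, wrapped: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(mask(kek), wrapped))

umk = secrets.token_bytes(32)        # one User Master Key per user
kek_sms = secrets.token_bytes(32)    # per-channel KEKs (hypothetical values)
kek_email = secrets.token_bytes(32)

# One wrapped UMK copy per linked channel.
stored = {"sms": wrap(kek_sms, umk), "email": wrap(kek_email, umk)}

# Either channel's KEK recovers the same UMK.
assert unwrap(kek_sms, stored["sms"]) == umk
assert unwrap(kek_email, stored["email"]) == umk
```

Linking a new channel just adds another wrapped copy; it never re-encrypts the underlying data, because the UMK itself is unchanged.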

What your stored data looks like

When we store a conversation message in our database, anyone looking at it — including our own team, database administrators, or a hypothetical attacker — sees something like this:

gZ0m3Qk7XfWc2tJvN8yB5hRaU6pLdE1o9sCiw4TnHM...

That’s AES-256-GCM ciphertext. Without your UMK, it’s meaningless noise. The same goes for your channel identities stored in our system — they’re HMAC-hashed so we can look them up for routing, but we can’t reverse them back to your actual phone number or email address.

Images and files are encrypted too

When ainywhere generates an image for you, we don’t just store the raw file. Every image is encrypted with its own random 256-bit key before being uploaded to storage. The decryption key is embedded in the URL we send you — which means the stored file on our servers is just encrypted noise, and viewing the image requires the specific URL we gave you.

If someone gained access to our storage bucket, they’d find thousands of .enc files — all unreadable without the individual keys that only exist in the URLs shared with you.
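
The key-in-the-URL pattern can be sketched like this. The URL format and host are hypothetical, and the actual encryption of the file bytes (AES-256-GCM with the generated key) is omitted; the sketch only shows generating a per-file key and round-tripping it through the URL.

```python
import base64
import secrets

def make_image_url(image_id: str) -> tuple[str, bytes]:
    """Generate a random 256-bit per-file key and embed it in the shared URL."""
    key = secrets.token_bytes(32)
    key_b64 = base64.urlsafe_b64encode(key).rstrip(b"=").decode()
    return f"https://example.com/files/{image_id}.enc#k={key_b64}", key

def key_from_url(url: str) -> bytes:
    """Recover the decryption key from the URL's fragment."""
    key_b64 = url.split("#k=", 1)[1]
    return base64.urlsafe_b64decode(key_b64 + "=" * (-len(key_b64) % 4))

url, key = make_image_url("img-001")
assert key_from_url(url) == key  # only the URL holder can recover the key
```

Carrying the key in the URL fragment (`#...`) is a common choice for this pattern because browsers never send the fragment to the server, so the key stays with whoever holds the link.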

What we can’t protect against

We believe in transparency, so let’s be clear about the boundaries of this model.

During processing, your data is briefly in plaintext. When you send a message, the server decrypts it, passes it to the AI model, receives the response, encrypts everything, and discards the plaintext. This happens in memory during a single request — plaintext is never written to disk, and never logged.

This is the same trade-off that Proton makes with Lumo, and candidly, it’s the same trade-off any AI service that actually works has to make today. The difference is that most services don’t encrypt your data at all when they’re done processing it. We do.

Downstream AI providers handle your queries. When your message reaches the language model, it’s processed by a third-party AI provider. We select providers with strong privacy policies and data handling agreements, but we want you to know that your query does leave our infrastructure briefly during processing.

No security model is 100% bulletproof — and we make no such promise. What we can promise is a defense-in-depth approach: multiple independent layers of protection, each designed so that the effort required to bypass them far outweighs the value of any single piece of data. The goal is to make attacking your data simply not worth the effort.

What this means for you

In practice, Vault means:

  • A database breach doesn’t expose your data. Even if someone exfiltrated our entire database, all they’d get is encrypted blobs and hashed identifiers.
  • We can’t mine your conversations. We literally don’t have the ability to bulk-decrypt user messages. There’s no master key, no backdoor, no admin tool that reads your chats. Each user’s data requires their specific identity to unlock.
  • Your identity is protected. We don’t store your phone number or email in plaintext — we store HMAC hashes that let us route messages to you, but can’t be reversed to reveal who you are.
  • Deleting your account deletes your data. When you delete your account, the encrypted data becomes permanently unrecoverable because the keys are destroyed along with it.

Why this matters

Most AI companies collect your conversations, use them for training, and store them indefinitely in plaintext databases. A single breach — or a change in corporate policy, or a government subpoena — could expose everything you’ve ever said to an AI.

We think there’s a better way. Vault ensures that even in the worst-case scenario, your private conversations stay private. Not because we promise not to look, but because we genuinely can’t.

We’re not done building. As technologies like homomorphic encryption and confidential computing mature, we plan to push these boundaries further. But today, Vault represents what we believe is the strongest practical privacy model available for AI assistants.

Your AI. Anywhere. And always private.