Every document your organisation uploads to an AI platform probably contains personal data. Names, email addresses, phone numbers, financial figures, national ID numbers — it is woven into contracts, invoices, reports, and correspondence.
Most AI tools treat all of this the same. They index it, search it, and include it in AI-generated responses. The personal data sits right alongside everything else, visible to the AI at all times.
Other Me takes a fundamentally different approach. Personal data is detected, separated, and stored in its own encrypted vault — away from the AI's searchable index. The AI never sees it unless an authorised user specifically requests it, and every release is logged.
Here is how it works, step by step.
What Counts as PII?
PII stands for Personally Identifiable Information. It is any data that can identify a specific person, either on its own or when combined with other information. Other Me's detection system looks for the following types:
- Names — full names, first names, surnames in context
- Email addresses — any email format
- Phone numbers — UK, international, and mobile formats
- Financial data — bank account numbers, sort codes, payment references
- National ID numbers — National Insurance numbers, passport numbers
- Postal addresses — full or partial physical addresses
- Dates of birth — when associated with identifiable individuals
The system is designed to err on the side of caution. If something looks like PII, it is treated as PII.
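To make this concrete, here is a minimal Python sketch of what pattern-based detection for a few of these types could look like. The patterns and the `detect_pii` function are illustrative assumptions, not Other Me's actual detection engine, which also relies on context (names near signature blocks, for example) to catch things regular expressions alone cannot.

```python
import re

# Illustrative patterns only; real detection also uses contextual and
# machine-learned recognisers, and handles far more formats than these.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"(?:\+44\s?|\b0)\d{4}\s?\d{6}\b"),
    "NATIONAL_ID": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),  # simplified NI number shape
    "SORT_CODE": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
}

def detect_pii(text: str) -> list[dict]:
    """Scan extracted text and return every span that looks like PII.

    Each finding records its type, the matched value, and its exact
    character offsets, so it can be vaulted and redacted later.
    """
    findings = [
        {"type": pii_type, "value": m.group(), "start": m.start(), "end": m.end()}
        for pii_type, pattern in PATTERNS.items()
        for m in pattern.finditer(text)
    ]
    return sorted(findings, key=lambda f: f["start"])
```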
A Walkthrough: What Happens When You Upload a Document
Let's follow a simple example. You upload a client contract to Other Me. Here is exactly what happens.
Step 1: Document Ingestion
The document is received and parsed. Other Me extracts the text content, maintaining the structure of the original document — paragraphs, headings, tables, and so on.
Step 2: PII Detection
The extracted text is scanned for personal data. In our example contract, the system finds:
- Two client names (e.g., "Sarah Mitchell" and "James Cooper")
- Two email addresses
- One phone number
- One National Insurance number
- A bank sort code and account number
- A residential address
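Using the illustrative `detect_pii` sketch above (plus recognisers for names, addresses, and account numbers, which simple patterns alone would miss), the output of this step might look something like the following. All values and character offsets are fictional:

```python
# Hypothetical detection output for the example contract.
# Offsets are character positions in the extracted text.
findings = [
    {"type": "PERSON",      "value": "Sarah Mitchell",                     "start": 118,  "end": 132},
    {"type": "PERSON",      "value": "James Cooper",                       "start": 157,  "end": 169},
    {"type": "EMAIL",       "value": "s.mitchell@example.com",             "start": 402,  "end": 424},
    {"type": "EMAIL",       "value": "j.cooper@example.com",               "start": 431,  "end": 451},
    {"type": "UK_PHONE",    "value": "07911 123456",                       "start": 479,  "end": 491},
    {"type": "NATIONAL_ID", "value": "QQ 12 34 56 C",                      "start": 560,  "end": 573},
    {"type": "SORT_CODE",   "value": "12-34-56",                           "start": 905,  "end": 913},
    {"type": "ACCOUNT",     "value": "12345678",                           "start": 915,  "end": 923},
    {"type": "ADDRESS",     "value": "14 Example Road, Manchester M1 2AB", "start": 1010, "end": 1044},
]
```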
Step 3: PII Vaulting
Each piece of detected PII is removed from the document's content and placed into a separate encrypted vault. This vault is not part of the AI's search index. It is a distinct, isolated store with its own encryption and access controls.
In the vault, each PII item is linked back to its source document and its exact position within that document. This link is what allows controlled release later — the system knows where each piece of data came from and where it belongs.
Key point: The PII vault and the AI search index are separate systems. A breach of the search index does not expose personal data. A breach of the vault does not expose your documents. Both would need to be compromised simultaneously to reconstruct the original.
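As a rough sketch of what one vault record could contain, here is an illustrative Python example that uses the open-source `cryptography` library for the encryption. The field names, the placeholder token, and the in-code key are assumptions made for the sketch; they are not Other Me's actual schema, and a real deployment would keep keys in a dedicated key-management system rather than in application code.

```python
import uuid
from datetime import datetime, timezone

from cryptography.fernet import Fernet  # symmetric encryption, for the sketch only

vault_key = Fernet.generate_key()        # in practice the key lives in a KMS/HSM, not in code
vault_cipher = Fernet(vault_key)

def vault_pii(document_id: str, finding: dict, token: str) -> dict:
    """Encrypt one detected PII value and return a vault record.

    `token` is the placeholder (e.g. "[PERSON_1]") that replaces the value in
    the redacted text. The record keeps the link back to the source document
    and the exact character span, which is what enables controlled release.
    """
    return {
        "vault_id": str(uuid.uuid4()),
        "document_id": document_id,
        "token": token,
        "pii_type": finding["type"],
        "span": [finding["start"], finding["end"]],
        "ciphertext": vault_cipher.encrypt(finding["value"].encode()).decode(),
        "vaulted_at": datetime.now(timezone.utc).isoformat(),
    }
```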
Step 4: Redaction and Indexing
With PII removed, the document is now a redacted version of the original. Where "Sarah Mitchell" appeared, the text now reads something like "[PERSON_1]". Where the National Insurance number appeared, it reads "[NATIONAL_ID_1]".
This redacted version is what gets indexed for AI search. When someone asks the AI a question and it retrieves this contract, it works with the redacted version. It can still understand the document's meaning, its legal terms, its structure and intent — but it does not have access to the personal data.
The AI might tell you: "The contract between [PERSON_1] and [PERSON_2] includes a non-compete clause lasting 12 months." It gives you the useful information without exposing who the people are.
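Here is an illustrative sketch of the redaction step, continuing with the same finding structure as above. The token-numbering scheme is an assumption based on the placeholders shown in this post:

```python
from collections import defaultdict

def redact(text: str, findings: list[dict]) -> tuple[str, dict[str, dict]]:
    """Replace each detected PII span with a placeholder token.

    Returns the redacted text (the only version the AI search index ever sees)
    and a token -> finding map, which is handed to the vaulting step.
    """
    counters: dict[str, int] = defaultdict(int)
    token_map: dict[str, dict] = {}

    # Number tokens in reading order, so [PERSON_1] is the first person mentioned.
    for finding in sorted(findings, key=lambda f: f["start"]):
        counters[finding["type"]] += 1
        token_map[f"[{finding['type']}_{counters[finding['type']]}]"] = finding

    # Replace spans from the end of the text backwards so earlier offsets stay valid.
    redacted = text
    for token, finding in sorted(token_map.items(), key=lambda kv: kv[1]["start"], reverse=True):
        redacted = redacted[:finding["start"]] + token + redacted[finding["end"]:]

    return redacted, token_map
```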
Step 5: Controlled Release (When Needed)
Sometimes you need the actual names, the real email addresses, the specific financial details. Other Me handles this through controlled release.
When an authorised user requests the full document or explicitly asks for the personal data, the system:
- Checks authorisation — Does this user have the right to see PII for this document? This is verified through the SCRS Dual-Gate architecture. Gate 1 (Block Before Search) has already confirmed the user can access the document; the PII vault now performs its own authorisation check.
- Retrieves from vault — The specific PII items are fetched from the encrypted vault and decrypted.
- Rehydrates the content — The placeholder tokens ([PERSON_1], [NATIONAL_ID_1], etc.) are replaced with the real data, rebuilding the complete document.
- Logs the release — Every piece of PII released is recorded in the tamper-evident audit trail. Who requested it, when, which document, which specific data items. This log cannot be altered after the fact.
The result: you see the complete contract with all personal data restored. But this only happened because you were authorised, you explicitly requested it, and the entire process was logged.
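Sketched in the same illustrative style, controlled release might look like this. The `is_authorised` and `audit_log` callables are placeholders standing in for the SCRS authorisation check and the tamper-evident audit trail, both of which do far more than what is shown here:

```python
from cryptography.fernet import Fernet

def release_pii(redacted_text: str, document_id: str, user_id: str,
                vault_records: dict[str, dict], cipher: Fernet,
                is_authorised, audit_log) -> str:
    """Rebuild the full document for an authorised, explicit request.

    `vault_records` maps placeholder tokens to this document's vault entries;
    `is_authorised` and `audit_log` stand in for the real authorisation check
    and tamper-evident audit trail.
    """
    if not is_authorised(user_id, document_id):
        raise PermissionError("user is not authorised to see PII for this document")

    text = redacted_text
    released = []
    for token, record in vault_records.items():
        value = cipher.decrypt(record["ciphertext"].encode()).decode()
        text = text.replace(token, value)   # rehydrate the placeholder
        released.append(record["vault_id"])

    # Every release is recorded: who requested it, when, which document, which items.
    audit_log(user_id=user_id, document_id=document_id,
              released_items=released, action="pii_release")
    return text
```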
Why This Matters
You might be wondering: why go to all this effort? Why not just use access controls to restrict who can see documents?
There are three important reasons.
Reason 1: AI Does Not Need PII to Be Useful
Think about most of the questions you ask an AI assistant. You want to understand contract terms, find precedents, summarise key points, or compare documents. None of this requires knowing the specific people involved. The AI works just as well with [PERSON_1] as it does with "Sarah Mitchell."
By removing PII from the AI's view, you get the same quality of AI assistance with dramatically less data exposure.
Reason 2: Breach Impact Is Drastically Reduced
If a security incident affects the AI search layer, the attacker finds redacted documents. No names. No email addresses. No financial data. No national ID numbers. The personal data is in a completely separate system.
This is not just better security — it changes your regulatory reporting obligations. A breach of redacted business documents is very different from a breach of documents containing thousands of people's personal details.
Separating PII from searchable content means a breach of one system does not become a breach of everything.
Reason 3: Compliance Becomes Simpler
UK data protection law (UK GDPR) requires organisations to demonstrate appropriate technical measures for protecting personal data. PII vaulting is one of the strongest technical measures available. It goes beyond encryption at rest — it physically separates personal data from the systems that process general business content.
When a regulator asks how you protect personal data in your AI systems, "we remove it before the AI ever sees it" is a compelling answer.
The Bigger Picture: SCRS and PII
PII vaulting does not work in isolation. It is part of Other Me's patent-pending SCRS (Secure Context Retrieval System) and its Dual-Gate architecture.
- Gate 1 — Block Before Search: Ensures the AI only searches data the user is authorised to access. PII vaulting means that even within authorised data, personal information is not visible to the AI during search.
- Gate 2 — Verify Before Showing: Cryptographically verifies every piece of data before it reaches the user. PII rehydration adds an additional authorisation check on top of this.
Together, these layers mean that personal data is protected at every stage: before search, during retrieval, and at the point of display.
What About Data Subject Requests?
Under UK GDPR, individuals have the right to ask what personal data you hold about them and to request its deletion. Because Other Me vaults PII in a structured, linked format, responding to these requests is straightforward.
Need to find all data relating to a specific person? The vault is indexed by PII type and value. Need to delete someone's personal data? Remove it from the vault. The redacted documents in the search index remain functional — they just permanently show [PERSON_1] instead of the deleted name.
This is significantly easier than trying to find and remove personal data scattered across thousands of indexed documents in a traditional AI system.
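As a rough illustration of why a single structured vault makes this simpler, here is a sketch of a subject access lookup and erasure. In a real deployment the vault would be indexed (for example, by a keyed hash of each value) rather than decrypting every record, and these operations would themselves be authorised and logged:

```python
from cryptography.fernet import Fernet

def subject_access_report(subject_value: str, vault_records: list[dict],
                          cipher: Fernet) -> list[dict]:
    """List every vaulted item that matches a data subject's value (e.g. a name)."""
    return [
        {"document_id": r["document_id"], "pii_type": r["pii_type"], "vaulted_at": r["vaulted_at"]}
        for r in vault_records
        if cipher.decrypt(r["ciphertext"].encode()).decode() == subject_value
    ]

def erase_subject(subject_value: str, vault_records: list[dict],
                  cipher: Fernet) -> list[dict]:
    """Delete the subject's vault records and return what remains.

    The redacted documents in the search index are untouched; their
    placeholders simply can no longer be rehydrated.
    """
    return [
        r for r in vault_records
        if cipher.decrypt(r["ciphertext"].encode()).decode() != subject_value
    ]
```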
Getting Started
PII vaulting is built into every Other Me account. There is nothing extra to configure or enable. Upload your documents, and the system handles detection, vaulting, redaction, and controlled release automatically.
Other Me is available now. Pro accounts are £24/month. Member accounts are £15/month each. Enterprise pricing is available for organisations needing custom deployment. View pricing
Your data deserves more than access controls and good intentions. It deserves a system where personal information is separated, encrypted, and only released when it should be.
Pop Hasta Labs Ltd is registered at UK Companies House (No. 16742039). SCRS is protected under UK Patent Application No. 2602911.6.