Employee departure is a data security event
When an employee hands in their notice, most IT teams already have a well-rehearsed checklist they follow almost on autopilot: disable email, revoke building access, collect the laptop, remove them from internal systems. This process has been refined over decades and most companies execute it reliably, which is a good thing.
However, there is a gaping hole in almost every company's offboarding process, and that hole is AI data. Over the past two years, employees across every department have been using AI tools to do their jobs: they have pasted client information into ChatGPT, asked Claude to analyse financial data and used Copilot to draft strategy documents. All of those conversations, which contain the company's most sensitive information, sit in accounts the company does not own or control. When that employee walks out the door, so does the data.
The problem with personal AI accounts
I believe the biggest challenge here is that the vast majority of AI usage in the workplace happens through personal accounts. An employee signs up for ChatGPT with their personal email, perhaps on the free tier or perhaps on a paid plan they expense, but either way the account belongs to them and not to the company. This detail is routinely overlooked in practice.
In practice, this means the company has no visibility into what data has been shared with AI, no ability to delete that data and no way to revoke access when the employee leaves. The conversations persist in that person's account indefinitely, and they can access them from any device, at any time, from any future employer. Even if the employee has no malicious intent, the data remains exposed: their ChatGPT history is essentially a searchable archive of the company's confidential information, sitting on OpenAI's servers, linked to an account you cannot touch.
What data is at risk
If you think about what a typical knowledge worker might have shared with AI over six months of daily use, the picture becomes concerning. They will have shared client information such as names, contact details, contract terms, project specifics and confidential instructions from clients who never consented to their data being shared with a third-party AI service. There is business strategy: competitive analysis, pricing strategies, product roadmaps and board-level discussions pasted into AI for summarisation or refinement. There is financial data: revenue figures, cost structures, margin analysis and budget proposals used in AI-assisted reporting. There are trade secrets: proprietary processes, technical specifications and intellectual property shared with AI for documentation or problem-solving. And there is HR and personnel data: performance reviews, salary information and disciplinary matters used in AI-assisted people management. All of it sits in accounts the company cannot reach.
The uncomfortable truth: If you do not know what data your employees have shared with AI, you cannot protect it when they leave. Right now most companies have zero visibility into this data, and the problem will only grow with time.
The legal angle: GDPR and data controller responsibilities
This is where things get especially serious from a regulatory standpoint. Under UK GDPR, the company is the data controller for personal data processed in the course of business, which means you have a legal obligation to know where that data is, to protect it and to be able to delete it when required. When an employee pastes client personal data into a personal ChatGPT account, personal data has been transferred to a third-party processor without a proper data processing agreement, the data has likely been transferred outside the UK without adequate safeguards, you have lost the ability to fulfil deletion requests under Article 17 and you cannot demonstrate compliance with Article 5's storage limitation principle.
When the employee leaves, your exposure does not end; it crystallises. You now have personal data sitting in a system you cannot access, controlled by a person who no longer works for you, stored by a company with no obligation to delete it at your request. From an ICO enforcement perspective, the question is not whether data left the company but whether you had adequate measures in place to prevent it, which is a much harder question to answer when AI tools are involved.
How other platforms handle offboarding
It is worth looking at what actually happens when you try to offboard an employee from the major AI platforms, because the reality is not reassuring.
ChatGPT (OpenAI)
If the employee used a personal account, you have no offboarding capability at all: you cannot see their conversations, delete their data or revoke their access. ChatGPT Team and Enterprise plans offer admin controls, but they do not address the data that has already been stored in OpenAI's systems. Deleting a workspace does not guarantee deletion from training data pipelines, and that is a fundamental concern.
Microsoft Copilot
Copilot integrates with Microsoft 365, so deactivating the user's M365 account does remove their access, which is a step in the right direction. However, Copilot's data handling is tied to Microsoft's broader data retention policies, and the specific handling of AI conversation data during offboarding remains opaque. There is no cryptographic guarantee that the data is unrecoverable, and that is the key gap.
Claude (Anthropic)
As with ChatGPT, personal Claude accounts belong to the individual. Team plans offer some admin controls, but the fundamental problem remains: once data has been processed by the model, deleting the conversation does not guarantee the data is truly gone from all systems.
The pattern: Every major AI platform treats deletion as an access control problem, which means they simply remove the user's ability to see the data. None of them treat it as a cryptographic problem where the data is made mathematically unrecoverable, and that distinction is everything.
Cryptographic key revocation: the kill switch
This is where the approach we built at Other Me is fundamentally different, and I am genuinely proud of what we have achieved here. Our patent-pending SCRS (Secure Context Retrieval System) does not just delete data when an employee leaves; it makes the data mathematically unrecoverable. I believe this is the only approach that holds up in real-world enterprise environments, where data flows through complex pipelines.
Here is how it works in practice. When data flows through Other Me, sensitive information is encrypted with keys that are unique to each user and managed by the company. The actual data stored in AI conversation logs is encrypted ciphertext; without the corresponding decryption key it is meaningless noise. When an employee leaves, the administrator revokes their cryptographic keys. This is not the same as deleting files or removing access permissions; it is a mathematical guarantee. Once the keys are revoked, the encrypted data remains in storage but cannot be decrypted by anyone: not the former employee, not the company, not us, not the AI model provider. Even if someone gained access to the raw stored data, they would have ciphertext without keys, which is computationally impossible to reverse. The revocation is instant and irreversible, with no grace period and no recovery mechanism, and it applies retroactively to all historical conversations, not just future ones.
This distinction matters: it is not data deletion, it is something stronger. Deletion relies on every system in a complex pipeline actually removing the data, which in practice is very hard to guarantee. Key revocation relies on mathematics, and the data becomes permanently inaccessible regardless of where copies might exist, which gives a level of certainty that deletion alone simply cannot provide.
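To make the mechanism concrete, here is a minimal sketch of per-user encryption with a revocable key, assuming a simple in-memory key store. The names and structure are illustrative rather than Other Me's actual SCRS implementation; the point is only to show why destroying a key renders every ciphertext encrypted under it unrecoverable.

# Minimal sketch: per-user encryption with key revocation.
# Illustrative only; class and method names are hypothetical, not SCRS.
from cryptography.fernet import Fernet

class UserKeyStore:
    """Company-managed store of per-user encryption keys."""

    def __init__(self):
        self._keys: dict[str, bytes] = {}

    def issue_key(self, user_id: str) -> None:
        # One symmetric key per employee, held by the company, never the vendor.
        self._keys[user_id] = Fernet.generate_key()

    def encrypt(self, user_id: str, plaintext: bytes) -> bytes:
        # Conversation data is stored only as ciphertext.
        return Fernet(self._keys[user_id]).encrypt(plaintext)

    def decrypt(self, user_id: str, ciphertext: bytes) -> bytes:
        key = self._keys.get(user_id)
        if key is None:
            raise PermissionError("key revoked: data is unrecoverable")
        return Fernet(key).decrypt(ciphertext)

    def revoke(self, user_id: str) -> None:
        # The kill switch: destroying the key makes every ciphertext encrypted
        # under it, including historical conversations, permanently unreadable.
        self._keys.pop(user_id, None)

store = UserKeyStore()
store.issue_key("alice")
blob = store.encrypt("alice", b"Q3 margin analysis for client X")

store.revoke("alice")          # employee leaves
store.decrypt("alice", blob)   # raises PermissionError: the blob is now noise

The guarantee only holds if the key never leaves company-controlled storage and no other copies of it exist, which is exactly the property a platform has to design for.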
Building an AI offboarding policy
Whether you use Other Me or not, every company needs an AI offboarding process, and the sooner you start building one the better positioned you will be. Awareness of this risk is still low in most companies, which means there is an opportunity to get ahead of the curve and give clients and stakeholders confidence that their data is being handled responsibly.
Audit current AI usage
Before you can offboard AI data, you need to know what exists. Survey your teams about which AI tools they use, what types of data they share and whether they use personal or corporate accounts. The results will likely be sobering, but they are essential, and the audit gives you a baseline to measure progress against later.
Migrate to corporate-controlled accounts
I believe moving all AI usage from personal accounts to corporate-managed platforms, where the IT team has administrative control, is the single most impactful step you can take. Personal AI accounts are a data governance blind spot that no policy can adequately address, and getting everyone onto managed accounts should be a priority if you want to reduce your exposure quickly.
Add AI to your offboarding checklist
Update your HR and IT offboarding procedures to include AI-specific steps: revoke AI platform access, review and archive relevant AI conversation data, revoke cryptographic keys if you use a platform that supports this, and document what data the departing employee had access to. Make sure the process is documented clearly so it can be followed consistently regardless of who is executing it.
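As a rough illustration, those steps can be captured in a small runbook so the checklist is executed and logged the same way every time. Everything below is a placeholder for your own integrations with HR, IT and AI platform tooling; none of the steps call a real vendor API.

# Sketch of an AI offboarding runbook; the steps are placeholders, not vendor APIs.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OffboardingRecord:
    employee_id: str
    completed_steps: list[str] = field(default_factory=list)

    def log(self, step: str) -> None:
        # Timestamp every step so the process is auditable after the fact.
        stamp = datetime.now(timezone.utc).isoformat()
        self.completed_steps.append(f"{stamp} {step}")

def offboard_ai_access(employee_id: str) -> OffboardingRecord:
    record = OffboardingRecord(employee_id)
    # 1. Deactivate the user on every corporate-managed AI platform.
    record.log("revoked AI platform access")
    # 2. Review and archive conversation data the business needs to retain.
    record.log("archived relevant AI conversation data")
    # 3. Revoke the employee's cryptographic keys, where the platform supports it.
    record.log("revoked per-user encryption keys")
    # 4. Document what data the departing employee had access to.
    record.log("documented AI data exposure")
    return record

print(offboard_ai_access("emp-1042").completed_steps)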
Implement data classification for AI
Not all data carries the same risk. Create clear guidelines about which categories of information can and cannot be used with AI tools. Client personal data, financial records and trade secrets should never flow through AI systems that do not offer cryptographic data protection, and having this classification in place gives the whole company confidence that sensitive information is being treated with the care it deserves.
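One way to make the classification operational is a simple policy table that tools and reviewers can check against. The categories beyond those named above, and the default rule for unclassified data, are illustrative assumptions rather than a definitive standard.

# Sketch of an AI data classification policy; extra categories and the
# default rule are illustrative assumptions.
from enum import Enum

class AiPolicy(Enum):
    PROTECTED_ONLY = "only via platforms with cryptographic data protection"
    ALLOWED = "may be used with approved AI tools"

CLASSIFICATION = {
    "client_personal_data": AiPolicy.PROTECTED_ONLY,
    "financial_records": AiPolicy.PROTECTED_ONLY,
    "trade_secrets": AiPolicy.PROTECTED_ONLY,
    "public_marketing_copy": AiPolicy.ALLOWED,   # illustrative category
}

def policy_for(category: str) -> AiPolicy:
    # Default to the most restrictive treatment for anything unclassified.
    return CLASSIFICATION.get(category, AiPolicy.PROTECTED_ONLY)

print(policy_for("financial_records").value)
print(policy_for("unknown_category").value)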
Choose platforms with genuine offboarding capabilities
When evaluating AI platforms, ask specifically about offboarding: not just user deactivation but actual data protection after departure. Can you revoke access retroactively? Is the data cryptographically protected? Can you prove to regulators that the data is unrecoverable? If the vendor cannot answer these questions clearly, the platform is not ready for enterprise use.
The test: Ask your AI vendor what happens to conversation data when you offboard a user. If the answer involves anything less than cryptographic key revocation, your data is still exposed after they leave, and that should be a dealbreaker for any company that takes data security seriously.
The bottom line
Every employee who leaves the company takes knowledge with them; that has always been true. However, AI has created a new and especially concerning category of risk: structured, searchable archives of your most sensitive data sitting in systems you do not control, accessible to people who no longer work for you. This is not a hypothetical problem; it is happening right now in companies of every size.
The solution is not to ban AI; that would be counterproductive and impractical. The solution is to use AI platforms that treat employee departure as the data security event it is. Cryptographic key revocation is the only approach that provides a mathematical guarantee of data protection: not just access removal but genuine unrecoverability. Your offboarding process was built for an era before AI, and it is time to update it.