Enterprise Governance · 10 min read

AI in Financial Services: Meeting FCA Expectations

Abhishek Sharma

Founder & CEO, Pop Hasta Labs

UK financial services firms are under growing pressure to adopt AI. The productivity gains are real. Competitors are moving fast. But for banks, asset managers, insurance firms, and non-bank financial companies, adoption comes with a set of regulatory expectations that consumer-grade AI tools simply cannot meet.

The Financial Conduct Authority (FCA) and Prudential Regulation Authority (PRA) have not banned AI. They have made it clear that firms must adopt it responsibly — with appropriate governance, risk management, and oversight. This article explains what that means in practice and how governed AI platforms can address these requirements structurally.

The Regulatory Landscape

The UK does not yet have AI-specific legislation for financial services in the way the EU AI Act creates binding rules. Instead, the FCA and PRA expect firms to apply existing regulatory principles to AI adoption. These include:

  • Senior Managers and Certification Regime (SM&CR): Individual accountability for AI-related decisions and outcomes.
  • Operational resilience requirements: AI systems must not create single points of failure or undermine important business services.
  • UK GDPR: Data minimisation, purpose limitation, and individual rights apply to all AI processing of personal data.
  • Consumer Duty: AI-assisted decisions must deliver good outcomes for retail customers.
  • Third-party risk management: Use of external AI providers creates outsourcing obligations.

The FCA's position: Firms are expected to understand and manage the risks of AI, not avoid it. The regulator's concern is ungoverned adoption, not adoption itself.

Operational Resilience

Since March 2022, the FCA and PRA have required firms to identify their important business services and set impact tolerances for disruption. AI introduces new considerations here.

If your firm uses AI for client-facing processes — document review, risk assessment, customer communication — that AI capability may sit within an important business service. This means you need to consider:

  • What happens if the AI provider goes down? Do you have fallback procedures? Can staff complete the work manually?
  • What happens if the AI produces incorrect outputs? Are there human review steps? Can errors be caught before reaching clients?
  • Can you switch providers if needed? Vendor lock-in with a single AI model creates concentration risk.

A BYOK (Bring Your Own Keys) architecture directly addresses the availability and concentration questions. When you hold your own API keys for multiple AI providers, switching from one to another takes minutes, not months. There is no vendor lock-in at the model layer.
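As a sketch of what this looks like in practice, the configuration below keeps the firm's own API keys in its own environment or secret store and makes the active provider a one-line setting. The provider names, environment variables, and model identifiers are illustrative rather than a description of any particular platform.

```python
# Minimal BYOK sketch: the firm holds its own API keys, so the active
# provider is a configuration choice rather than a migration project.
# Provider names, env vars, and model identifiers are illustrative.
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class ProviderConfig:
    name: str           # e.g. "openai", "anthropic"
    api_key_env: str    # environment variable holding the firm's own key
    default_model: str  # model identifier used with this provider


PROVIDERS = {
    "openai": ProviderConfig("openai", "OPENAI_API_KEY", "gpt-4o"),
    "anthropic": ProviderConfig("anthropic", "ANTHROPIC_API_KEY", "claude-sonnet"),
}

ACTIVE_PROVIDER = "openai"  # switching providers is a one-line change


def get_active_credentials() -> tuple[ProviderConfig, str]:
    """Resolve the active provider and the firm's own API key for it."""
    config = PROVIDERS[ACTIVE_PROVIDER]
    api_key = os.environ[config.api_key_env]  # the key never leaves the firm's control
    return config, api_key
```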

Data Minimisation Under UK GDPR

Article 5(1)(c) of UK GDPR requires that personal data be adequate, relevant, and limited to what is necessary. This creates a specific challenge for AI systems that use Retrieval Augmented Generation (RAG) — the technique of pulling relevant documents into an AI prompt to generate informed answers.

Most RAG systems retrieve first and filter second. The AI searches across all available data, pulls back everything that might be relevant, and then filters the results before showing them to the user. The problem: the AI has already processed data that the user may not be authorised to access. This conflicts with the data minimisation principle.

Pre-retrieval enforcement solves this structurally. With a system like Other Me's patent-pending SCRS (Secure Context Retrieval System, UK Patent Application No. 2602911.6), the AI is prevented from searching data that the user should not access. The Dual-Gate architecture — Gate 1 blocks before search, Gate 2 verifies before showing — means only authorised data is ever processed.
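SCRS itself is patent-pending and its internals are not described here, so the sketch below illustrates only the general pre-retrieval pattern this paragraph describes: Gate 1 restricts the search scope to sources the user is entitled to see before any retrieval happens, and Gate 2 re-checks every result before it reaches the prompt. All names and data structures are hypothetical.

```python
# Illustrative pre-retrieval enforcement (not the SCRS implementation).
# Gate 1 filters the search scope *before* retrieval; Gate 2 re-verifies
# each retrieved document before it is added to the prompt.
from collections.abc import Callable
from dataclasses import dataclass


@dataclass
class Document:
    source_id: str
    acl: set[str]   # roles permitted to read this document
    text: str


def gate1_search_scope(user_roles: set[str], catalogue: list[Document]) -> list[Document]:
    """Gate 1: build the search scope from entitlements, before any search runs."""
    return [doc for doc in catalogue if doc.acl & user_roles]


def gate2_verify(user_roles: set[str], retrieved: list[Document]) -> list[Document]:
    """Gate 2: defence in depth, re-checking every result before it is shown."""
    return [doc for doc in retrieved if doc.acl & user_roles]


def answer_query(
    user_roles: set[str],
    query: str,
    catalogue: list[Document],
    search: Callable[[str, list[Document]], list[Document]],
    llm: Callable[[str], str],
) -> str:
    scope = gate1_search_scope(user_roles, catalogue)   # unauthorised data is never searched
    results = gate2_verify(user_roles, search(query, scope))
    context = "\n\n".join(doc.text for doc in results)
    return llm(f"Context:\n{context}\n\nQuestion: {query}")
```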

Model Risk Management

The PRA's model risk management expectations (SS1/23) apply to AI models used in regulated activities. Key requirements include:

  • Model inventory: Firms must know which AI models are in use across the organisation.
  • Validation: Models used for regulated decisions must be tested and validated before deployment.
  • Ongoing monitoring: Model performance must be tracked over time. Drift, bias, and accuracy degradation need to be identified.
  • Documentation: The purpose, limitations, and assumptions of each model must be recorded.

Shadow AI — where employees use unapproved AI tools — makes model risk management impossible. You cannot manage what you cannot see. A governed AI platform that centralises all AI usage gives you the visibility required to maintain a model inventory and monitor usage patterns.
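To illustrate what a centralised inventory might record against the SS1/23 points above, the sketch below uses example fields; the exact schema would depend on the firm's model risk framework, and every name here is illustrative.

```python
# Illustrative model inventory entry covering the SS1/23 themes above:
# what is in use, who owns it, whether it has been validated, how it is
# monitored, and where its purpose and limitations are documented.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ModelInventoryEntry:
    model_id: str                   # e.g. provider model identifier plus version
    provider: str                   # external provider or "internal"
    business_use: str               # the regulated activity the model supports
    owner: str                      # accountable senior manager (SM&CR)
    validated_on: date | None       # date of last independent validation
    monitoring_metrics: list[str] = field(default_factory=list)  # drift, bias, accuracy checks
    documentation_url: str = ""     # purpose, limitations, and assumptions


# Maintained centrally, which is only possible when AI usage is visible.
inventory: list[ModelInventoryEntry] = []
```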

Third-Party Risk: External AI Providers

When your firm uses an external AI provider — OpenAI, Anthropic, Google, or others — that relationship creates third-party risk obligations under both FCA and PRA rules.

Key considerations:

  • Data processing agreements: You need a clear contractual basis for each AI provider that processes your data.
  • Sub-outsourcing: Does the AI provider use sub-processors? Where is the data processed geographically?
  • Exit planning: Can you move to a different provider without losing access to your data or disrupting business services?
  • Concentration risk: Over-reliance on a single AI provider creates systemic risk.

BYOK simplifies this significantly. Because you hold the direct contractual relationship with each AI provider, your data processing agreements are between you and the provider — not between you, an intermediary platform, and the provider. You control the relationship, and you can exit at any time.

Practical tip: Maintain BYOK connections with at least two AI providers. This reduces concentration risk and gives you a ready fallback if one provider experiences issues.
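A minimal sketch of that fallback pattern follows, assuming two independently contracted providers; call_provider is a stand-in for whichever SDK call the firm actually makes with its own keys.

```python
# Illustrative primary/secondary failover between two BYOK providers.
# call_provider stands in for the real SDK call made with the firm's own key.
import logging


def call_provider(provider_name: str, prompt: str) -> str:
    """Placeholder for the actual provider SDK call."""
    raise NotImplementedError


def generate_with_fallback(prompt: str, primary: str = "anthropic", secondary: str = "openai") -> str:
    try:
        return call_provider(primary, prompt)
    except Exception as error:  # outage, rate limiting, contract issue
        # Record the failover so the audit trail shows which provider answered.
        logging.warning("%s unavailable (%s); failing over to %s", primary, error, secondary)
        return call_provider(secondary, prompt)
```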

Audit Trail Requirements

For regulated activities, firms must be able to demonstrate to the FCA and PRA exactly what happened, when, and why. This applies to AI-assisted decisions just as it does to human decisions.

An adequate AI audit trail should capture:

  • Who made the AI query (user identity and role)
  • What data was accessed (which documents or data sources were used)
  • What the AI produced (the full response)
  • When the interaction occurred (timestamp)
  • Which model was used (provider and model version)
  • What access controls were in effect (what the user was and was not permitted to see)

Most consumer AI tools provide none of this. Enterprise AI platforms vary widely in the depth of their audit logging. For financial services firms, the audit trail is not optional — it is a regulatory requirement.
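As an illustration, an audit record covering those fields might look like the sketch below; the structure and field names are examples rather than a prescribed schema.

```python
# Illustrative audit record for a single AI interaction. In practice this
# would be written to append-only storage and retained for regulatory review.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AIAuditRecord:
    user_id: str                 # who made the query
    user_role: str               # role in effect at the time
    sources_accessed: list[str]  # documents or data sources retrieved
    response: str                # the full AI output
    timestamp: datetime          # when the interaction occurred
    provider: str                # which AI provider served the request
    model_version: str           # exact model identifier
    access_scope: list[str]      # access controls in effect for the user


def record_interaction(user_id: str, user_role: str, sources: list[str], response: str,
                       provider: str, model_version: str, access_scope: list[str]) -> AIAuditRecord:
    """Capture one interaction with a UTC timestamp."""
    return AIAuditRecord(user_id, user_role, sources, response,
                         datetime.now(timezone.utc), provider, model_version, access_scope)
```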

100% of AI interactions should be logged with full audit trails for regulated financial activities

Consumer Duty and AI-Assisted Decisions

The FCA's Consumer Duty, which came into force in July 2023, requires firms to deliver good outcomes for retail customers across four areas: products and services, price and value, consumer understanding, and consumer support.

When AI assists in decisions that affect retail customers — such as product recommendations, risk assessments, or complaint handling — the firm must be able to show that:

  • The AI output was accurate and appropriate for the customer's situation.
  • The customer was not disadvantaged by the use of AI compared to human decision-making.
  • Any AI-generated communications were clear, fair, and not misleading.
  • There is a human review process for significant AI-assisted decisions.

This is not about avoiding AI in customer-facing processes. It is about ensuring that AI improves outcomes rather than creating new risks. Governed AI with full audit trails and access controls supports this by making every AI-assisted decision traceable and reviewable.

A Structural Approach to Compliance

The common thread across all these regulatory requirements is governance by design — building compliance into the structure of how AI is used, rather than bolting it on afterwards.

A governed AI platform that addresses FCA and PRA expectations should provide:

  • Pre-retrieval enforcement: Data minimisation is built into the architecture, not applied as an afterthought. SCRS ensures users can only access data they are authorised to see.
  • Complete audit trails: Every interaction is logged with user identity, data accessed, model used, and full response — ready for regulatory review.
  • BYOK for AI models: Direct relationships with AI providers, no vendor lock-in, clear exit planning, and reduced concentration risk.
  • BYOK for encryption: Customer-managed encryption keys ensure that data at rest is protected under your control.
  • Role-based access controls: Different teams and individuals see different data, enforced before the AI even begins to search.

The FCA is not asking firms to avoid AI. It is asking them to adopt it in a way that is consistent with existing regulatory obligations. The firms that get this right will gain competitive advantage. The firms that adopt AI without governance will create regulatory exposure.

Other Me is built for exactly this environment. As a governed AI platform from Pop Hasta Labs Ltd (UK Companies House 16742039), it combines patent-pending SCRS security (UK Patent Application No. 2602911.6), BYOK architecture, and comprehensive audit trails into a platform designed for regulated industries.

Pricing starts at £15 per user per month for Member plans and £24 per month for Pro plans, with custom Enterprise pricing available for larger deployments.

The path forward for UK financial services is clear: adopt AI, but adopt it with the governance that your regulators expect and your clients deserve.

Abhishek Sharma

Founder & CEO of Pop Hasta Labs. Building Other Me — the governed AI platform with patent-pending security architecture. Based in London.
