
What the CBUAE's New AI Guidance Actually Says About Your Vendors

The CBUAE's 2026 AI guidance makes licensed financial institutions accountable for every AI system they deploy -- including what their vendors built.

In February 2026, the Central Bank of the UAE (CBUAE) published its Guidance Note on Consumer Protection and Responsible Adoption and Use of Artificial Intelligence and Machine Learning by Licensed Financial Institutions. The document is non-binding. The expectation it creates is not.

One sentence in particular deserves attention from every compliance officer and procurement team in the Gulf: licensed financial institutions remain fully accountable for AI outcomes, including where AI systems or services are provided by third parties.

Read that again. The CBUAE is not distinguishing between AI you built and AI you bought. If a vendor's model produces a discriminatory credit decision, an opaque fraud flag, or a compliance failure, the accountability sits with the institution that deployed it. The vendor contract does not transfer the regulatory exposure.

This is not a new principle in financial regulation. Outsourcing risk has always worked this way. What is new is that it now applies explicitly to AI systems, at a moment when Gulf banks are signing contracts for AI-embedded software at a pace that their governance frameworks have not kept up with.

What the Guidance Actually Requires

The CBUAE guidance establishes five principles for licensed financial institutions deploying AI: governance and accountability, fairness and non-discrimination, transparency and explainability, effective human oversight, and data management and privacy. Each of these has direct implications for how institutions evaluate and contract with AI vendors.

On governance, the guidance requires documented AI governance frameworks proportionate to the institution's size and complexity, with clear roles assigned across risk, compliance, internal audit, and technology functions. This means knowing what every material AI system in your stack does, who is responsible for it, and how decisions made by that system can be explained and reviewed.

On outsourcing, the guidance is specific: contracts with third-party AI providers must include audit rights, cybersecurity guarantees, and the operational capability to cease using a vendor's system immediately if governance requirements are breached. Most vendor contracts currently in use across the region do not include these provisions. Most procurement processes did not require them.

On transparency, financial institutions must be able to explain AI-driven decisions to consumers and provide a human review mechanism for high-impact decisions, defined as any AI determination that materially affects a customer's access to financial products or services. Credit approvals, insurance pricing, fraud flags, and AML decisions all qualify. If the model making those decisions is a third-party black box, explaining its outputs is not straightforward.
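To make the review mechanism concrete, here is a minimal sketch, in Python, of how a deployment might gate determinations so that high-impact ones queue for a human before anything reaches the customer. Everything in it -- the use-case labels, the queue, the function name -- is an illustrative assumption, not a prescribed design.

```python
# Illustrative sketch only: gate AI determinations so that high-impact
# ones go to a human reviewer before they reach the customer. The
# use-case labels and queue are hypothetical, not a prescribed schema.

HIGH_IMPACT_USES = {"credit_approval", "insurance_pricing", "fraud_flag", "aml_decision"}


def route_determination(use_case: str, model_output: dict, review_queue: list) -> dict:
    """Return the outcome to act on; high-impact cases wait for a human."""
    if use_case in HIGH_IMPACT_USES:
        # Capture enough context for the reviewer to explain the decision
        # to the consumer later, which the transparency principle requires.
        review_queue.append({"use_case": use_case, "model_output": model_output})
        return {"status": "pending_human_review"}
    return {"status": "auto_decided", **model_output}
```

The point is architectural: the human review path exists before go-live, not after an examiner asks about it.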

The Procurement Gap

The problem is structural, and it predates the CBUAE guidance. Gulf banks have been buying AI-embedded software the same way they buy other enterprise software: evaluating functionality, negotiating price, reviewing standard contract terms, and signing. The governance layer gets bolted on afterward, if at all.

That process worked tolerably well when software was deterministic. You could test it, audit it, and hold the vendor accountable through contract terms that mapped to predictable failure modes. AI systems do not work that way. A model's behavior depends on its training data, its fine-tuning, its deployment context, and the feedback loops created by real-world use. None of that is visible in a vendor demo or a standard contract.

What procurement teams at Gulf financial institutions are often buying without knowing it: exposure to training data they have not reviewed, model behaviors they cannot audit, regulatory obligations they cannot meet without vendor cooperation, and contractual structures that do not give them the rights the CBUAE now expects them to have.

The EU AI Act, which reaches providers and deployers outside the EU when a system's output is used in the Union, creates a parallel set of obligations for institutions with European business. Under the Act, deployers of high-risk AI systems -- credit scoring is listed explicitly, and fraud and AML tooling can fall in scope depending on how it is used -- carry obligations around documentation, human oversight, and incident reporting that flow directly from the vendor relationship. A MENA bank with DIFC or ADGM operations, EU correspondent banking relationships, or European clients is not operating outside that framework's reach.

What Good Vendor Due Diligence Looks Like

Before a contract is signed, a financial institution should be able to answer a specific set of questions about any AI system it intends to deploy in a material function.

What data was the model trained on, and how was that data governed? This matters for bias risk, data privacy compliance, and the institution's ability to explain model behavior to a regulator.

What is the vendor's own AI governance framework, and can they demonstrate it? A vendor that cannot produce documented governance, testing methodology, and incident response procedures is not a vendor that can meet the CBUAE's outsourcing expectations.

What audit rights does the contract provide, and are they exercisable in practice? A contractual right to audit that requires 90 days' notice and vendor cooperation is not the same as an operational capability to review what the model is doing.

Can the institution cease using the system immediately if required? The CBUAE guidance is explicit on this point. If the answer depends on a vendor's cooperation or a contract negotiation, the institution does not have the capability the guidance requires; the sketch after these questions shows one way that capability can be built into the deployment.

How does the system handle high-impact decisions, and what is the human review mechanism? For credit, insurance, and AML applications, this is not optional. It needs to be designed into the deployment, not added after an examiner asks about it.

These questions are not difficult to ask. They are rarely asked systematically before contracts are signed.
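On the cease-use question specifically, the capability is architectural as much as contractual. A minimal sketch, assuming hypothetical names throughout (the flag store, the vendor client, the 0.7 threshold), of a kill switch the institution controls:

```python
# Illustrative sketch only: a kill switch the institution owns, wrapped
# around a third-party model. Flipping the flag cuts traffic to the
# vendor without a contract negotiation. All names are hypothetical.

from enum import Enum


class Decision(Enum):
    APPROVE = "approve"
    MANUAL_REVIEW = "manual_review"  # conservative fallback


class VendorKillSwitch:
    """Routes around a vendor model when the institution disables it."""

    def __init__(self, vendor_client, flag_store):
        self.vendor_client = vendor_client  # hypothetical vendor SDK
        self.flag_store = flag_store        # institution-owned config flag

    def decide(self, application: dict) -> Decision:
        # The flag lives with the institution, not the vendor, so ceasing
        # use is an operational action, not a request.
        if not self.flag_store.is_enabled("vendor_credit_model"):
            return Decision.MANUAL_REVIEW
        try:
            score = self.vendor_client.score(application)  # assumed API
        except Exception:
            # A vendor outage degrades to human review, never to a
            # silent automated approval.
            return Decision.MANUAL_REVIEW
        # 0.7 is an illustrative threshold, not a recommendation.
        return Decision.APPROVE if score >= 0.7 else Decision.MANUAL_REVIEW
```

Whatever form it takes, the test is the one the guidance implies: can a named person at the institution stop the system today, without asking the vendor first?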

The Practical Implication

The CBUAE guidance does not set a compliance deadline. It establishes a supervisory trajectory, and the direction is clear. Institutions that treat AI governance as an innovation question are going to find themselves answering it as a compliance question, on a regulator's timeline rather than their own.

For procurement teams, the immediate implication is that vendor evaluation needs a governance layer it does not currently have. For compliance and risk teams, it means existing AI model inventories need to include third-party systems, not just internally built tools. For legal teams, it means standard vendor contracts need to be reviewed against what the guidance actually requires.
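One practical starting point, sketched below with illustrative field names, is to extend the model inventory schema so that a third-party system cannot be registered without answering the questions above:

```python
# Illustrative sketch only: an inventory record in which bought systems
# are first-class entries alongside built ones. Field names are
# assumptions, not a regulatory schema.

from dataclasses import dataclass
from typing import Optional


@dataclass
class AISystemRecord:
    system_name: str
    business_function: str          # e.g. "retail credit scoring"
    origin: str                     # "built" or "bought"
    vendor: Optional[str]           # required when origin is "bought"
    accountable_owner: str          # a named role, not a shared inbox
    high_impact: bool               # triggers the human review mechanism
    training_data_reviewed: bool
    audit_rights_exercisable: bool  # in practice, not only on paper
    can_cease_immediately: bool     # the cease-use expectation above
```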

None of this requires waiting for the guidance to become binding. The supervisory expectation is already set.


If you are evaluating AI vendor contracts against the CBUAE guidance or EU AI Act obligations in a MENA financial institution context, I am happy to discuss what a structured review looks like in practice.


Rabii Agoujgal is an AI governance professional based in Casablanca, Morocco, specializing in the MENA region and the EU-MENA regulatory corridor. He advises regulated enterprises, international development organizations, and government clients on AI governance strategy, compliance readiness, and policy. He works in Arabic and English.
