
What the EU AI Act Means for Organizations Operating Across Europe and MENA

The EU AI Act is the world's first comprehensive AI regulation, and its extraterritorial reach makes it directly relevant to organizations operating in MENA markets. Here is what cross-border exposure looks like and how to think about it.

In the first quarter of 2026, Gulf financial institutions began receiving substantive compliance questions from European counterparties about their AI systems. These questions came not from regulators but from procurement and risk teams on the other side of transactions. The EU AI Act's extraterritorial reach is no longer theoretical; it is a due diligence item.

Extraterritorial Reach Is Not Theoretical

Like the GDPR before it, the EU AI Act applies beyond EU borders. The key trigger is not where a company is incorporated; it is where the AI system's outputs are used. A financial institution headquartered in Riyadh that deploys a credit-scoring model affecting EU-based customers is in scope. A Gulf technology company whose AI product is licensed to a European bank is in scope. An international organization deploying AI systems in Europe is in scope.

For MENA organizations with European operations, partnerships, or clients, this is not a distant compliance horizon; it is a current operational reality.

Risk Tiering and What It Means in Practice

The EU AI Act organizes AI systems into four risk tiers: prohibited practices, high-risk systems, limited-risk systems, and minimal-risk systems. The compliance burden scales with risk classification.

Prohibited practices, including social scoring by public authorities, real-time biometric surveillance in public spaces, and systems that exploit vulnerable groups, have applied since February 2025.

High-risk systems are the core of the Act's compliance regime: AI used in employment, education, critical infrastructure, law enforcement, border control, the administration of justice, and several categories of financial products and services. Obligations for these systems include conformity assessments, technical documentation, logging and audit trails, human oversight mechanisms, and registration in the EU database for high-risk AI systems. For regulated industries such as financial services, healthcare, insurance, and infrastructure, the probability of operating at least one high-risk AI system is significant.

Limited-risk systems, such as chatbots and synthetic media, carry transparency obligations: users must know they are interacting with AI. Minimal-risk systems, the remaining majority, face no new mandatory obligations under the Act.

The Cross-Border Compliance Challenge

Organizations operating in both MENA and EU markets face a layered challenge. They must comply with EU AI Act obligations for EU-facing systems, while also navigating domestic AI governance frameworks that are at different stages of development.

Saudi Arabia's SDAIA (the Saudi Data and Artificial Intelligence Authority) has published national AI ethics principles and sector-specific guidance. The UAE has issued AI ethics guidelines and is developing sector regulations. Morocco operates under Law 09-08 for personal data protection. None of these frameworks is fully harmonized with the EU AI Act, but there are meaningful overlaps around fairness, transparency, and human oversight.

The practical question for cross-border organizations is not whether to comply with each regime independently, but how to design governance structures that satisfy multiple frameworks simultaneously, without building parallel systems for every jurisdiction.

What Organizations Should Be Doing Now

The highest-value actions for cross-border organizations at this stage fall into four areas.

1. AI system inventory. Organizations frequently underestimate how many AI-powered systems they operate. A defensible inventory, classified against the EU AI Act's risk tiers, is the foundation of any compliance program.

2. Governance structure assessment. High-risk AI obligations require human oversight mechanisms, logging, and clear accountability chains that most organizations do not yet have in place.

3. Contract and procurement review. EU AI Act obligations flow up and down supply chains, and organizations procuring AI from third parties need to understand where deployer obligations apply.

4. Regulatory horizon monitoring. The MENA landscape is moving quickly. Organizations that align their internal governance with EU AI Act requirements now are well-positioned to adapt as domestic frameworks in Saudi Arabia, the UAE, and Morocco continue to develop.

The Strategic Opportunity

For well-prepared organizations, EU AI Act compliance is not only a cost; it is a competitive signal. Particularly in financial services, healthcare, and enterprise technology, the ability to demonstrate rigorous AI governance is increasingly a procurement and partnership criterion.

Cross-border organizations that invest in robust governance frameworks now are building infrastructure that will serve them across multiple regulatory jurisdictions and multiple regulatory cycles.


Rabii Agoujgal is an AI governance professional based in Casablanca, Morocco, specializing in the MENA region and the EU–MENA regulatory corridor. He works with regulated enterprises, international development organizations, and government clients on AI governance strategy, compliance readiness, and policy advisory. He engages in Arabic and English.
