The EU AI Act is the world's first comprehensive AI regulation, and its extraterritorial reach makes it directly relevant to organizations operating in MENA markets. Here is what cross-border exposure looks like and how to think about it.
The EU AI Act entered into force in August 2024, and its obligations phase in through 2027. For organizations based in Europe, the path forward is increasingly well-mapped. For organizations operating across the EU–MENA corridor — or for MENA-based entities whose AI systems are deployed or used by people in Europe — the picture is more complex.
Extraterritorial Reach Is Not Theoretical
Like the GDPR before it, the EU AI Act applies beyond EU borders. The key trigger is not where a company is incorporated; it is where the AI system's outputs are used. A financial institution headquartered in Riyadh that deploys a credit-scoring model affecting EU-based customers is in scope. A Gulf technology company whose AI product is licensed to a European bank is in scope. An international organization deploying AI systems in Europe is in scope.
For MENA organizations with European operations, partnerships, or clients, this is not a distant compliance horizon — it is a current operational reality.
Risk Tiering and What It Means in Practice
The EU AI Act organizes AI systems into four risk tiers: prohibited practices, high-risk systems, limited-risk systems, and minimal-risk systems. The compliance burden scales with risk classification.
Prohibited practices include social scoring by public authorities, real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions), and systems that exploit vulnerable groups. These prohibitions have applied since February 2025.
High-risk systems are the core of the Act's compliance regime. They include AI used in employment and workforce management, access to education, critical infrastructure management, law enforcement, border control, administration of justice, and several categories of financial products and services. High-risk obligations include: conformity assessments, technical documentation, logging and audit trails, human oversight mechanisms, and registration in the EU database for high-risk AI systems.
For regulated industries — financial services, healthcare, insurance, infrastructure — the probability of operating at least one high-risk AI system is significant.
Limited-risk systems, such as chatbots and synthetic media, carry transparency obligations: users must know they are interacting with an AI system.
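For readers who think in data structures, the four-tier scheme can be sketched as a simple lookup. This is purely illustrative — the tier names and obligation lists below are simplified summaries of the points above, not legal text, and the identifiers are our own:

```python
# Illustrative sketch only: a simplified summary of the EU AI Act's four
# risk tiers and representative obligations. Not legal text; the names
# and obligation wording are the author's shorthand.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Representative (non-exhaustive) obligations per tier, as summarized above.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["may not be placed on the EU market"],
    RiskTier.HIGH: [
        "conformity assessment",
        "technical documentation",
        "logging and audit trails",
        "human oversight mechanisms",
        "registration in the EU high-risk database",
    ],
    RiskTier.LIMITED: ["transparency: users must know they are interacting with AI"],
    RiskTier.MINIMAL: [],  # no tier-specific obligations beyond general law
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the representative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]
```

The point of the structure is the one made in the text: the compliance burden scales sharply with classification, so getting the tier right is the first-order question.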
The Cross-Border Compliance Challenge
Organizations operating in both MENA and EU markets face a layered challenge. They must comply with EU AI Act obligations for EU-facing systems, while also navigating domestic AI governance frameworks that are at different stages of development.
Saudi Arabia's SDAIA has published national AI ethics principles and sector-specific guidance. The UAE has AI ethics guidelines and is developing sector regulations. Morocco operates under Law 09-08 for data protection. None of these frameworks are fully harmonized with the EU AI Act — but there are meaningful overlaps around fairness, transparency, and human oversight.
The practical question for cross-border organizations is not whether to comply with each regime independently, but how to design governance structures that satisfy multiple frameworks simultaneously — without building parallel systems for every jurisdiction.
What Organizations Should Be Doing Now
The highest-value actions for cross-border organizations at this stage are:
AI system inventory. Organizations frequently underestimate how many AI-powered systems they operate. A defensible inventory, with classification against the EU AI Act's risk tiers, is the foundation of any compliance program.
Governance structure assessment. High-risk AI obligations require human oversight mechanisms, logging, and clear accountability chains. Many organizations do not yet have governance structures adequate to these requirements.
Contract and procurement review. EU AI Act obligations flow up and down supply chains. A vendor licensing an AI system to EU-based customers may carry provider obligations under the Act, and organizations procuring AI from third parties need to understand where deployer obligations apply.
Regulatory horizon monitoring. The MENA regulatory landscape is moving quickly. Saudi Arabia, the UAE, and increasingly Morocco are building domestic AI governance frameworks. Organizations that align their internal governance with EU AI Act requirements are in a strong position to adapt as domestic frameworks develop — because the substantive requirements are increasingly convergent.
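The first two actions above — a defensible inventory classified against risk tiers, with clear accountability fields — can be sketched minimally. All field names here are illustrative assumptions, not a prescribed schema:

```python
# Illustrative sketch of an AI system inventory record with risk-tier
# classification. Field names and tier labels are assumptions for
# illustration, not a prescribed or official schema.
from dataclasses import dataclass

TIERS = {"prohibited", "high", "limited", "minimal"}

@dataclass
class AISystemRecord:
    name: str                  # internal system name
    business_owner: str        # accountable owner (governance chain)
    purpose: str               # what the system is used for
    eu_facing: bool            # are the system's outputs used in the EU?
    risk_tier: str             # classification against the Act's tiers
    notes: str = ""

    def __post_init__(self) -> None:
        # Reject records that are not classified against a known tier:
        # an unclassified inventory is not a defensible inventory.
        if self.risk_tier not in TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")

def high_risk_eu_systems(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Systems needing priority attention: EU-facing and high-risk."""
    return [r for r in inventory if r.eu_facing and r.risk_tier == "high"]
```

Even a spreadsheet with these columns achieves the same thing; the design point is that every system carries an owner, an EU-exposure flag, and an explicit tier, so the high-risk EU-facing subset can be pulled out mechanically.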
The Strategic Opportunity
For well-prepared organizations, EU AI Act compliance is not only a cost — it is a competitive signal. Particularly in financial services, healthcare, and enterprise technology, the ability to demonstrate rigorous AI governance is increasingly a procurement and partnership criterion.
Cross-border organizations that invest in robust governance frameworks now are building infrastructure that will serve them across multiple regulatory jurisdictions and multiple regulatory cycles.
Rabii Agoujgal is an AI governance professional based in Casablanca, Morocco, specializing in the MENA region and the EU–MENA regulatory corridor. He works with regulated enterprises, international development organizations, and government clients on AI governance strategy, compliance readiness, and policy advisory. He engages in Arabic, French, and English.