The March 2026 drone strikes on AWS data centers in the UAE and Bahrain exposed a gap no Gulf AI governance framework was designed to address: what happens to AI-dependent regulated functions when the infrastructure layer becomes a military target.
On March 1, 2026, Iranian drones struck two Amazon Web Services data centers in the UAE. A third was damaged in Bahrain. Banking applications went offline. Payment platforms froze. Cloud services across the Gulf remained partially unavailable for weeks.
Foreign Policy framed it as "the first war against AI" -- the first sustained military campaign to target AI compute infrastructure as a primary strategic objective. That framing will be debated. What is harder to contest is what the strikes revealed about the AI governance programs Gulf enterprises spent years building: they were not designed for this.
A Design Assumption Nobody Wrote Down
AI governance frameworks are built on a foundation they rarely acknowledge explicitly: the assumption that the infrastructure layer is stable.
The UAE's AI regulatory guidance covers high-risk applications; data protection obligations under the UAE Personal Data Protection Law (PDPL), which came into full enforcement on January 1, 2026; and sector-specific requirements from the Central Bank and health authorities. Saudi Arabia's National Data Management Office (NDMO), operating under the Saudi Data and AI Authority (SDAIA), has issued governance principles and is stepping up enforcement activity in areas including AI-driven analytics. The Big Four consultancies circulate frameworks for agentic AI governance, responsible deployment, and ethics-by-design. PwC Middle East's most recent substantive guidance, published in February 2026, addresses controls throughout the agentic AI lifecycle. It has not been updated in light of the conflict.
None of these frameworks address what happens when the infrastructure those systems depend on becomes a military target.
This is not a criticism of the frameworks. It is a description of what they were designed to do. Business continuity planning for AI systems has historically meant cloud failover, vendor redundancy, and recovery time objectives measured in hours. It did not mean a sustained physical attack on data centers serving an entire region's financial infrastructure, with service disruption measured in weeks.
The strikes have exposed a gap that was not modeled by any governance framework currently in circulation. The question for organizations operating in the Gulf is what that gap means for them specifically.
Three Questions the Strikes Are Now Asking
The first is continuity. When an AI system that an organization relies on for regulated functions -- credit decisions, transaction monitoring, regulatory reporting -- becomes unavailable due to external military action, what are that organization's governance obligations?
The question breaks into several more specific ones. Which decision processes revert to human judgment, and under what documented protocols? Who holds accountability for AI-assisted decisions made in the hours before the outage? What governance obligations attach to manual decisions made in the days after, when the system comes back online and discrepancies between AI and human outputs become visible? Most organizations have business continuity plans. Very few have mapped those plans to their AI governance programs at the level of individual systems and decision types. The gap between the two is now a compliance question, not just an operational one.
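What that mapping could look like is easier to see in schematic form. The sketch below is illustrative only: the register structure, field names, and the example entry are assumptions invented for this piece, not drawn from any published framework or regulatory requirement.

```python
# Illustrative sketch only. A minimal "AI continuity register" tying each
# AI-dependent regulated function to its during-outage and post-recovery
# governance obligations. All names and fields are invented assumptions.
from dataclasses import dataclass

@dataclass
class ContinuityEntry:
    system: str                     # AI system performing the regulated function
    decision_type: str              # e.g. credit decisions, transaction monitoring
    fallback_protocol: str          # documented manual procedure during an outage
    fallback_owner: str             # role accountable for manual decisions
    pre_outage_accountability: str  # who answers for AI-assisted decisions made
                                    # in the hours before the system went dark
    recovery_obligation: str        # how AI/manual discrepancies are reconciled
                                    # once the system comes back online

REGISTER = [
    ContinuityEntry(
        system="retail-credit-scoring",
        decision_type="credit decisions",
        fallback_protocol="manual underwriting per a hypothetical BCP annex",
        fallback_owner="Head of Credit Risk",
        pre_outage_accountability="Model Risk Committee",
        recovery_obligation=("re-run the model on decisions taken manually and "
                             "reconcile discrepancies within 10 business days"),
    ),
]
```

The point of the structure is that every AI-dependent regulated function carries four answers, not one: who decides during the outage, under what protocol, who answers for the decisions made just before it, and how discrepancies are reconciled afterward.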
The second is the sovereign AI question, framed honestly. Before March 1, sovereign AI was primarily a vendor and consultancy positioning argument. The case for keeping AI compute within national boundaries rested on geopolitical preference, data sovereignty principles, and commercial interest in roughly equal measure. Deloitte, PwC, and national compute infrastructure providers made the argument in different ways and for overlapping reasons. It was credible in principle and abstract in practice.
It is no longer abstract. The Iranian strikes gave the sovereign AI thesis an empirical grounding it did not have before. Gulf governments are expected to accelerate sovereign AI commitments as a direct policy response to the conflict, and that recalibration is legitimate. But there is an important distinction to maintain: between the vendor-driven narrative that was circulating before the first drone flew and the post-conflict policy reckoning that the strikes produced. They are not the same argument, and they require different governance responses. The first was a commercial and political preference. The second is a security and resilience imperative. Conflating them will produce frameworks designed to sell national compute infrastructure rather than to govern AI systems under conditions of physical disruption.
The third question is the most specific, and the one no published commentary has yet raised. In January 2026, the UAE formally integrated a National AI System as an advisory member of its Cabinet. The Library of Congress Global Legal Monitor documented this on March 25, 2026, alongside separate measures regulating AI use in elections and executive decision-making. The system advises on government decisions at the highest level.
What happens to the governance chain when that system's compute infrastructure is disrupted? What disclosure obligations, if any, attach to decisions the Cabinet made with AI assistance during a period when that system's availability was compromised? What is the accountability framework for AI-informed government decisions when the AI is unavailable -- and what is it for the period immediately before it went dark, when the advice it was providing may have been shaped by infrastructure degradation nobody had yet detected?
These questions are not hypothetical. The AWS outages in the UAE and Bahrain mean they describe a situation that has already occurred. No governance framework in the region provides an answer.
What Current Frameworks Cover, and What They Do Not
Taken together, MENA AI governance frameworks address a coherent set of concerns. Data protection and individual rights are covered by the PDPL in the UAE and its equivalent in Saudi Arabia. Ethical principles for AI deployment appear in national AI strategies across the Gulf. Sector-specific risk requirements govern financial services, health, and critical infrastructure in varying degrees of specificity. Procurement and vendor oversight is an emerging area, with Pinsent Masons' analysis of Saudi Arabia's draft Global AI Hub Law and the UAE's AI-assisted lawmaking regime representing the most substantive recent practitioner output.
These are serious instruments. The PDPL enforcement regime carries penalties ranging from AED 100,000 to AED 1,000,000, and the DIFC Data Protection Law amendment introduced a private right of action for data subjects. SDAIA's enforcement posture in Saudi Arabia is tightening. The regulatory infrastructure is real and growing.
What the frameworks do not address is the physical layer. Critical infrastructure classification for AI compute -- the question of which AI systems are essential enough to national function that their disruption constitutes a governance event, not merely an operational one -- is absent from every framework in circulation. Business continuity obligations specific to AI-assisted regulatory decisions are not defined. Chain-of-accountability provisions for AI systems that become unavailable, and for the decisions made in their absence, do not exist.
This is the governance gap the strikes exposed. It is not the absence of an AI law. The UAE has extensive and growing AI governance measures. The gap is the absence of a governance category: AI system resilience as a regulatory obligation, distinct from general business continuity and from data protection.
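To make the missing category concrete, here is one shape a resilience classification could take. It is a sketch under invented assumptions: the tiers, criteria, and thresholds below correspond to no existing regulatory instrument and are offered only to show that the category is definable.

```python
# Illustrative only. One possible shape for the missing governance category:
# a resilience classification for AI systems. Tiers, criteria, and thresholds
# are invented assumptions, not an existing regulatory scheme.
from enum import Enum

class ResilienceTier(Enum):
    CRITICAL = "disruption is a governance event, not merely an operational one"
    ELEVATED = "disruption triggers internal governance obligations only"
    STANDARD = "disruption is handled by ordinary business continuity planning"

def classify(regulated_function: bool,
             decisions_per_day: int,
             manual_absorption_hours: float) -> ResilienceTier:
    """Assign a tier from three hypothetical criteria: whether the system
    performs a regulated function, its decision volume, and how long a
    documented human fallback needs to absorb that volume."""
    if regulated_function and (decisions_per_day > 10_000
                               or manual_absorption_hours > 24):
        return ResilienceTier.CRITICAL
    if regulated_function:
        return ResilienceTier.ELEVATED
    return ResilienceTier.STANDARD

# A transaction-monitoring system screening 50,000 payments a day would land
# in the top tier under these (invented) thresholds:
print(classify(regulated_function=True, decisions_per_day=50_000,
               manual_absorption_hours=2.0))
```

The specific thresholds matter less than the existence of the top tier: a class of systems whose unavailability is, by definition, a governance event rather than an operational one.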
Who Is Most Exposed
The enterprises most exposed to this gap are not the ones that have been ignoring AI governance. Those organizations are exposed to a different and more conventional set of risks under PDPL and sector-specific requirements.
The enterprises most exposed are the ones that built serious governance programs on the assumption that infrastructure continuity is a vendor problem. They have inventoried their AI systems, documented their high-risk use cases, established oversight mechanisms, and complied with PDPL requirements for automated decision-making. They treated physical resilience as outside the governance perimeter -- which is exactly what the frameworks they were following told them to do, because those frameworks did not contemplate the perimeter being breached from outside.
The conflict has not invalidated those governance programs. It has made them incomplete in a way nobody required those organizations to anticipate, and one they now need to address without a regulatory framework to follow.
The practical questions for compliance officers and risk committees in Gulf financial institutions and regulated enterprises are immediate. Where in your AI governance program is sustained infrastructure unavailability -- not a scheduled maintenance window or a vendor outage, but weeks-long disruption due to external action -- addressed at the system level? If your answer points to the business continuity plan, the follow-up question is whether that plan specifies, for each AI-dependent regulated function, what the governance obligations are during the disruption and in the recovery period after.
If it does not specify that, there is work to do. The regulator has not yet asked. The circumstances that would prompt the question have now occurred.
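For organizations that keep their AI inventory in machine-readable form, the first pass at that work can be mechanical. The sketch below continues the hypothetical register from the earlier example and simply flags entries whose during-outage or post-recovery obligations are unspecified; as before, every name in it is an assumption, not a regulatory requirement.

```python
# Illustrative gap check over the hypothetical register sketched earlier:
# flag entries whose during-outage or post-recovery governance obligations
# are left unspecified. Field names are the same invented assumptions.
REQUIRED_FIELDS = ("fallback_protocol", "fallback_owner",
                   "pre_outage_accountability", "recovery_obligation")

def find_gaps(register):
    gaps = []
    for entry in register:
        missing = [name for name in REQUIRED_FIELDS
                   if not getattr(entry, name, "").strip()]
        if missing:
            gaps.append((entry.system, entry.decision_type, missing))
    return gaps

for system, decision, missing in find_gaps(REGISTER):
    print(f"{system} / {decision}: unspecified -> {', '.join(missing)}")
```

A check like this does not answer the governance questions. It only makes visible, system by system, where no answer has been written down.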
Rabii Agoujgal is an AI governance specialist focused on the EU-MENA regulatory corridor, based in Casablanca. He works with Gulf financial institutions, international development organizations, and government bodies on AI governance strategy and compliance.