Can the GCC Build the Third AI Option?

Gulf states are applying the Jebel Ali entrepôt logic to artificial intelligence—building data centers sized not for their own populations, but to host the inference infrastructure through which billions across the Global South may access frontier models.

When Dubai built Jebel Ali in the 1970s, the UAE did not need a port that large. The domestic economy was not yet ready to absorb the volume of trade it was sized to receive, and the Gulf’s population was a fraction of what it is today. The project was met with widespread skepticism. What the late Sheikh Rashid bin Saeed Al Maktoum envisioned was an entrepôt—a transit hub capturing economic value not by producing goods but by controlling the node through which others must pass. Jebel Ali became one of the world’s busiest ports not because the UAE is large, but because Sheikh Rashid understood that control over strategic trade nodes has a compound effect: the rents, economic ties, and leverage it generates accumulate and reinforce each other across decades, creating political, economic, and financial capital. Dubai’s current status shows that the bet paid off. The UAE did not need to manufacture the goods moving through Jebel Ali; it only needed to build the infrastructure that made it indispensable to those who did.

The Gulf states are now applying that logic to artificial intelligence. The data centers rising across Abu Dhabi and Riyadh are not built to serve only the 60 million GCC residents. They are being built to position the GCC as the world’s AI entrepôt, the pathway through which three to four billion people across South Asia, Southeast Asia, and Africa access frontier AI inference. While some nations are concerned about getting stuck between the U.S. and China over AI, the GCC may instead create a third option.

The Inference Layer as Strategic Terrain

Much of the discourse on GCC AI investment is organized around a potentially misleading frame. Which state is spending more, which national champion has secured the better NVIDIA partnership, whether Abu Dhabi or Riyadh will claim regional AI primacy—these questions matter, but they risk obscuring the more consequential one: what the Gulf is actually building. Gulf states are not trying to out-train OpenAI. Saudi Arabia and the UAE want frontier model capability of their own, and American models underpin most Gulf AI innovation today—a dependency unlikely to disappear soon. But the pursuit of AI sovereignty may ultimately prove less about developing frontier models than about owning the infrastructure through which frontier AI will likely reach much of the Global South. In doing so, the GCC risks positioning itself—as a node of structural intermediacy—between the world’s two dominant AI powers.

The distinction between training frontier models and deploying them at inference is important. Training frontier models requires concentrations of compute, talent, and capital that only the United States and China can currently sustain at the frontier—though the UAE and Saudi Arabia want to make the GCC the third AI hub. Inference, meanwhile, is the deployment of an existing frontier model across a range of functions and outputs at scale—a state running Anthropic’s Claude or DeepSeek to streamline government services, financial systems, healthcare infrastructure, or logistics networks. This is where economic and political value accrues to end users. The capital requirements, while large, are achievable for middle powers with sufficient financial reserves. The strategic upside, however, may be disproportionate to that investment.

Whoever hosts the inference layer determines which models run, whose data they process, and under what governance terms. For a nation like Qatar or Kuwait, with large capital reserves, controlling that layer may reinforce domestic oversight of AI usage within their own borders. For a nation that lacks inference infrastructure entirely, utilizing any frontier AI model for inference means accepting another state’s influence over all of those factors simultaneously. That is how AI infrastructure creates structural power over how artificial intelligence mediates the decisions of the developing world. Today the credible options for AI inference deployment at scale are largely China and the United States. The U.S. restricts who has access to its chips, infrastructure, and frontier models to a few key partner countries, while China openly offers its models and infrastructure to help countries scale AI in their own jurisdictions. Most countries may want the U.S. AI stack but be unable to access it, leaving them with a second-best, though still strong, Chinese option. The GCC may be able to position itself as a third option.

The geographic case for Gulf AI intermediacy seems compelling on its own terms. The region sits at the intersection of submarine cable routes connecting Europe, South Asia, Southeast Asia, and Africa. Governing institutions across those same regions—government ministries, central banks, healthcare systems, enterprises managing sensitive national data—face a common dilemma: routing AI workloads through either Chinese or American infrastructure creates a different but symmetrical set of dependencies. Many of these states have enacted or are building data-sovereignty frameworks that require sensitive workloads to be processed outside great-power jurisdiction—yet they lack the domestic infrastructure to fulfill that requirement independently.

The GCC pitch is that a credible Gulf inference entrepôt could resolve the dilemma. Gulf data centers may offer countries across Africa and the Middle East affordable, high-quality AI infrastructure—likely built on American models and aligned with U.S. standards—including systems with meaningful Arabic-language capability, processed outside the United States and China, with the geographic proximity to deliver real performance advantages to the markets that matter most. This, however, may not be acceptable to U.S. lawmakers if it exposes U.S. systems to potential compute arbitrage by rivals who want access to U.S. systems without needing physical hardware.

Node Control and Weaponized Interdependence

This position is best understood through the framework of weaponized interdependence, developed by Henry Farrell and Abraham Newman. Their core argument is that global economic networks grow asymmetrically. Certain nodes—the hubs through which disproportionate flows of goods, capital, or data pass, like ports, financial clearing houses, or internet exchanges—accumulate far more connections than others. Whoever controls those nodes holds disproportionate power over everyone who depends on them.

That power operates through two mechanisms. The panopticon effect describes the capacity to monitor traffic flowing through a controlled node, enabling the controlling actor to gather intelligence on who is doing what and with whom. The chokepoint effect describes the capacity to restrict or shut down access entirely, cutting off any actor who steps out of line. Both dynamics are forms of structural leverage inherent in AI inference infrastructure. Whoever governs the node through which states access frontier AI models can see what governments are building, what institutions are automating, and what populations are asking—and can sever that access entirely if the political moment demands it.

This is why the U.S.–China competition for Gulf AI infrastructure is critical. A realist view of international relations would assert that AI deployment is ultimately about advancing national interest for each actor. The U.S. deploys its AI stack to the Gulf to secure strategic advantages as the foundation of the GCC’s AI ecosystem—crowding out Chinese alternatives in the process. The GCC, in turn, deploys that stack to build advanced AI systems that it can leverage to entrench its position as the indispensable node between great-power AI ecosystems and the developing world. A genuinely neutral Gulf AI entrepôt sitting between Gulf-deployed American models and other state users would not merely redistribute commercial rents—it would intermediate, and potentially neutralize, the panopticon effect that current U.S. AI dominance confers.

A hub that processes the AI workloads of Global South governments and enterprises outside American or Chinese jurisdiction denies both powers the informational advantages that direct infrastructure control provides. The Gulf’s AI build is, from this vantage point, both a technology investment and a bid for a new form of geopolitical leverage—one that neither great power is inclined to concede without extracting significant concessions in return.

The picture is more complicated for Washington than for Beijing. Gulf AI infrastructure is set to run predominantly on American models, chips, and cloud architecture—which means the U.S. retains a degree of visibility and leverage that a Chinese-built alternative would not preserve. But that dependency cuts both ways. The Gulf needs American technology to build credibly at scale. Washington needs Gulf infrastructure to extend its AI ecosystem into markets it cannot reach directly. The result is less adversarial than interdependent. What it also means, however, is that the Gulf’s claim to genuine neutrality is qualified from the start—a hub built on American technology is never fully outside American reach, whatever its geographic address suggests. That has ripple effects on how AI sovereignty is defined in the long run.

The Conceptual Third Option

China may ultimately be the winner across many parts of the world where U.S. AI infrastructure and technology remain inaccessible. If Washington does not permit the export of its AI tech stack to Africa, the Middle East, and Central Asia—even at the inference layer—Beijing will enter those markets as the frontrunner. It is arguably already there. For countries concerned about locking into Chinese infrastructure, however, a conceptual Gulf AI model could represent an acceptable third option: regionally grounded, Arabic-capable, and governed outside the jurisdiction of either great power.

A Gulf-trained frontier model—built on deep regional data, fine-tuned for Middle Eastern legal, financial, and cultural contexts—could offer other countries something neither Washington nor Beijing currently provides: high-performance AI that is linguistically and culturally proximate, priced for markets that American and Chinese hyperscalers have historically underserved, and governed on terms that smaller states may find more acceptable.

China’s AI advantage in the developing world rests largely on governance flexibility and price—not necessarily model quality or cultural fit. A credible Gulf frontier model could contest both the markets where Chinese infrastructure is already entrenched and the markets where it has not yet taken hold. The GCC would not be channeling American AI models to the developing world, but instead offering its own built on American hardware.

The dependency question does not disappear entirely. Gulf frontier models would still be trained on American chips, and that upstream relationship preserves U.S. leverage. But the nature of the dependency changes. A Gulf inference hub running on American models is structurally subordinate to U.S. leverage. A Gulf state like Abu Dhabi running its own frontier model on American chips has greater autonomy over what the model does, whose data it trains on, and under what terms it is deployed. The chips remain American. The AI, increasingly, would not be.

Understanding Gulf Leverage

Saudi Arabia and the UAE’s leverage in the AI race rests on America’s own infrastructure gaps, namely energy. The U.S. faces a projected AI data-center power demand of 130 gigawatts by 2030 that substantially exceeds its available domestic generation capacity under current development timelines. GCC countries are betting that a significant share of U.S. AI computation infrastructure will migrate offshore, and they want to capture it. The Gulf’s structural advantages—cheap and abundant energy, desalination capacity for data-center cooling, established connectivity infrastructure—make the region an attractive location for AI infrastructure and functionally necessary for sustaining American AI ambitions.

Much of this relies on GCC interstate coordination. Mohammed Soliman argues that the region should focus on the development and expansion of energy infrastructure with the objective of achieving grid and data integration to meet AI demand—a unified framework that would sharpen the Gulf’s potential to emerge as the third AI hub globally. The geopolitical realities of the GCC, however, often lean toward internal competition, particularly between Riyadh and Abu Dhabi. That rivalry is not merely diplomatic—it shapes investment priorities, infrastructure decisions, and the degree to which Gulf states are willing to subordinate national AI ambitions to a collective regional strategy. Otherwise, both nations will race to develop their own AI entrepôts, which, if realized by either or both, could create more optionality for Global South customers but also greater fragmentation of AI models and infrastructure across the region. A unified GCC AI framework remains the condition under which the third option becomes most credible.

Where the Entrepôt Argument Is Contested

The entrepôt thesis is analytically compelling, but it rests on assumptions that deserve scrutiny. Four vulnerabilities stand out.

First, AI models are not neutral. By nature, they embed the legal assumptions, cultural frameworks, and epistemic priorities of their training context. The Gulf’s entrepôt position could be reframed as a question of whose non-neutrality to host.

Second, there are limits to the Farrell–Newman framework as applied here. Weaponized interdependence was built to explain how great powers exploit nodes—not how middle powers use nodes to resist great-power pressure. The historical cases cut predominantly in the other direction. The Gulf is attempting something the framework does not strongly predict is achievable: durable autonomy at a contested hub between two competing hegemons. The G42 divestiture from China is the first data point, and it does not favor the neutrality thesis.

Third, Global South demand is uncertain. The argument presupposes that governments and institutions are actively seeking an AI inference host outside great-power jurisdiction. That demand seems plausible, but maybe not at the scale necessary to finance a fully built-out Gulf “third option.” Many countries in recent years have gone directly to Chinese providers, either because the U.S. option was unavailable or because the Chinese option was more accessible on price and carried fewer governance constraints. The intra-GCC dynamic introduces a further complication: both Saudi Arabia and the UAE are moving firmly into the American orbit. If both states build major inference infrastructure with divergent great-power alignments, the singular Gulf entrepôt thesis fractures into two competing nodes—undermining the collective leverage outlined above.

Finally, there is the question of where AI itself is heading. The entrepôt model rests on an implicit assumption: that AI inference will continue to require large, centralized data centers that smaller states must access through regional hubs. That assumption may not hold. The broader trajectory of AI development points in the opposite direction—toward smaller models, open-weight architectures, and deployment on modest local hardware. If inference increasingly migrates to the device or institutional level, the demand for centralized Gulf infrastructure could erode before it reaches full utilization. The Gulf’s entrepôt bet is, in part, a bet on continued centralization. That is a reasonable bet today. It is less obviously reasonable across the decade-long investment horizon the current buildout requires.

Conclusion

The Gulf does not need to train the world’s most capable models to occupy a position of structural consequence in international relations. The entrepôt logic requires only that the world cannot route around it. But the weaponized interdependence framework identifies a structural problem for Gulf Arab states as middle powers. Building an AI hub between two competing great powers does not mean the GCC transcends the existing competition—on the contrary, it may become the competition’s object.

Both Washington and Beijing face structural incentives to ensure that Gulf AI infrastructure serves their respective panopticon interests: running their models, adopting their governance frameworks, excluding the other’s hardware and firms. A Gulf AI entrepôt that maintains genuine neutrality may be seen as strategically threatening by both powers simultaneously. The G42 divestiture represents the first visible instance of this pressure succeeding at scale. How long Gulf neutrality can last, especially as Washington and Beijing compete for global adoption of their respective AI ecosystems, remains the central unanswered question hanging over the entire enterprise.