UC26 · AI Infrastructure Race

AI & Data Center Infrastructure Race

82 hyperscale data centers across 12 operators, rendered as 3D points on a globe.gl WebGL globe and connected by 30 animated undersea cable arcs. Point altitude encodes power capacity in MW. Country borders highlight on hover, and clicking a country flies the camera to it. Filter by operator or status, toggle the Stargate pulse overlay, and click any point for full facility details.

82

Hyperscale facilities

~15K MW

Total MW tracked

~8M

H100-equivalent GPU units

12

Cloud operators

30

Undersea cable routes

$500B

Stargate investment

2025

Data vintage

globe.gl

Rendering engine

Data Pipeline & Architecture

01

Hyperscale Data Center Dataset

82 hyperscale data centers across 12 operators — Microsoft/OpenAI (including Stargate sites), Google/DeepMind, Amazon/AWS, Meta, Alibaba/Aliyun, ByteDance/TikTok, Tencent, Baidu, Oracle, Apple, IBM, and Chinese national AI clusters (Guizhou Gui'an, Inner Mongolia, Xinjiang, Lanzhou). Each record carries power capacity in MW, estimated H100-equivalent GPU units, operational status, open year, and an AI-focused flag. Values are approximate but grounded in public announcements, SEC filings, and infrastructure reporting as of early 2025. Click any country border to see the facilities and cables in that country.

82 facilities · 12 operators · MW capacity · GPU unit estimates · Operational status · AI-focused flag
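The record shape described above can be sketched as a plain object plus a loading-time sanity check. The field names, the example values, and the validator are assumptions for illustration, not the project's actual schema:

```javascript
// Hypothetical shape of one facility record; field names are assumed.
const facility = {
  name: "Guizhou Gui'an Hub", // illustrative entry, not verbatim from the dataset
  operator: "China National",
  lat: 26.3,                  // globe.gl expects lat/lng in degrees
  lng: 106.6,
  powerMW: 800,               // IT load, not gross building power
  gpuH100Equiv: 400000,       // H100-equivalent units (illustrative estimate)
  status: "operational",      // operational | under-construction | announced
  openYear: 2017,
  aiFocused: true,
};

// Basic sanity check before handing records to the globe layers.
function isValidFacility(f) {
  const statuses = ["operational", "under-construction", "announced"];
  return (
    Number.isFinite(f.lat) && f.lat >= -90 && f.lat <= 90 &&
    Number.isFinite(f.lng) && f.lng >= -180 && f.lng <= 180 &&
    Number.isFinite(f.powerMW) && f.powerMW > 0 &&
    statuses.includes(f.status)
  );
}
```

Validating coordinates up front matters here because a single out-of-range lat/lng silently distorts a globe.gl points layer.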
02

Undersea Cable Network

30 undersea cable routes that carry the majority of intercontinental AI inference and training traffic — including Marea (Microsoft/Meta transatlantic), PEACE (Singapore–Marseille), FASTER (Google transpacific), Havfrue (Facebook/Google North Atlantic), 2Africa (Meta), Apricot (Google/Meta Asia-Pacific), Blue-Raman (Google Europe–India), Echo (Google/Meta Pacific), and newer routes like Bifrost, Dunant, Grace Hopper, and Equiano. Each arc records operators, capacity in Tbps, and lay year. Arc width in the visualization scales with log₂(capacity) to reflect relative bandwidth.

30 cable routes · Great-circle arcs · Width ∝ log₂(Tbps) · Operator attribution · Lay year
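The width rule reduces to a one-line scaling function. A minimal sketch, assuming a 0.6 tuning multiplier and a 2 Tbps clamp so the stroke never reaches zero (both assumptions, not values from the project):

```javascript
// Arc stroke width proportional to log2 of design capacity in Tbps.
// The 0.6 multiplier is an assumed tuning constant; capacities are
// clamped at 2 Tbps so log2 never returns a non-positive width.
function arcStroke(capacityTbps) {
  return 0.6 * Math.log2(Math.max(capacityTbps, 2));
}
```

With globe.gl this would be wired as `globe.arcStroke(d => arcStroke(d.capacityTbps))`, alongside `arcDashAnimateTime` for the flowing-dash animation. The log scale is the point: a 160 Tbps cable draws only about twice as wide as a 20 Tbps one, keeping low-capacity routes visible.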
03

globe.gl Rendering & Country Borders

The visualization uses globe.gl — a Three.js-based WebGL globe that renders satellite imagery, topology bump maps, and atmosphere effects. Data centers are rendered as 3D points (pointsData) with altitude = log₁₀(MW)/log₁₀(maxMW) × 0.5, making capacity visually proportional to height. Animated cable arcs use arcDashAnimateTime for a flowing cable effect. Country polygons (polygonsData) from a 110m-resolution GeoJSON enable hover highlighting and click-to-select with fly-to animation. The globe auto-rotates and supports damped panning.

globe.gl pointsData · arcsData animated · polygonsData borders · Three.js WebGL · fly-to animations
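The altitude mapping above can be written down directly; the maximum MW passed in is an assumed dataset maximum, and the wiring comments sketch how this would hang off globe.gl's chained API:

```javascript
// altitude = log10(MW) / log10(maxMW) * 0.5, per the section above:
// the largest facility tops out at 0.5 globe radii and smaller sites
// fall off logarithmically rather than linearly.
function pointAltitude(mw, maxMW) {
  return (Math.log10(mw) / Math.log10(maxMW)) * 0.5;
}

// Hypothetical wiring against the globe.gl chained API, assuming the
// facility records from section 01 and an 800 MW dataset maximum:
//   globe
//     .pointsData(facilities)
//     .pointAltitude(d => pointAltitude(d.powerMW, 800))
//     .onPolygonClick((poly, ev, { lat, lng }) =>
//       globe.pointOfView({ lat, lng, altitude: 1.2 }, 1000)); // fly-to
```

The log-over-log form is what keeps a 60 MW site from vanishing next to an 800 MW campus while still ranking them visibly.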
04

Stargate Highlight & Pulse

When the Stargate toggle is active, globe.gl's ringsData API renders expanding ring pulses over all Stargate Phase 1 sites (Abilene TX 600MW, Iowa 500MW). The ring radius and colour opacity oscillate on a setInterval-driven sine wave, creating a breathing purple glow that communicates the scale of the $500B OpenAI/Microsoft/SoftBank investment — the largest announced AI infrastructure commitment in history.

setInterval pulse · sine-wave amplitude · ringsData API · Purple glow rings · $500B Stargate
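The breathing effect boils down to a sine oscillator sampled on a timer. A sketch under stated assumptions — the 2 s period, the 3–6 degree radius range, and the purple RGBA values are all invented for illustration:

```javascript
// Map elapsed time to a 0..1 "breath" value and derive ring params.
const PERIOD_MS = 2000; // assumed pulse period

function pulseAt(tMs) {
  const s = (Math.sin((2 * Math.PI * tMs) / PERIOD_MS) + 1) / 2; // 0..1
  return {
    maxRadius: 3 + 3 * s,                                        // degrees of arc
    color: `rgba(168, 85, 247, ${(0.25 + 0.5 * s).toFixed(3)})`, // purple glow
  };
}

// Driven against globe.gl's rings layer from a timer, e.g.:
//   setInterval(() => {
//     const p = pulseAt(Date.now());
//     globe.ringMaxRadius(() => p.maxRadius).ringColor(() => p.color);
//   }, 50);
```

Keeping the oscillator as a pure function of time (rather than accumulating state per tick) means the pulse stays in phase even if the interval callback is delayed.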
05

Operator Filters & Leaderboard

Operator filter buttons let users isolate individual cloud providers on the globe. Each button is colour-coded with the operator's brand colour (Microsoft blue, AWS orange, Meta blue, Alibaba orange, ByteDance cyan, etc.). Status filters (operational / under-construction / announced) layer on top. The right-side leaderboard ranks operators by total tracked MW capacity with proportional bar charts — revealing that the US hyperscalers collectively dominate global AI compute capacity, with Chinese players close behind in raw facility count.

12-operator filter · Status filter · MW leaderboard · Brand-colour coding · Real-time layer update
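The filter-plus-leaderboard logic described above can be sketched as two pure functions over the facility list, with field names assumed as in section 01:

```javascript
// Layered filtering: an operator filter and a status filter compose;
// null means "no filter" on that axis.
function applyFilters(facilities, { operator = null, status = null } = {}) {
  return facilities.filter(f =>
    (operator === null || f.operator === operator) &&
    (status === null || f.status === status)
  );
}

// Leaderboard: rank operators by total tracked MW, descending.
function leaderboard(facilities) {
  const totals = new Map();
  for (const f of facilities) {
    totals.set(f.operator, (totals.get(f.operator) ?? 0) + f.powerMW);
  }
  return [...totals.entries()]
    .map(([operator, mw]) => ({ operator, mw }))
    .sort((a, b) => b.mw - a.mw);
}
```

Because both are pure functions of the dataset, re-running them on every filter click and feeding the result back into `pointsData` is what makes the layer update feel real-time.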

Key Insights

Compute gravity

Three clusters — AWS Ashburn (700MW), Google The Dalles (600MW), and China's Guizhou Gui'an (800MW) — account for roughly 14% of all tracked capacity. The capacity towers over northern Virginia and Oregon visually dominate the North American view.

The Stargate bet

The OpenAI/Microsoft/SoftBank Stargate project announced in January 2025 commits $500B to US AI infrastructure over four years. The first-phase sites in Abilene TX (600MW) and Iowa (500MW) are shown with pulsing purple rings when the Stargate overlay is enabled — representing the largest single AI infrastructure investment in history.

Undersea AI highways

The same undersea cables that carry streaming video and financial trades now route hundreds of petabytes of AI inference requests. Google alone owns or co-owns 12 of the 30 cable routes shown, ensuring low-latency connectivity between its TPU pods and end users on every continent.

China's parallel stack

Chinese operators — Alibaba, Tencent, Baidu, ByteDance — operate a largely separate AI infrastructure stack anchored by massive national data center parks in Guizhou, Inner Mongolia, and Xinjiang. The Xinjiang facility is flagged as controversial due to ongoing international scrutiny. Together, Chinese facilities represent over 30% of tracked global MW.


Stargate — $500 Billion AI Infrastructure Commitment

Announced on January 21, 2025, the Stargate Project is a joint venture between OpenAI, Microsoft, and SoftBank committing $500 billion to US AI infrastructure over four years. The initial tranche of $100 billion funds data centers in Abilene TX (600MW, largest disclosed) and Iowa (500MW), with GPU clusters sourced from NVIDIA, AMD, and custom Microsoft Maia chips.

The project signals a strategic shift — from renting cloud capacity to owning dedicated AI infrastructure at nation-state scale. At peak build-out, the Stargate campuses are projected to collectively exceed the compute capacity of any single country's current AI deployment.

OpenAI · Microsoft · SoftBank · NVIDIA · Oracle · $500B · 2025–2029

Tech Stack

Globe engine

globe.gl (Three.js WebGL)

Data center layer

pointsData — altitude = log₁₀(MW) / log₁₀(maxMW) × 0.5

Cable layer

arcsData — animated dash, width ∝ log₂(Tbps)

Country borders

polygonsData — hover highlight + click fly-to

Pulse layer

ringsData — setInterval sin-wave Stargate rings

Framework

Next.js 'use client', React 19

Data

Hardcoded — public announcements & filings 2025

Operators

12: Microsoft, Google, Amazon, Meta, Alibaba, ByteDance, Tencent, Baidu, Oracle, Apple, IBM, Other

Data Notes

All data center records use publicly disclosed or credibly estimated values as of early 2025. MW capacity figures represent total IT load, not gross building power. GPU unit estimates assume H100 SXM5 as the reference (700W TDP, 80GB HBM3) and are rounded to the nearest thousand. Actual deployments include a mix of H100, A100, H200, Gaudi, and custom ASICs.

The Xinjiang data center is included for geographic completeness with a project note flagging international scrutiny. Undersea cable capacity figures are design capacity at time of laying; lit capacity varies by operator and is not publicly disclosed.

Sources: company earnings calls, SEC 10-K filings, TeleGeography Submarine Cable Map, DC Byte, Datacenter Dynamics, Bloomberg Intelligence, PitchBook AI infrastructure reports.