
OpenAI, Oracle & SoftBank Launch Five New Stargate AI Data Centers to Scale to 10 GW

Stargate AI data centers launch and why it matters

A coordinated push for U.S. AI infrastructure

OpenAI, Oracle and SoftBank announced a joint Stargate AI data centers initiative to build five new U.S. sites to support large-scale AI workloads. The partners unveiled one campus in Texas and framed the program around a headline technical target: a combined 10 gigawatts (GW) of IT power across the five locations. For context, a gigawatt is one billion watts — roughly the output of a large nuclear reactor, or enough to supply hundreds of thousands of homes.

The announcement is being discussed less as a single-site build than as a strategic, multi‑year infrastructure program; media outlets have reported industry estimates in the hundreds of billions of dollars, with one figure cited at roughly USD 500 billion across the five locations. Alongside the capital plans are vendor arrangements, notably market coverage pointing to a large-scale NVIDIA hardware commitment that would supply the dense GPU farms feeding model training and inference workloads.

Key takeaway: Stargate is not just another data center rollout; it's a targeted bet on dedicated, ultra‑dense AI compute capacity in the U.S., backed by heavyweight commercial partnerships.

What’s being built and what 10 GW enables

Purpose-built campuses, compute-first design

The Stargate program is described as five dedicated AI data center sites in the U.S., designed for large-scale training and inference rather than general-purpose colocation. These sites are being engineered around sustained, power‑hungry GPU clusters — the systems used to train and run modern neural networks — with corresponding investment in electrical substations, cooling plant, and low‑latency networking between racks and sites.

What “purpose-built” means in practice: facility layouts and mechanical systems are optimized for racks full of accelerators, not for a mix of CPUs, storage arrays, and variable-tenant workloads. That affects everything from floor loading and raised-floor plenum design to chilled-water loops and redundancy architecture. The partners are explicit about prioritizing compute density and operational continuity over general cloud flexibility, which changes the set of engineering trade‑offs.
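To make one of those trade-offs concrete, the sketch below sizes the chilled-water flow a single high-density rack would need, using the standard sensible-heat relation Q = m·c_p·ΔT. The rack power and loop temperature rise are illustrative assumptions, not Stargate specifications.

```python
# Back-of-envelope chilled-water sizing for one high-density AI rack.
# Inputs are illustrative; actual Stargate rack densities are not public.

WATER_SPECIFIC_HEAT = 4.186  # kJ/(kg*K)
WATER_DENSITY = 1.0          # kg/L (approximate)

def chilled_water_flow_lps(rack_power_kw: float, delta_t_k: float) -> float:
    """Litres/second of water needed to absorb rack_power_kw of heat
    at a delta_t_k temperature rise across the cooling loop."""
    return rack_power_kw / (WATER_SPECIFIC_HEAT * WATER_DENSITY * delta_t_k)

# Assume a 100 kW liquid-cooled rack and a 10 K loop temperature rise:
print(f"{chilled_water_flow_lps(100, 10):.2f} L/s per rack")  # ~2.39 L/s
```

Multiply that by tens of thousands of racks and the scale of the chilled-water plant, pumps, and heat-rejection systems becomes a first-order design constraint rather than an afterthought.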

insight: Designing for sustained near‑100% utilization of accelerator clusters forces a different set of reliability, maintenance, and supply chain assumptions than a typical multi‑tenant cloud data center.

Roles across the partnership

The three lead organizations bring distinct capabilities to the table. OpenAI is driving AI workload requirements and the demand signal for sustained GPU-heavy compute. Oracle contributes data center development and operations experience, a commercial cloud platform, and enterprise go-to-market channels. SoftBank is coordinating investment strategy, site selection, and capital mobilization. This division of labor follows a familiar model in specialized infrastructure projects: an anchor tenant defines the technical requirements, an infrastructure partner builds and operates, and an investor organizes capital.

Hardware ecosystem and systems integration

Market reporting points to a potentially large NVIDIA hardware commitment intended to populate the GPU farms. While exact accelerator counts and aggregate TFLOPS (trillions of floating-point operations per second) have not been disclosed, the implication is clear: the design centers on massive arrays of accelerators connected by high‑bandwidth networking and supported by specialized power distribution and cooling.

Systems integration matters as much as raw chips. Sustained AI training requires continuous power and thermal management, fast interconnects (e.g., high‑speed Ethernet or InfiniBand topologies), and software platforms that orchestrate model parallelism across thousands of devices. The Stargate program appears intended to bundle these layers into a vertically integrated environment for large models.
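For a flavor of what that orchestration layer involves, here is a minimal sketch of a multi-node training setup using PyTorch's distributed API over NCCL. The actual software stack Stargate will run has not been disclosed; this only illustrates the kind of plumbing such clusters require.

```python
# Minimal multi-node distributed training setup, assuming a PyTorch/NCCL
# stack launched with torchrun. Stargate's real orchestration layer is
# not public; this is a generic illustration.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def init_training(model: torch.nn.Module) -> DDP:
    # NCCL collectives ride on the cluster's high-bandwidth fabric
    # (InfiniBand or RoCE Ethernet) for cross-node gradient exchange.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by the torchrun launcher
    torch.cuda.set_device(local_rank)
    model = model.cuda(local_rank)
    # DDP replicates the model per GPU and overlaps gradient all-reduce
    # with the backward pass, which is what keeps thousands of devices busy.
    return DDP(model, device_ids=[local_rank])
```

At Stargate-like scale, this basic data-parallel pattern is typically combined with tensor and pipeline parallelism, which is where interconnect bandwidth and topology dominate the design.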

Key takeaway: Stargate couples site-level engineering with concentrated GPU fleets, aiming to deliver a contiguous environment optimized for long, high‑utilization training runs rather than general cloud elasticity.

Power, hardware scale and real performance

What 10 GW actually represents

SoftBank’s announcement sets a 10 GW IT power target across the five initial Stargate locations. In data center language, “IT power” refers to the electrical capacity allocated to computing equipment (servers, accelerators, storage), not including supporting infrastructure losses. Ten gigawatts is a headline metric that defines the raw ceiling for how many GPUs and how much sustained compute the project can host.

To make that concrete: modern high‑end AI accelerators draw several hundred watts to over a kilowatt per card, and dense liquid‑cooled racks can draw tens of kilowatts to well over a hundred kilowatts each, depending on thermal solutions. Scaling to multi‑GW levels implies millions of accelerators across roughly a hundred thousand racks, a level of hardware concentration that has few parallels in public cloud history.
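The back-of-envelope arithmetic below makes that scale concrete. Every input is an illustrative assumption, not a disclosed Stargate figure.

```python
# Rough scale of a 10 GW IT power target. All inputs are assumptions.

it_power_w = 10e9        # 10 GW allocated to IT equipment
gpu_share = 0.7          # fraction of IT power feeding accelerators
watts_per_gpu = 1_200    # high-end accelerator, including board overhead
gpus_per_rack = 72       # dense liquid-cooled rack configuration
pue = 1.2                # power usage effectiveness (facility / IT power)

gpus = it_power_w * gpu_share / watts_per_gpu
racks = gpus / gpus_per_rack
facility_power_gw = it_power_w * pue / 1e9

print(f"~{gpus / 1e6:.1f}M accelerators in ~{racks:,.0f} racks")
print(f"~{facility_power_gw:.0f} GW total facility draw at PUE {pue}")
```

Under these assumptions the answer lands around 5.8 million accelerators in roughly 80,000 racks, with total facility draw above the IT figure once cooling and distribution losses (the PUE multiplier) are included.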

Hardware procurement and performance implications

A widely cited market report frames an NVIDIA hardware alliance on the order of USD 100 billion to deliver GPU capacity for these sorts of builds. While the dollar figure and exact contract terms come from market reporting rather than a partner press release, the scale suggests multi‑year procurement and assembly pipelines. For performance, that hardware density enables:

  • longer continuous training windows at high utilization, which reduces wall‑clock time to convergence;

  • the ability to train models with larger parameter counts because more accelerator memory and interconnect capacity can be marshaled in parallel;

  • reduced end‑to‑end experiment turnaround for hyperparameter sweeps and model architecture iteration.

Still, no accelerator counts or aggregate FLOPS figures for Stargate have been published, so precise performance outcomes remain speculative.
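For a sense of what such a fleet could do, the sketch below applies the widely used approximation that training compute is roughly 6 × parameters × tokens. The model size, token count, GPU count, per-GPU throughput, and utilization are all hypothetical inputs, not Stargate disclosures.

```python
# Wall-clock estimate for a large training run using the common
# ~6 * params * tokens FLOPs approximation. All inputs are hypothetical.

def training_days(params: float, tokens: float, gpus: int,
                  flops_per_gpu: float, mfu: float) -> float:
    total_flops = 6 * params * tokens            # approximate training compute
    effective = gpus * flops_per_gpu * mfu       # sustained cluster throughput
    return total_flops / effective / 86_400      # seconds -> days

# e.g., a 1T-parameter model on 20T tokens across 100,000 GPUs at
# 1e15 FLOP/s peak each and 40% model FLOPs utilization (MFU):
print(f"{training_days(1e12, 20e12, 100_000, 1e15, 0.40):.1f} days")  # ~34.7
```

The MFU term is the crux: the same hardware at 20% utilization takes twice as long, which is why orchestration and interconnect quality matter as much as raw accelerator counts.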

Additional capacity beyond the initial 10 GW

Oracle and OpenAI have a separate agreement to develop another 4.5 GW of Stargate capacity in the U.S., indicating staged expansion beyond the headline 10 GW. That layered approach — initial anchor capacity followed by expansion agreements — aligns with how major infrastructure projects are typically financed and scaled.

insight: Capacity numbers (GW) tell you the potential ceiling; realized compute depends on delivered accelerators, cooling/performance tuning, and orchestration software.

Bold takeaway: Ten gigawatts defines an extraordinary upper bound on raw compute potential — but the value comes from how that power is converted into usable, efficiently orchestrated GPU hours.

Rollout, investment and access

Timelines, public milestones and phased delivery

One Stargate site has been publicly unveiled in Texas, with the broader program announcing five U.S. sites. That public unveiling is an early milestone; however, detailed commissioning schedules for all five sites have not been released. Industry observers should expect phased construction, staged equipment deliveries, and progressive commissioning as local permitting and grid upgrades complete.

Large data center projects typically require months to years from groundbreaking to full operation. Given reported multi‑GW targets per site, critical path items will include high‑capacity grid interconnections, transformer and substation work, and long‑lead items like HPC racks and chilled-water systems.

Financing models and the size of the bet

Media reporting has placed broad program estimates around USD 500 billion across the five locations. Whether or not that exact figure proves accurate, it signals that Stargate is being discussed as a very large capital program, with multi‑year deployment costs covering real estate, electrical infrastructure, systems, and hardware purchasing.

Financing will be a mix of partner equity, vendor financing (e.g., supplier credit or staged hardware purchase contracts), and potentially external investors. Vendor supply commitments such as the reported NVIDIA alignment also smooth procurement by providing predictable hardware roadmaps.

Who pays and who can use the capacity

OpenAI is the primary beneficiary for model training and operations, with Oracle’s involvement suggesting enterprise-cloud integration and potential Oracle Cloud pathways for customers. Public reporting has not detailed third‑party tenancy terms, pricing, or wholesale access for other companies. In early phases, capacity is expected to be prioritized for the partners’ workloads.

For external enterprises, the plausible access routes are:

  • Oracle‑mediated cloud services or private‑link offerings backed by Stargate capacity.

  • Commercial arrangements where partner companies buy reserved capacity or managed services.

  • Limited reseller or enterprise agreements if partners choose to monetize spare capacity.

Environmental and permitting constraints that influence rollout

Large power draws and grid interconnection needs mean that permitting, local community engagement, and renewable sourcing decisions will influence timing. The Texas unveiling highlighted how local communities and utilities play a central role in both schedule and acceptance.

Bold takeaway: Expect a staged rollout driven by grid upgrades, long‑lead hardware procurement, and partner-pricing choices; commercial access for third parties will depend on later productization by Oracle and OpenAI.

How Stargate compares to existing hyperscale AI infrastructure and market impact

Differences in scale, focus and supplier relationships

Wired frames Stargate as a strategic move to consolidate U.S. leadership in AI infrastructure, with a purpose-built posture that differs from mixed‑workload hyperscalers. Traditional hyperscale clouds (for example, broadly multi‑tenant operators) typically balance diverse workloads — web services, databases, enterprise VMs — across multi‑vendor fleets. By contrast, Stargate’s value proposition is concentrated: it aims to deliver ultra‑dense, accelerator-first capacity for a narrower set of workloads tied to cutting‑edge AI training and inference.

A reported large NVIDIA hardware pact reinforces this concentration: Tom’s Hardware coverage suggests a tight NVIDIA alignment for GPU supply. That vertical concentration can deliver advantages — lower per‑unit costs through scale, simplified software optimization for a single accelerator architecture — but it also raises supplier dependency risk and reduces platform heterogeneity.

Market effects and pressure on competitors

A network of high-density compute campuses with privileged hardware access could reshape supply and pricing dynamics for high-end GPUs. Smaller cloud providers and colocation shops may find it harder to secure latest‑generation accelerators or to compete on price for massive reserved blocks of GPU hours. This may accelerate the trend toward a two‑tier marketplace: a set of ultra‑dense, vertically integrated facilities optimized for giant model training alongside a broader cloud ecosystem serving more diverse workloads.

For enterprises and developers, the practical implications include:

  • Faster iteration cycles and larger model experiments for users with access to Stargate-backed resources.

  • Potentially lower marginal costs for very large training runs if partners pass through efficiency gains.

  • A need for software portability strategies: models and tooling optimized for the accelerator types and interconnects used in Stargate may require adaptation to run efficiently elsewhere (see the sketch after this list).
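One concrete portability tactic is to resolve the accelerator backend at runtime rather than hard-coding a single vendor path. The minimal PyTorch sketch below uses only standard PyTorch calls; the assumption is the tactic itself, not anything Stargate-specific.

```python
# Device-agnostic model placement: the same script runs on NVIDIA GPUs,
# Apple silicon, or CPU without code changes. Illustrative example only.
import torch

def pick_device() -> torch.device:
    if torch.cuda.is_available():          # NVIDIA (or ROCm builds of PyTorch)
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # Apple silicon
        return torch.device("mps")
    return torch.device("cpu")             # portable fallback

device = pick_device()
model = torch.nn.Linear(4096, 4096).to(device)
x = torch.randn(8, 4096, device=device)
y = model(x)  # identical code path regardless of vendor
```

Keeping vendor-specific optimizations (custom kernels, interconnect-aware sharding) behind an abstraction like this reduces lock-in when capacity must move between ecosystems.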

Bold takeaway: Stargate’s concentrated approach amplifies capacity and speed for its users, while shifting market pressure onto competitors that lack similar scale or supplier commitments.

FAQ

Q1: What exactly are the Stargate AI data centers?
A: Five planned U.S. facilities from OpenAI, Oracle and SoftBank, purpose-built for large-scale AI training and inference, with a combined 10 GW IT power target. One campus has been unveiled in Texas.

Q2: How big is the investment and who’s paying?
A: Media reports place the program at roughly USD 500 billion across the five locations. Financing is expected to mix partner equity, vendor financing, and potentially external investors; exact terms have not been disclosed.

Q3: Will NVIDIA supply hardware for Stargate?
A: Market reporting points to a large NVIDIA hardware commitment, cited on the order of USD 100 billion, though this comes from market coverage rather than a partner press release.

Q4: When will the other Stargate sites be online?
A: Detailed commissioning schedules have not been released. Expect phased construction and progressive commissioning driven by permitting, grid upgrades, and long-lead hardware deliveries.

Q5: Can other companies buy capacity at Stargate?
A: Third-party tenancy terms have not been detailed. Early capacity is expected to be prioritized for partner workloads, with possible later access through Oracle-mediated cloud services or reserved-capacity agreements.

Q6: What about environmental impact and sustainability commitments?
A: Large power draws make permitting, community engagement, and renewable sourcing central to timing and acceptance; specific sustainability commitments have not been published.

Stargate AI data centers and what’s next for infrastructure and the AI ecosystem

Putting the pieces together and looking forward

Stargate’s five‑site, 10 GW ambition signals a decisive shift toward vertically integrated, AI‑only infrastructure in the United States. The combination of an anchor tenant with intense compute needs (OpenAI), a cloud and operations partner (Oracle), and a capital organizer (SoftBank) creates a powerful engine for delivering dense GPU capacity at scale. If delivered as reported, the program will materially change how large models are trained: faster turnaround, larger experiments, and potentially lower marginal costs for massive compute runs.

In the coming years, watch three interlinked dimensions:

  • Delivery and timing: grid readiness, local permitting, and hardware shipments will determine when capacity becomes usable.

  • Commercial productization: whether and how Oracle or the partners expose capacity to broader enterprise customers will shape market reach beyond the partner ecosystem.

  • Sustainability and community acceptance: renewable sourcing, efficient cooling technologies, and transparent community engagement will influence both operational viability and public legitimacy.

There are trade-offs. The concentration of hardware and supplier relationships can produce efficiency and speed for those inside the ecosystem, but it also centralizes supply-chain risk and raises questions about market access for others. The technical wins — more GPU hours, higher utilization, tighter software‑hardware optimization — may come with economic and political consequences: a smaller circle of organizations wielding an outsized fraction of production‑grade training compute.

For developers, enterprises, and policy thinkers, Stargate presents both opportunity and a call to adapt. Enterprises should consider strategies that include:

  • Monitoring Oracle’s offerings for Stargate-backed services that could unlock large‑scale training without in‑house data center investment.

  • Preparing models and tooling for portability, recognizing that an accelerator-optimized stack today may need to run across heterogeneous clouds tomorrow.

  • Engaging with local and national conversations about energy, resilience, and competition in AI infrastructure to shape equitable outcomes.

Final thought: Whether Stargate becomes a broad accelerator for AI innovation or a consolidation of compute power for a few leaders will hinge on rollout execution, pricing and access choices, and how transparently the partners address environmental impacts and supply-chain resilience. For observers and participants alike, the next two to five years will reveal whether this program democratizes large‑scale AI by making capacity effectively available, or whether it locks high‑end training behind a new kind of gated infrastructure. Either way, Stargate marks a major moment in the evolution of AI hardware strategy — one that organizations should plan for rather than be surprised by.
