
AMD and IBM's CEOs don't see an AI bubble, just $8 trillion in data centers


The conversation surrounding the current explosion in artificial intelligence valuations has hit a fever pitch. On one side, you have the titans of industry—specifically the CEOs of AMD and IBM—arguing that we are merely at the starting line of a necessary infrastructure build-out. On the other, a growing chorus of investors and technologists suggests we are deep inside an AI bubble, fueled by unsustainable spending and hype that outpaces reality.

This disconnect was highlighted recently when AMD CEO Lisa Su and IBM CEO Arvind Krishna spoke about the future of computing. Their stance is clear: the market isn't overheating; it’s under-provisioned. However, when you look closer at the numbers they are proposing—specifically a global spend nearing 8 trillion dollars—the math begins to look less like a business plan and more like a gamble.

Corporate Optimism Meets Market Skepticism


The primary defense against the AI bubble narrative offered by chip manufacturers is historical precedent. They liken the current spending spree to the build-out of the internet or the mobile network. The argument is that you cannot have the applications (the software revenue) without first laying the physical rails (the hardware).

Arvind Krishna of IBM provided a specific metric that has since circulated widely in financial discussions: building a single AI-ready data center with a capacity of one gigawatt (1 GW) runs into the tens of billions of dollars, and the aggregate global capacity industry leaders are targeting would require an estimated 8 trillion dollars in total infrastructure investment.
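
To see how a per-facility cost in the tens of billions compounds into an eight-trillion-dollar total, here is a minimal sketch. The $80 billion per gigawatt below is an assumed midpoint within that "tens of billions" range, not a figure attributed to either CEO; the 100 GW target is the global capacity industry leaders have floated.

```python
# How the headline figure scales: assumed per-gigawatt cost times targeted capacity.
# The $80B-per-GW value is an illustrative assumption, not a quoted figure.

cost_per_gw = 80e9        # USD, assumed build cost of one 1 GW AI data center
target_capacity_gw = 100  # global capacity discussed by industry leaders

total_investment = cost_per_gw * target_capacity_gw
print(f"Implied total build-out: ${total_investment / 1e12:.1f} trillion")
```

Under those assumptions, the arithmetic lands almost exactly on the $8 trillion figure being debated.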

This puts companies like AMD and IBM in a convenient position. If the world decides it needs to spend that kind of money on hardware, the chipmakers win regardless of whether the software companies ever turn a profit.

This dynamic has revived the classic "selling shovels" metaphor. In the gold rush, the people who got rich weren't necessarily the ones digging in the dirt; it was the merchants selling picks and shovels. Skeptics argue that AMD, NVIDIA, and IBM are simply the shovel salesmen of the 21st century. Their denial of an AI bubble is structurally necessary for their stock prices. If they admitted that demand might taper off, their valuations would collapse. Therefore, their bullishness should be read as a sales pitch rather than an objective market analysis.

The Economics of Expanding Data Centers


To understand why the skeptics are worried, you have to look at the physical reality of what is being built. Data centers are not vague clouds of code; they are massive industrial facilities filled with silicon, copper, and cooling systems.

The Data Center Cost Reality: From Millions to Trillions

Historically, a data center was a costly but manageable expense for a tech company. The pivot to Generative AI has changed the unit economics entirely. AI training and inference require GPUs (Graphics Processing Units) or NPUs (Neural Processing Units) that are significantly more expensive and power-hungry than traditional CPUs.

When IBM’s leadership talks about needing 100 gigawatts of capacity globally, they are talking about an industrial mobilization that rivals national defense budgets. The hardware involved is specialized. Unlike a standard server that can easily be repurposed for hosting websites or database management, high-end AI racks are optimized for matrix multiplication. They are number-crunching beasts.

If the demand for data centers continues at this pace, the supply chain for everything from advanced packaging to high-bandwidth memory will remain strained for years. This scarcity keeps prices high, benefiting the suppliers. However, if the demand is artificial—driven by FOMO (Fear Of Missing Out) rather than actual end-user revenue—the eventual correction will leave trillions of dollars of hardware sitting idle.

Justifying the 8 Trillion Dollar Bill

The figure of 8 trillion dollars is difficult to comprehend. To put it in perspective, that amount is roughly four times the entire market capitalization of Amazon. For the industry to justify an infrastructure investment of that magnitude, the revenue generated by AI services must be astronomical.

This is where the math gets messy. If companies borrow money or dilute equity to spend $8 trillion, they are not just on the hook for the principal; they are on the hook for the interest and the expected returns of shareholders.

Discussions among skeptical observers point out that to service a capital expenditure of that size, the AI sector would need to generate hundreds of billions in net profit annually just to break even on the hardware costs. Currently, while revenue is growing, few companies outside of the hardware sellers are showing profit margins that justify this level of spend. The "application layer" of AI is still struggling to find use cases that customers will pay high premiums for at scale.
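
A back-of-envelope calculation makes that hurdle concrete. The sketch below uses assumed values for the cost of capital, the hardware share of spend, and the accelerator replacement cycle; none of these are figures from AMD, IBM, or the analysts cited here.

```python
# Back-of-envelope: the annual cost of carrying an $8 trillion AI build-out.
# Every input below is an illustrative assumption, not a figure cited by AMD or IBM.

CAPEX = 8e12              # total infrastructure spend, USD
COST_OF_CAPITAL = 0.06    # assumed blended cost of debt and equity (6%)
HARDWARE_SHARE = 0.60     # assumed fraction of capex that is fast-aging silicon
HARDWARE_LIFE_YEARS = 5   # assumed useful life of AI accelerators

annual_financing = CAPEX * COST_OF_CAPITAL
annual_depreciation = CAPEX * HARDWARE_SHARE / HARDWARE_LIFE_YEARS

print(f"Financing cost per year:        ${annual_financing / 1e9:,.0f}B")
print(f"Hardware depreciation per year: ${annual_depreciation / 1e9:,.0f}B")
print(f"Break-even profit needed:       ${(annual_financing + annual_depreciation) / 1e9:,.0f}B")
```

Even with a modest 6 percent cost of capital, the financing charge alone lands in the hundreds of billions per year; folding in a five-year replacement cycle for the silicon pushes the annual hurdle well past a trillion dollars.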

Analyzing the ROI Disconnect

The core of the AI bubble argument is the Return on Investment (ROI). Tech history is littered with technologies that were revolutionary but failed to make money for the early investors. The fiber optic boom of the late 90s is a frequent comparison. We eventually used the fiber, but the companies that laid it went bankrupt first.

The Interest Trap and Infrastructure Investment

Capital is no longer free. In the era of near-zero interest rates, betting billions on experimental data centers was less risky. Today, the cost of capital is real.

When CEOs call for trillions in infrastructure investment, they are asking the market to front-load the cost of a revolution that hasn't fully materialized. If the "killer app" for AI turns out to be a slightly better chatbot or more efficient coding assistants, the value creation might be in the billions, not the trillions.

The gap between the cost of the infrastructure and the value of the output is what creates a bubble. If you build a factory capable of producing a million cars a day, but the market only buys fifty thousand, your factory isn't an asset; it's a liability. The same logic applies to data centers.

Hardware Depreciation in a Saturating Market

There is a technical detail that often escapes general financial reporting: silicon ages poorly. Hardware depreciation in the AI sector is brutally fast.

A user on Reddit pointed out a critical risk: what happens to the secondary market? If the bubble bursts or simply deflates, the market could be flooded with slightly used H100s or MI300s. Unlike real estate, which generally holds value, a GPU is a depreciating asset. A three-year-old AI chip is significantly less efficient than a new one.
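
A minimal sketch shows how fast that value evaporates, assuming a flat 40 percent annual decline in resale value; the real curve depends on the chip and the state of the market.

```python
# Declining-balance resale value of a high-end AI accelerator.
# The purchase price and the 40% annual decline are illustrative assumptions only.

purchase_price = 30_000   # USD, assumed price of a flagship accelerator
annual_decline = 0.40     # assumed loss of resale value per year

value = purchase_price
for year in range(1, 6):
    value *= 1 - annual_decline
    print(f"Year {year}: resale value ~ ${value:,.0f}")
```

Under those assumptions a chip sheds roughly four-fifths of its value within three years, which is why a secondary market flooded with lightly used accelerators would be so disruptive.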

If companies like Meta or Microsoft decide they overbuilt and try to liquidate their inventory, the price of compute would crash. This would be catastrophic for AMD and NVIDIA, as their new chips would be competing against their own previous generation selling for pennies on the dollar. The "shovel sellers" only win as long as the miners keep buying new shovels. If the miners quit and sell their used tools, the shovel factory goes under.

Technical and Physical Constraints


Beyond the money, there are physical limits to how much we can build. The scale of data centers proposed by industry leaders hits a hard wall when it comes to power.

Energy Consumption at the Gigawatt Scale

Energy consumption is quickly becoming the bottleneck for AI scaling. A 1 GW data center requires the equivalent output of a dedicated nuclear reactor or a massive solar farm. The U.S. power grid is already strained in many regions.
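
A bit of arithmetic shows why each facility effectively needs its own power plant. The sketch below converts 1 GW of capacity into annual energy and cost, using an assumed load factor and an assumed industrial electricity rate.

```python
# Rough annual energy draw and electricity bill for a 1 GW AI data center.
# The utilization and the price per kWh are illustrative assumptions.

capacity_gw = 1.0
utilization = 0.80        # assumed average load factor
price_per_kwh = 0.07      # assumed industrial electricity rate, USD
hours_per_year = 24 * 365

annual_gwh = capacity_gw * utilization * hours_per_year   # gigawatt-hours per year
annual_kwh = annual_gwh * 1_000_000                       # 1 GWh = 1,000,000 kWh
annual_cost = annual_kwh * price_per_kwh

print(f"Annual consumption: {annual_gwh:,.0f} GWh")
print(f"Annual electricity bill: ${annual_cost / 1e9:.2f}B")
```

That works out to roughly 7 terawatt-hours a year, in the range of what several hundred thousand US households consume, and an electricity bill approaching half a billion dollars per facility before a single chip has been amortized.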

Connecting these facilities to the grid takes years of regulatory approval and construction. We are seeing tech companies looking into small modular nuclear reactors (SMRs) just to guarantee uptime. This adds another layer of cost and complexity.

Critics argue that burning gigawatts of electricity to generate AI art or write marketing emails is a misallocation of resources. If the societal value of the output doesn't match the environmental and financial cost of the energy consumption, regulation could step in to throttle growth, popping the bubble from the outside.

Specialized Hardware vs. General Utility

Another technical point raised in community discussions is the specificity of the hardware. The chips filling these data centers are highly specialized for AI workloads.

If the AI hype dies down, these chips are not easily repurposed for general computing tasks like hosting web servers or running standard corporate databases. They are matrix multiplication engines. If you don't need to multiply massive matrices, they are expensive paperweights.

This lack of flexibility makes the infrastructure investment riskier than previous tech cycles. When the dot-com bubble burst, the servers could still be used for other things. When the crypto mining crash happened, GPUs flooded the gaming market. But enterprise-grade AI clusters are difficult to break up and sell to consumers. The risk of total asset write-down is higher.

The Verdict: Is the AI Bubble About to Burst?

The denial of an AI bubble by CEOs of hardware companies is expected. It is their job to sell the vision that drives demand for their products. However, the disconnect between an 8 trillion dollar expenditure plan and the current revenue reality of AI software is widening.

We are seeing a classic disconnect between the "infrastructure layer" (chips/data centers) and the "application layer" (software/services). The infrastructure is being built as if Artificial General Intelligence (AGI) is imminent. Yet even optimists like IBM's CEO admit that the probability of achieving true AGI soon is low.

The market is currently pricing in a best-case scenario where AI transforms every sector of the global economy simultaneously. If the reality falls short—if AI ends up being a powerful tool rather than a replacement for human intelligence—the valuation of the hardware ecosystem will have to correct.

The shovels have been sold at premium prices. Now, the miners have to find the gold. If they don't find 8 trillion dollars' worth of it soon, the shovel sellers will be the last ones left holding the bag of depreciated silicon.

Frequently Asked Questions

Why do some experts call the current AI market a "selling shovels" scenario?

This refers to the 1849 Gold Rush, where merchants selling tools made consistent profits while most gold miners failed. In the AI context, companies like NVIDIA and AMD (selling chips) are profiting massively, while the companies buying the chips (the miners) have yet to prove they can generate profits that justify the hardware costs.

What is the estimated cost of the proposed global AI infrastructure?

Industry leaders, including executives from IBM and AMD, have discussed a roadmap that could require up to $8 trillion in investment. This includes the construction of massive data centers, energy grid upgrades, and the procurement of specialized semiconductors over the next several years.

Why is energy consumption a major concern for AI data centers?

AI-focused data centers require significantly more power than traditional server farms due to the heat and energy intensity of GPU calculations. Proposals for new gigawatt-scale facilities would require power equivalent to small cities or dedicated power plants, straining existing electrical grids and raising environmental concerns.

Can AI hardware be repurposed if the bubble bursts?

It is difficult. High-end AI chips are specialized for matrix multiplication and deep learning tasks. While they can handle some high-performance computing workloads, they are not efficient for general-purpose web hosting or standard office tasks, meaning their resale value would likely crash if the AI market contracts.

Is there a risk of hardware depreciation in AI investments?

Yes, hardware depreciation is a significant risk. Semiconductor technology advances rapidly, making chips obsolete within 3 to 5 years. If companies spend billions on current-generation chips but fail to monetize them quickly, the hardware will lose most of its value before it pays for itself.
