Why China Can Run AI 40% Cheaper Than America

Yao Di

Everyone's talking about who has the best AI model.

Joe Tsai is talking about something else: who can run AI the cheapest.

The Alibaba chairman gave a lecture at the University of Hong Kong that flipped the entire AI conversation on its head. While everyone else is obsessing over benchmarks and model performance, he's focused on the economics of actually deploying these things at scale.

And here's what he revealed: China has a massive infrastructure advantage that nobody's talking about.

I'm not going to tell you that "China is winning the AI race" (that's too simplistic). I'm not going to tell you that "model quality doesn't matter" (it does). Instead, I'm going to show you why the real AI race isn't about training the smartest model—it's about lowering the cost of inference.

Here's what I learned from watching one of the world's most successful tech executives explain why infrastructure economics might matter more than model architecture.

I. The 60% Construction Advantage

Let me start with a number that should terrify American AI companies.

Building a data center in China costs approximately 60% less than in the US.

Not 10% less. Not 20% less. 60% less.

(And that's excluding chips and servers—just the physical infrastructure.)

Think about what that means.

If you're an American company, you need to spend $100 million to build a data center. Your Chinese competitor spends $40 million for the same capacity.

That's not a small advantage. That's a structural moat.

The Energy Cost Gap

But it gets worse.

Industrial electricity costs in China are ~40% lower than in the US.

This isn't temporary. It's the result of a massive grid modernization push over the last 15 years.

So you're paying two and a half times as much to build your data center. And then you're paying roughly 70% more to run it.

Every. Single. Day.

Now flip sides. If you're the Chinese operator, those gaps buy you a longer runway for scaling inference and training facilities. You can afford to run more experiments. Train more models. Serve more users.

While your American competitor is watching their burn rate.
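Here's a back-of-the-envelope version of those two gaps stacked together. Every number below is my own illustrative assumption, not a figure from Tsai's lecture:

```python
# Back-of-the-envelope 10-year total cost of ownership.
# All dollar figures are illustrative assumptions, not from the lecture.
def tco(build_musd, annual_power_musd, years=10):
    """One-time build cost plus cumulative electricity, in $M."""
    return build_musd + annual_power_musd * years

us = tco(build_musd=100, annual_power_musd=20)              # baseline
cn = tco(build_musd=100 * 0.4, annual_power_musd=20 * 0.6)  # 60% / 40% cheaper

print(f"US 10-year TCO:    ${us:.0f}M")
print(f"China 10-year TCO: ${cn:.0f}M ({1 - cn / us:.0%} lower)")
```

Run it and the combined advantage lands around 47%. The exact figure depends on how power-hungry the facility is, but it sits squarely in the 40-60% range.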

II. The Forced Innovation

Now here's where it gets interesting.

You'd think GPU restrictions would cripple Chinese AI development, right?

Wrong.

Tsai described something fascinating: when you can't access top-tier hardware, you're forced to optimize at the system level.

The "starvation" of raw compute is forcing a higher degree of optimization in software-hardware co-design.

Think about it.

American companies can buy all the H100s and Blackwells they can afford. So what do they do? They throw more compute at the problem.

Chinese companies don't have that luxury. So they have to get creative.

They optimize their architectures. They improve their algorithms. They squeeze every ounce of performance out of the hardware they have.
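What does that squeezing look like in practice? One common flavor (my example, not one Tsai cited) is quantization: storing weights in int8 instead of fp32, so the same model fits in a quarter of the memory, and on lesser hardware.

```python
# One flavor of system-level optimization: weight quantization.
# Toy example: shrink one layer's weights from fp32 to int8 (4x smaller).
import torch

weights = torch.randn(4096, 4096)                    # one layer's weights, fp32
scale = weights.abs().max() / 127.0                  # symmetric quantization scale
q_weights = (weights / scale).round().to(torch.int8)

fp32_mb = weights.nelement() * weights.element_size() / 1e6
int8_mb = q_weights.nelement() * q_weights.element_size() / 1e6
print(f"fp32: {fp32_mb:.0f} MB -> int8: {int8_mb:.0f} MB")
```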

This is the paradox of constraints: sometimes having less forces you to be better.

III. The Open-Source Strategy

Now let me explain why Alibaba is prioritizing open-source models.

(This is where the business logic gets really interesting.)

Most people think open-source means "giving away your competitive advantage." But Tsai explained the actual business case:

Sovereign AI

Enterprises want to run models on private clouds or on-premise. For data security. For regulatory compliance. For control.

If your model is closed-source, they can't do that. They're dependent on your API. Your infrastructure. Your pricing.

But if your model is open-source, they can run it anywhere.
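Concretely, "run it anywhere" means a weights download, not a contract. A minimal sketch using Hugging Face transformers; the Qwen checkpoint name is just an example, and any open-weight model works the same way:

```python
# Minimal sketch: an open-weight model running on your own hardware.
# The checkpoint name is an example; any open model from the hub works.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize our Q3 risk report."  # your data never leaves the building
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

No API key. No usage meter. No one else's pricing page.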

And here's the key: the monetization isn't in the model itself. It's in the cloud services required to run it.

You give away the model. You sell the infrastructure.

The Economics

Think about this business model.

You spend money training a model. Then you open-source it. Now every company in the world can use it.

But to actually run it at scale, they need cloud infrastructure. Compute. Storage. Networking.

And who provides that? Alibaba Cloud.

You're not selling the model. You're selling the platform to run the model.

That's brilliant.

IV. The Talent Advantage

Here's something Tsai mentioned that most people overlook.

A significant portion of global AI research talent has educational roots in China.

This creates a bilingual knowledge base. Researchers who understand both Western and Chinese academic ecosystems.

They can read papers from Stanford and Tsinghua. They can collaborate with researchers in both countries. They can synthesize insights from both traditions.

That's not a small advantage.

Especially when you combine it with lower infrastructure costs and forced system-level optimization.

V. The Shift in Focus

So what's the takeaway?

The focus is shifting from "training the smartest model" to "lowering the cost of inference."

Let me explain why this matters.

Training a model is expensive. But you only do it once (or periodically).

Inference is what happens every time someone uses your model. Every query. Every response. Every interaction.

If you're serving millions of users, inference costs dwarf training costs.

So the company that can run inference cheapest wins.

Not the company with the smartest model. The company with the cheapest infrastructure.
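A toy calculation makes the point. Assume (my numbers, purely illustrative) a one-time $50M training run, $0.005 per query, and 100 million queries a day:

```python
# Toy comparison: one-time training cost vs. ongoing inference cost.
# All three numbers are illustrative assumptions, not from the lecture.
TRAINING_COST = 50_000_000       # one-time training run, $
COST_PER_QUERY = 0.005           # $ per inference call
QUERIES_PER_DAY = 100_000_000    # a large consumer product

daily_inference = COST_PER_QUERY * QUERIES_PER_DAY
crossover_days = TRAINING_COST / daily_inference
print(f"Inference spend per day: ${daily_inference:,.0f}")
print(f"Days until inference outspends training: {crossover_days:.0f}")
```

About three months in, serving the model has already cost more than training it. And under these assumptions, a 40% cheaper stack is saving $200,000 of that bill every single day.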

The Application Layer

This is where things get really interesting over the next decade.

If Chinese companies can run AI inference at 40-60% lower cost than American companies, what happens at the application layer?

They can offer services cheaper. They can serve more users. They can experiment more freely.

While American companies are optimizing for model performance, Chinese companies are optimizing for cost efficiency.

And in a commodity market, cost efficiency wins.

What This Means for You

You might be thinking: "I'm not building data centers. Why does this matter to me?"

Here's why.

If you're building any AI-powered product or service, your infrastructure costs will determine your unit economics.

If your competitor can run the same model at 40% lower cost, they can either:

  • Charge less than you (and steal your customers)
  • Maintain the same price (and have higher margins)
  • Invest more in product development (and build better features)

You lose in all three scenarios.
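Here's the same arithmetic in miniature, with assumed numbers rather than anyone's real pricing: you both sell inference at $1.50 per million tokens, your cost is $1.00, theirs is $0.60.

```python
# The three scenarios from above, with illustrative numbers.
price, your_cost, their_cost = 1.50, 1.00, 0.60   # $ per 1M tokens (assumed)

# Scenario 1: they undercut you all the way down to your break-even price.
print(f"At ${your_cost:.2f}: your margin 0%, "
      f"theirs {(your_cost - their_cost) / your_cost:.0%}")
# Scenario 2: they match your price and pocket the difference.
print(f"At ${price:.2f}: your margin {(price - your_cost) / price:.0%}, "
      f"theirs {(price - their_cost) / price:.0%}")
# Scenario 3: or they keep your margin and spend the extra $0.40
# per 1M tokens on product development instead.
```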

The Strategic Implications

This isn't just about China vs. America. It's about understanding where competitive advantages come from.

Everyone's focused on model architecture. On training techniques. On benchmark performance.

But Tsai is focused on something more fundamental: the economics of actually running these things.

And that might be the real moat.

The Path Forward

So what should you do with this information?

First: Stop obsessing over model benchmarks. Start thinking about inference costs.

Second: Understand that infrastructure advantages compound over time. A 40% cost advantage, reinvested year after year, can widen into something like a 10x gap over a decade (a toy version of this compounding is sketched after this list).

Third: Consider the open-source strategy. Sometimes giving away the model and selling the infrastructure makes more sense than trying to monetize the model itself.

Fourth: Pay attention to system-level optimization. When you can't just throw more compute at the problem, you're forced to get creative.

Fifth: Think about where your competitive advantages actually come from. Is it your model? Or is it your ability to run that model efficiently at scale?
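On that second point, here's the toy compounding model. The 25% figure is my assumption, chosen to show the mechanism, not Tsai's math:

```python
# Toy model of how a cost advantage compounds. Assumption (mine): the
# cheaper operator reinvests its savings into ~25% extra annual growth.
extra_annual_growth = 1.25
years = 10
print(f"Capacity gap after {years} years: {extra_annual_growth ** years:.1f}x")
# -> 9.3x: roughly the order-of-magnitude gap described above
```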

The Real Race

Here's what I realized from Tsai's lecture.

Everyone's watching the wrong race.

They're watching who can train the biggest model. Who can hit the highest benchmark. Who can publish the most impressive paper.

But the real race is happening in the infrastructure layer.

Who can build data centers cheapest. Who can access energy most affordably. Who can optimize their systems most efficiently.

That's the race that will determine who wins the AI era.

And right now, China has a massive head start.

Not because they have better models. But because they have better economics.

And in the long run, economics always wins.


Alibaba Chairman Joe Tsai spoke at the Edward K Y Chen Distinguished Lecture 2025

If you liked this:

My newsletter has more "signal → action" content.

Leave your email, and I'll send you new signals first.