OpenAI’s new letter to the White House tries to reframe AI as the foundation of an infrastructure economy. First, AI was a tool to make human work easier. Then, it became a force to reshape business efficiency. Now, it’s the cornerstone of the infrastructure economy itself.
The company calls electricity a “strategic asset” and urges the U.S. to build 100 gigawatts of new power capacity each year — nearly double today’s rate — to sustain AI growth. The letter goes far beyond electricity. It lays out a full economic blueprint: expanding U.S. manufacturing, modernizing regulations, building new data centers, and mobilizing the national workforce — all framed as essential steps for AI competitiveness.
It’s a proposal perfectly aligned with the administration’s America First and China-competition priorities: reindustrialization, workforce expansion, and energy independence as the pillars of AI leadership.
But while it reads like a bold industrial strategy, it’s also strikingly one-dimensional: There’s no mention of ethics, transparency, or environmental cost — only power, productivity, and national advantage.
It’s not surprising that OpenAI’s narrative keeps evolving in this direction. Even when the letter claims that “AI will scale human ingenuity and drive unprecedented productivity, economic growth, and new freedoms,” those “freedoms” are defined purely in material terms — GDP, jobs, and megawatts — not in ethics, accountability, or human well-being.
And that’s no coincidence. The current AI Action Plan already prioritizes competitiveness over responsibility, and OpenAI’s letter seems written precisely to fit that worldview: an AI strategy for a government that measures progress in power, not principles. It fits that vision perfectly — perhaps a little too perfectly.
Personally, I would never frame a technology with such transformative potential in a way that risks excluding humanity from its own benefit. If AI is to truly serve people, its power must expand with its principles — not apart from them.
OpenAI’s energy proposal leaves us with bigger questions than it answers.
If AI now sits at the heart of industrial and energy policy, then who decides what kind of intelligence we want to build — and at what cost?
As technology becomes infrastructure, the balance between progress and principle becomes harder to keep.
Should AI companies help write national industrial policy?
Can a government measure innovation purely in gigawatts and GDP?
And most importantly — who makes sure that power, in every sense of the word, still serves people?
I’d love to hear your thoughts.
How should governments, companies, and societies share responsibility for an AI-powered infrastructure economy?