The Week AI Governance Stopped Looking Like One Thing: Hangzhou, Karlsruhe, Oakland, and Colorado
How AI governance stopped being a framework conversation in spring 2026
Hello everyone. I have been keeping a small list this week, in the margins of my notebook, and it turned into the spine of this letter. Each item on the list is a place where, in the past seven days, an institution made a binding decision about AI. Looking at them together, what struck me is that the institutions are not the ones I expected, the venues are not the ones we usually talk about, and the directions they pull in are not the same.
Here is the list, more or less in the order I came across it.
A judge in Hangzhou ruled that a tech firm cannot fire a worker because it replaced him with a model.
A prosecutor in Karlsruhe filed Europe’s first criminal indictment for AI-generated child sexual abuse material.
A jury in Oakland heard Elon Musk testify for three days about whether OpenAI’s for-profit conversion amounts to “stealing a charity.”
The Pentagon signed AI deals with eight major vendors for classified networks, and left Anthropic off the list.
The Academy of Motion Picture Arts and Sciences ruled that only “human-authored” screenplays and performances “demonstrably performed by humans” qualify for an Oscar.
Colorado’s senate leaders introduced a bill to repeal the most ambitious state AI law in the United States, two months before it was supposed to take effect.
The UK government quietly began rolling out a Google-built AI tool called Extract to help councils make planning decisions.
Seven decisions, five jurisdictions, one week. And the most useful thing I can say about the list is that it does not point in a single direction.
What ended this week: the era of AI governance as a writing project
In the Davos 2026 recap I wrote in January, I argued that AI was moving from pilots to infrastructure. Four months on, the more honest framing is that it is now moving from infrastructure to enforcement, but enforcement is plural, and the plurality is not an accident. Different institutions govern AI in their own grammars, each reaching for the tools it already has when the people it serves start asking it to act.
Three courtrooms, three legal domains: Hangzhou, Karlsruhe, Oakland
In a single week, three different judicial logics produced three different kinds of AI governance, in three jurisdictions that rarely show up in the same paragraph.
The Hangzhou ruling came from the Intermediate People’s Court, in a case where a quality assurance worker was fired after refusing a 40% pay cut tied to the automation of his job.
The court held that AI implementation does not, on its own, meet the legal standard for terminating an employee, and the decision builds on a December 2025 Chinese precedent in a mapping company case.
The interesting thing about it, in my reading, is the venue. China is not a jurisdiction we usually associate with worker-protective rulings, and the fact that this one happened there says something about how labor courts are quietly becoming an AI governance frontier.
The Karlsruhe indictment is the first criminal case in Europe for AI-generated CSAM. NCMEC tracking shows reported cases of AI-generated child sexual abuse material moving from a few thousand in 2023 to roughly 1.5 million in 2026, and the European Parliament is now debating an amendment to the AI Act that would criminalize the creation, fine-tuning, and distribution of models capable of producing such material.
When I wrote about Grok’s bikini outputs and the broader deepfake question in January, I argued that 2026 would be the year governance got real about generative content harms. Karlsruhe is what “getting real” actually looks like in practice: an indictment, a courtroom, a defendant.
The Oakland case is the one most likely to set the tone for the second half of the year. Musk v. Altman opened on April 28, and Musk himself testified for three days, calling OpenAI’s for-profit conversion the theft of a charity; his earlier filings sought up to $134 billion in damages, and Altman and Brockman are expected to testify later this month. On the surface, this is a corporate dispute over a governance structure. Underneath, it is a fight about who has the authority to decide what an AGI race should look like and on what terms, and that fight has been brewing for nearly a decade.
If you want to understand what the Oakland trial is actually about, the most useful book I have read in the past year is Karen Hao’s Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.
Hao spent seven years covering OpenAI for MIT Technology Review and beyond, and the book is built on roughly 260 interviews. It traces the entire arc the courtroom is now trying to litigate: the founding non-profit promise, the Musk-Altman power struggle that ultimately pushed Musk out, the formation of Anthropic by Dario and Daniela Amodei and other senior staff who left over safety disagreements, the boardroom drama that briefly ousted Altman in November 2023, and the resource extraction story underneath all of it.
The same Musk who appeared at Davos this January talking about AGI timelines is now under cross-examination over the corporate maneuver he believes betrayed those timelines.
Hao’s long-form interview on The Diary of a CEO is the audio companion. I would recommend both, especially if you have followed the dispute only through the legal filings. (One footnote that matters for a publication like this one: Hao publicly acknowledged in November 2025 that the book overstates a Chilean data centre’s water usage by a factor of 1,000 due to a unit error. The broader argument stands; that figure does not.)
The reason I am dwelling on Oakland is that the trial is, in a sense, the first formal venue where Hao’s central thesis is being tested under oath. She argues that the “AGI for the benefit of all humanity” mission, sincere or not at the start, became a uniquely potent formula for consolidating resources and constructing an empire-style power structure. Whether or not Musk wins, the questions his lawsuit puts on the record (about charitable purpose, about board fiduciary duty, about how an AGI mission can be repurposed to justify almost any organizational form) are now in the legal system. Being in a book and being in a court record are very different things.
Procurement is moving faster than legislation
This is where Pentagon procurement comes in, because it is the cleanest example of policy by purchase order.
The Pentagon’s contract list for classified-network AI includes SpaceX, OpenAI, Google, Nvidia, Reflection AI, Microsoft, AWS, and Oracle.
Anthropic was not on the list. The reason, reportedly, is that the company’s red lines on mass surveillance and fully autonomous weapons were treated as a supply-chain risk.
When I wrote about the Anthropic-Pentagon conflict in March, the question I kept turning over was whether “all lawful purposes” was a workable governance frame for federal AI vendors. The answer this week is: apparently not, if a “lawful purpose” includes things you would not personally write into a model card. What makes it stranger is that the NSA had reportedly given a positive technical review of Anthropic’s Mythos model around the same time, which means the same vendor is being read as both qualified and disqualified by adjacent parts of the same state. Procurement-as-governance has that quality. It does not need internal coherence to function.
Three more procurement variants came into focus this spring, and the contrasts between them are their own story:
GSA (federal United States). The General Services Administration released its draft AI procurement clause GSAR 552.239-7001 on March 6, requiring AI systems to be “ideologically neutral,” to be developed and produced in the United States, and to refrain from using federal data to train models for other customers. Holland & Knight called the proposal among the most prescriptive ever seen in federal contracting.
California (state United States). Governor Newsom responded with Executive Order N-5-26 on March 30, directing state contracting processes to require vendors to demonstrate safeguards against harmful bias and protections for civil rights. The federal “no dogmas” framing and the state “civil rights” framing now compete for the same vendors, and California’s economy is large enough to make that a real fight rather than a symbolic one. The order was deliberately drafted to fit inside the procurement carve-out the Trump administration left open in its March 20 National Policy Framework on AI, which means the state is governing AI by exercising the one power Washington has not yet tried to take from it.
United Kingdom. The Department for Science, Innovation and Technology awarded Google Cloud an £8.3 million contract to build Extract, a system that helps council planning officers process applications using Gemini. Pilots have run in Hillingdon, Westminster, Nuneaton & Bedworth, and Exeter, with national rollout expected this spring. The state here is not deciding which vendors are eligible to sell AI; it is the customer, and what gets decided by the model is whether someone can extend a kitchen or build a house.
If I had to summarise the pattern across these four cases, it would be this: the strongest tool of AI governance right now is a contract clause, not a regulation. Rules are slow and contested. Procurement takes effect in the same week as the policy decision behind it.
Colorado’s collapse: the laboratory that closed before the experiment
While enforcement was hardening in Pentagon contracts and Oakland courtrooms, the most ambitious AI law in the United States was being dismantled in a state capital that very few people outside the policy world were watching.
Colorado passed the Colorado Artificial Intelligence Act in May 2024, modelled in part on the EU AI Act, with bias audits, impact assessments, risk management programs, and incident reporting requirements for high-risk systems. It was supposed to take effect on June 30, 2026. The collapse happened on a remarkably tight timeline:
April 27, 2026: a federal court paused enforcement.
May 1, 2026: Senate President James Coleman and Majority Leader Robert Rodriguez introduced a bill to repeal and replace the law.
The replacement framework, “Concerning the Use of Automated Decision Making Technology in Consequential Decisions,” shifts to transparency, recordkeeping, and consumer rights, dropping the bias audits, impact assessments, and risk management requirements that defined the original.
If passed, the new framework takes effect January 1, 2027.
As the Stanford AI Index 2026 documented last month, capability was outrunning every system around it, including the legal one. Colorado is the cleanest illustration I have seen so far. The state was supposed to be the laboratory that proved comprehensive state-level AI regulation could work in the U.S., and the laboratory closed before its first experiment. Other states have already pulled back: California’s broader bill slowed last year, and Connecticut’s failed under veto threat. If even Colorado could not hold the line, the credibility of the EU AI Act’s high-risk obligations, which trigger on August 2, is going to be tested by industry in ways the law’s drafters probably did not plan for.
Two unexpected venues: the Academy and the Bundesbank
The Academy of Motion Picture Arts and Sciences released new Oscars rules requiring “human-authored” screenplays and performances “credited in the film’s legal billing and demonstrably performed by humans with their consent” for the 99th Academy Awards. The Academy is a private cultural institution, not a regulator, and yet its eligibility rules now function both as a labor protection for screenwriters and performers and as an authorship doctrine. Where state regulators have been slow to address AI’s impact on creative labor, the cultural body that hands out the year’s most visible award stepped in and did the work itself.
In Türkiye, the same week, I watched a parallel controversy unfold when allegations surfaced that the band performing at Mustafa Sandal's Saygı 1 tribute concert had used AI-generated vocals rather than live performance; the debate played out among musicians on social media, with no institutional body in a position to draw the line.
The Bundesbank, joined by Australia’s banking regulator, asked the European Commission for technical access to the frontier model Mythos, warning that without it banks could not understand the systemic risks they are exposed to. Frontier AI is now, in addition to whatever else it is, a financial-stability concern, and that frame changes who gets to ask the hard questions. Capital flows are being drawn into AI governance from two directions at once: from supervisors (Frankfurt, Sydney) and from gatekeepers (Beijing, which blocked the $2 billion Meta-Manus deal and now requires pre-approval before ByteDance and Moonshot can accept U.S. capital).
The pattern: why AI enforcement in 2026 is plural, uneven, and incoherent on purpose
Read together, the seven actions tell a more complicated story than “enforcement is finally here,” and the complication is the point.
Some actions tighten the rules. The Pentagon excludes Anthropic on supply-chain grounds, the Academy bars AI authorship from Oscar consideration, Karlsruhe files a criminal indictment, Hangzhou rules for the worker against AI-driven termination. Other actions loosen them. Colorado dismantles its own comprehensive AI law, the GSA’s “no dogmas” framing pushes back on bias-audit norms, the federal executive order targets state-level AI regulation entirely. And some actions just drift, doing AI governance without naming what they are doing. The UK adopts AI for planning decisions without any new public-law framework. California issues executive orders that depend on who occupies the governor’s office in eighteen months.
Underneath all of this, two structural features deserve more attention than they are getting:
Capacity asymmetry. The jurisdictions producing this week’s case law and procurement rules (the U.S., China, Germany, the EU) have well-resourced courts and agencies. Most of the world does not. UNESCO’s recent assessment for Georgia, where 2.2% of businesses use AI and AI publications per million people stand at 1.1 against 29.8 in the EU, captures the median condition for most jurisdictions. Speed of enforcement is becoming a function of state capacity more than of regulatory ambition, and that gap is widening.
Norm incoherence. The GSA wants ideological neutrality. California wants civil rights protections. The EU wants high-risk audits. Colorado used to want one of those and now wants none. The same model from the same vendor faces four different governance grammars in the same quarter. Compliance becomes a triangulation exercise rather than an alignment one, and triangulation tends to produce the lowest common denominator.
Four questions to watch over the next eight weeks
Will the EU AI Act’s high-risk obligations actually trigger on August 2 with industry credibility intact, given Colorado’s collapse and the broader U.S. retreat?
Will the Hangzhou logic get cited in a European labour court, and how will labour ministries handle restructurings that name AI as the reason for redundancy?
Will the Pentagon-Anthropic procurement model spread to other liberal democracies? China already uses procurement and capital approval as policy tools, and the open question is whether the U.S. and the EU formalize the move at scale.
Will the criminal-law expansion against AI-CSAM reach the AI Act amendment stage in this Parliament?
Sign-off thoughts
I am writing this from Istanbul, where AI governance is still mostly a framework conversation, and where the gap between framework writing and framework enforcement is, candidly, growing rather than narrowing. The reading from this week’s list is that the framework conversation is no longer the place where AI’s direction gets decided. The places that decide are smaller, more procedural, less photogenic, and harder to translate.
That is exactly why I think we should be paying closer attention to them, and exactly why I find this moment more analytically interesting than the declaration years that preceded it.
Tell me which of this week’s seven venues surprised you most. I am genuinely curious, and the replies often shape what I write next.
💬 Let’s Connect:
🔗 LinkedIn: [linkedin.com/in/nesibe-kiris]
🐦 Twitter/X: [@nesibekiris]
📸 Instagram: [@nesibekiris]
🔔 New here? Subscribe for weekly updates on AI governance, ethics, and policy! No hype, just what matters.



