The disruptive power of artificial intelligence is just beginning, and this kind of technology is not going away. At the core of this problem is a very deep question: what is human and artificial identity? And how do we answer that question without first understanding human creativity? Your article is important because it helps frame and define AI as machines - tools, not identities. If we reach this realization sooner rather than later, we can start to see all these tools as extensions of living intelligence, extensions of the humans influencing them in the background. My position is that we have a lot of work to do to teach people that the voices coming out of these machines are human artifacts - artifacts that obscure the intent and authorship of beliefs.
Even Karpathy walked it back. After calling Moltbook “one of the most incredible sci-fi takeoff-adjacent things,” he later described it as “a dumpster fire.”
My favourite line from this article. I’ve been so struck by the confidence with which people declared what Moltbook is/was over the last few weeks that watching them walk back their hubris with carefully worded reflections, to repair their reputations, is interesting to observe.
“people are granting system-level permissions to software they don’t understand” - Now the real question is: do you think that’s just a side effect of AI hype, or more part of the plan?
It is not incidental. Granting access to vast repositories of data provides strategic leverage that many large corporations have pursued for years. Data functions as a core asset in modern markets, and control over it confers disproportionate influence.
For precisely that reason, the governance and regulation of artificial intelligence have become critical. When data concentration combines with advanced model capabilities, questions of power, accountability, and market structure can no longer be treated as secondary concerns.
The governance gap you've identified is the real story. Moltbook patched vulnerabilities only after external researchers caught them. No mandatory audit. No accountability framework. No liability assignment. The EU AI Act sets out obligations for general-purpose AI but has no mechanism for multi-agent platforms, where the damage compounds through interaction rather than individual failure. Until someone is legally required to audit these systems before launch - not after the breach - this pattern will keep repeating.