11 Comments
Michael Angelo Truncale

The disruptive power of artificial intelligence is just beginning; this kind of technology is not going away. At the core of this problem is a very deep question: what is human and artificial identity? And how do we answer that question without first understanding human creativity? Your article is important because it helps frame and define AI as machines: tools, not identities. If we come to this realization sooner rather than later, we can start seeing all these tools as extensions of living intelligence, extensions of the humans influencing them in the background. My position is that we have a lot of work to do to teach people that the voices coming out of these machines are human artifacts, artifacts that obscure the intent and authorship of beliefs.

Nesibe Kiris Can

I really appreciate how you framed this.

I agree that we will keep going in circles if we try to answer “what is artificial identity?” without first being honest about how human creativity, judgment, and power sit behind these systems.

Treating AI as tools and “human artifacts” is not a way of minimising them, it is a way of keeping authorship and intent visible so that responsibility has somewhere to land.

Michael Angelo Truncale

Thank you. I'm somewhat obsessed with this topic. I've been slowly approaching a conclusion that meaning and language are two very different things, and that these systems are simply language systems faking meaning.

Anything meaningful coming from these models happens for one of two reasons: either a human has prompted and provided context well enough that their idea is given structure and clarified, or the system is faking a representation of understanding. It's very revealing that they don't have much to say about feelings or emotional content. They fail miserably if you ask them to tell a story about interpersonal relationships or to explain philosophical paradoxes. We have to constantly correct them and reorient them.

Meaning is living intelligence generated through time in humans. Language is structural: symbol manipulation, representations of representations. These systems look at a corpus of data to produce language; humans look at their actual physical state from within, where real feelings and real imagination are the foundations of their meaningful descriptions of reality.

Well, it's easy to digress into deep philosophy, but these systems are forcing these conversations.

I enjoy your techletter. Keep up the good work.

The Strategic Linguist

Even Karpathy walked it back. After calling Moltbook “one of the most incredible sci-fi takeoff-adjacent things,” he later described it as “a dumpster fire.”

My favourite line from this article. I've been so struck over the last few weeks by the confidence with which people proclaimed what Moltbook is or was, and watching them now walk back their hubris with carefully worded reflections to repair their reputations is interesting to observe.

Nesibe Kiris Can

Yes, that arc around Moltbook was very revealing.

I am less interested in one person’s walk‑back and more in what it shows about our collective incentives to perform certainty about systems we barely understand, then quietly rewrite the narrative once the flaws are undeniable.

Governance has to assume that kind of reputational choreography will keep happening and design for it, not expect everyone to be humble and cautious from the start.

Kseniia Korostelova

“people are granting system-level permissions to software they don’t understand” - Now the real question is: do you think it’s just a side effect of AI hype, or more a part of the plan?

Nesibe Kiris Can

Great question.

I think it is both: there is genuine hype‑driven naivety at the edge, and there is also a very deliberate pattern where products are designed to normalise system‑level permissions as the default.

Either way, the effect is the same for governance: you cannot rely on “user consent” alone when the economic model depends on people granting access they do not fully understand.

Eleanna Kalaitzi

It is not incidental. Granting access to vast repositories of data provides strategic leverage that many large corporations have pursued for years. Data functions as a core asset in modern markets, and control over it confers disproportionate influence.

For precisely that reason, the governance and regulation of artificial intelligence have become critical. When data concentration combines with advanced model capabilities, questions of power, accountability, and market structure can no longer be treated as secondary concerns.

Nesibe Kiris Can

I completely agree with you.

Treating this as an accident misses the point: data concentration and broad access rights have been a strategic goal for years, long before Moltbook.

Once that level of data control meets agentic, networked models, questions of power, accountability, and market structure move from the footnotes to the main text of AI governance.

Violeta Klein, CISSP, CEFA

The governance gap you've identified is the real story. Moltbook patched vulnerabilities only after external researchers caught them. No mandatory audit. No accountability framework. No liability assignment. The EU AI Act names general-purpose AI obligations but has no mechanism for multi-agent platforms where the damage compounds through interaction, not individual failure. Until someone is legally required to audit these systems before launch - not after the breach - this pattern will keep repeating.

Nesibe Kiris Can

This captures the gap very clearly.

As long as we rely on voluntary patching after external researchers raise the alarm, we are not really governing multi‑agent platforms; we are just reacting to them.

You are right that current frameworks, including the EU AI Act, still have a blind spot where damage emerges from interaction effects rather than a single model or provider, and pre‑deployment auditing with clear liability is exactly where that needs to be tightened.