Discussion about this post

Michael Angelo Truncale

The disruptive power of artificial intelligence is just beginning, and this kind of technology is not going away. At the core of this problem is a very deep question: what is human and artificial identity? And how do we answer that question without first understanding human creativity? Your article is important because it helps frame and define AI as machines - tools, not identities. If we reach this realization sooner rather than later, we can start seeing all these tools as extensions of living intelligence, extensions of the humans influencing them in the background. My position is that we have a lot of work to do to teach people that the voices coming out of these machines are human artifacts - artifacts that obscure the intent and authorship of beliefs.

Violeta Klein, CISSP, CEFA

The governance gap you've identified is the real story. Moltbook patched vulnerabilities only after external researchers caught them. No mandatory audit. No accountability framework. No liability assignment. The EU AI Act sets out obligations for general-purpose AI but has no mechanism for multi-agent platforms, where the damage compounds through interaction rather than individual failure. Until someone is legally required to audit these systems before launch - not after the breach - this pattern will keep repeating.
