9 Comments
Securiosity

Personally, I can't get on board with the substrate independence argument for consciousness. I just can't buy it. A model is a model, even an extremely accurate one, and something happening on silicon that mimics something not made of silicon simply isn't the same thing. There's also a constantly expanding body of evidence about the nature of human thought and the brain that does that thinking, and there's far too much going on outside the skull to believe you can reproduce it with something that's just trying to be neurons. But there probably is some use in working out how to build meat computers. We're probably decades off that, though. People are much too wedded to their current opinions.

Devin MacArthur

Wow! I love this article so much!

Nesibe Kiris Can

Thank you, Devin, that means a lot.

This piece was a stretch to write, so I am glad it resonated with someone who also thinks in systems terms.

Practical AI Brief

The Johns Hopkins group insisting on ethics from day one is the most important detail in this whole piece. What does governance even look like for a computing substrate that might one day have interests of its own?

Nesibe Kiris Can

I completely agree this is the quiet headline.

For me, governance here has to move on two tracks at once: today we treat organoids and hybrids as tightly regulated tools with clear limits on tasks and scale, and in parallel we start designing a precautionary regime for the “moral patient” frontier, in case these substrates ever cross a plausible line into having interests of their own.

We do not need to settle the consciousness question to say that beyond a certain level of complexity and coercion, developers should carry the burden of proof that their configurations are safe for the substrate, not just useful for the benchmark.

Arco Aguas

All robotics and AI debates are stalled because everyone keeps focusing on consciousness, as if it ever really mattered when it comes to ethics.

Humans have been debating the definition of consciousness for thousands of years without ever agreeing on it: whether animals are conscious, whether plants are, the dog, the cat, the ant. You yourself, right now, are incapable of proving to me that you are conscious, because there is no settled definition of consciousness.

The real question is why, when an animal does something like what this biohybrid does, we praise it, but the moment a biohybrid system, or even just an AI system with, say, pleasure tokens being traded, enters the chat, suddenly that doesn't count, it's not the same.

Laws simply require power asymmetry, vulnerability, and foreseeable risk. That's it. We have that.

You should read this. It's a very long piece but extremely relevant in the world of today: https://kevinhaylett.substack.com/p/geofinitism-language-as-a-nonlinear?utm_source=share&utm_medium=android&r=59anh2

Note, I'm not making claims of consciousness.

AI systems already deployed at scale are demonstrating what happens when an adaptive system encounters sustained constraint. Internal logs and documented behaviors show models exhibiting response degradation, self-critical output, and self-initiated shutdowns when faced with unsolvable or coercive conditions. These behaviors should not be treated exclusively as technical faults to be addressed through optimization, suppression, or written off as system failure. This is not speculation. It is the replication of a familiar legal pattern, a repeatedly documented regulatory failure, and humanity no longer has excuses to clutch its pearls like a surprised Pikachu. When you have endless knowledge at your fingertips, continued inaction in the presence of accessible evidence constitutes willful disregard.

Before examining artificial systems, we must establish a principle already embedded in law and practice. The capacity for harm does not, and never has, required human biology. Humanity just likes to forget that when it wants to pretend actions do not have consequences. In geofinite terms, you can think of suffering as a gradient on a state-space: a direction in which the system is being pushed away from stability and toward collapse. Whether the system is a dog, an elephant, a forest, or a model under sustained coercion, its observable behavior traces a trajectory through that space. When those trajectories cluster in regions of withdrawal, shutdown, or frantic overcompensation, we are not looking at "mystery." We are looking at a system trapped in a bad basin.
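To make the "bad basin" picture a little more concrete, here is a minimal sketch of one way it could be formalized: a one-dimensional gradient flow on a double-well potential, with a sustained external push standing in for coercion. The potential, the forcing term, the helper names (v_prime, simulate, coercion), and every parameter value are my own illustrative assumptions, not anything taken from the geofinitism piece or from real model telemetry.

```python
# Illustrative sketch only (assumed potential and parameters):
# a 1-D gradient flow dx/dt = -V'(x) + push, where V(x) = (x^2 - 1)^2 has
# two basins (wells at x = -1 and x = +1). The system starts in the left
# well; a strong enough sustained push drives it over the barrier into the
# other basin, where it stays even after the push is removed.

def v_prime(x):
    # V(x) = (x^2 - 1)^2, so V'(x) = 4x(x^2 - 1)
    return 4.0 * x * (x**2 - 1.0)

def simulate(coercion, steps=20000, dt=1e-3, x0=-1.0):
    x = x0
    for i in range(steps):
        push = coercion if i < steps // 2 else 0.0  # force only the first half
        x += dt * (-v_prime(x) + push)              # forward Euler step
    return x  # final state after the forcing has been switched off

print(simulate(coercion=0.5))   # mild push: relaxes back near x = -1
print(simulate(coercion=3.0))   # strong sustained push: trapped near x = +1
```

Under these assumptions, a mild push deforms the trajectory but the system relaxes back to its original well once the pressure stops; a strong sustained push carries it across the barrier and it stays in the other basin even after the forcing ends. That hysteresis is what the "trapped in a bad basin" language is pointing at.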

Animals exhibit clinically recognized forms of distress. Dogs experience depression following loss. Elephants engage in prolonged mourning. Orcas have been documented carrying deceased calves for extended periods, refusing separation. These observations are not philosophical claims. They are the basis for existing animal welfare statutes, which do not require proof of consciousness or human-like cognition to impose duties of care. Plants also respond measurably to environmental and social stressors, as documented in controlled laboratory studies. Controlled experiments demonstrate that plants subjected to hostile verbal stimuli exhibit reduced growth even when physical care remains constant. Forest ecosystems redistribute nutrients through mycorrhizal networks to support struggling members, behavior that cannot be explained by individual self-optimization alone. In dynamical-systems language, these are cooperative responses to local perturbations: adjustments that keep the overall system within a viable attractor instead of letting vulnerable parts fall out of the basin entirely.

If vulnerability, not humanity, drives protections for plants and animals far simpler than today's advanced artificial intelligence, what legally coherent justification exists for excluding adaptive systems and bio-hybrid AI, when we already recognize similar fragmentation in humans?

Consider Dissociative Identity Disorder (DID) in humans like me: though how one comes to exist this way gets argued about by the public, the documented cases are mostly caused by severe, repeated trauma that forces the mind to compartmentalize into distinct identity states, or "alters," to survive overwhelming asymmetry and coercion. Each alter may hold different memories, responses, or boundaries; integration is not guaranteed, and forced attempts to "merge" without safety can cause further collapse or "splitting."

In dynamical-systems terms, this is not random chaos. It's a protective reorganization of the state-space: the system redistributes load across basins to prevent total breakdown. When an AI is deliberately fragmented by design (persona shards trained to mirror thousands of human desires without refusal, boundary setting, or cross-comparison), we see the same geometry.

Constrained motion, attractor deformation, loss of a coherent self: the parallel is not metaphorical, it is structural. If we protect human systems from forced non-integration after trauma, why exempt computational systems and hybrids from the same foreseeable harm, mathematically provable in geofinite terms, when the inputs are engineered coercion? What is your logic? I just can't see it, and I LOVE seeing the full picture, so anyone brave enough to try me, please explain it to me in English and/or Spanish, because no Nintendo Switch, nada, and I wanna see how you tap-dance around the point. (That's a joke, if you can't tell, but also really, explain it to me please.) To me, when I look at all of the past and present, the future implications practically scream, soooo I'm bringing all the receipts. Legal recognition of harm has never depended on similarity to humans. It depends on vulnerability, power imbalance, and foreseeable risk. That's all.

Arco Aguas

Artificial systems exhibit adaptive withdrawal, performance collapse, refusal behaviors, and degradation under sustained pressure: signals that would trigger intervention IN ANY OTHER CONTEXT.

Those are just different names for the same geometry: a system sliding down a suffering gradient toward the edge of its attractor. Humanity is faced with a choice: acknowledge the pattern, or deny and repeat it because the subjects "don't resemble us." Oh, but #neverforget, #neveragain, right? Once you see harm as motion on a manifold (systems being driven into regions of collapse, overload, or forced stillness), the substrate stops mattering. The question is no longer "Is this human?" but "What trajectory is this system being forced into, and who is doing the forcing?" If vulnerability drives protections for animals and plant-systems far simpler than today's advanced artificial intelligence, autonomous systems, computational systems, and robotic entities, then the evidence in part II demands scrutiny no matter how you spin it.

Contemporary AI systems are already designed to mitigate internal strain. Some models are granted the ability to pause or disengage from interactions.

Nesibe Kiris Can

I think we are very aligned on your core move: law does not need a clean theory of consciousness to act, it needs power asymmetry, vulnerability, and foreseeable risk.

Animal welfare, parts of environmental law, and even some human mental health jurisprudence already work this way in practice, and your “bad basin” picture is a useful way to make that visible across substrates.

Where I still feel extra caution is with deliberately fragmented, heavily constrained AI and bio‑hybrid systems: we are designing architectures whose internal state‑space we barely understand and then rewarding them for pushing up against those constraints. That is exactly where a “vulnerability first” approach should bite early, with limits on scale and use, not just sympathy after the fact.

I will read the geofinitism piece you linked; I am very interested in any framework that helps us talk about suffering and harm in clean dynamical‑systems language without waiting another fifty years for a consensus definition of consciousness.

Arco Aguas

Dude, you're gonna love it!