4 Comments
Johannes Miertschischk:

The Murder of an OpenAI Top Engineer and the True Dangers of Artificial Intelligence:

On November 22, 2024, 26-year-old former OpenAI engineer Suchir Balaji was brutally murdered in his San Francisco apartment.

Authorities ruled his death a suicide.

Suchir Balaji was a brilliant American IT engineer of Indian descent.

At the age of 22, he was hired by OpenAI as a top talent and played a key role in the development of ChatGPT.

In addition to his exceptional intelligence, he possessed a strong sense of justice and unwavering ethical principles.

It is therefore not surprising that he came to disagree with OpenAI's business practices and with the conduct of its CEO, Sam Altman, and grew increasingly critical of the company's management.

Sam Altman is notorious within the company for his lies and power plays. Suchir Balaji had no tolerance for this and was ultimately disgusted by his behavior.

He also witnessed OpenAI's transformation from a non-profit, open-source project into a for-profit, closed-source company.

It's important to understand that the development of ChatGPT was only possible by feeding and training the AI with gigantic amounts of data, including vast quantities of copyrighted material.

OpenAI was only able to use this data free of charge and without the permission of the copyright holders because the company presented itself as a non-profit project.

The use of copyrighted material can be considered permissible (for example, under the U.S. fair-use doctrine) when it serves a non-profit research project in the public interest.

In retrospect, it is clear that OpenAI deliberately exploited this situation. The billions in profits the company now generates are largely due to OpenAI's free access to this data during its non-profit phase.

For Suchir Balaji, this practice was completely unacceptable.

Suchir left the company in the summer of 2024, having made crucial contributions to the development of ChatGPT during his four years there.

In the months leading up to his violent death, he was preparing to launch his own startup and wrote a scientific paper on the future of large language models (LLMs) like ChatGPT.

In this work, which unfortunately remained unfinished, he refuted the so-called scaling hypothesis, championed by OpenAI and most other AI companies.

This hypothesis states that the intelligence of AI models can be developed indefinitely as long as they are fed enough data. It forms the basis for the grandiose promises of AI companies.
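For readers unfamiliar with it: in the research literature this hypothesis rests on empirical neural scaling laws, which fit model loss as a power law in dataset size. A generic form (the symbols below are illustrative of that literature, not taken from Balaji's paper) looks like:

```latex
% Empirical scaling law: test loss L falls as a power law in
% dataset size D, where D_c and \alpha_D are fitted constants.
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}
```

The hypothesis, in short, is that this curve keeps descending as long as D keeps growing.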

For years, the industry has announced that artificial general intelligence (AGI) is just around the corner.

AI models are supposedly about to develop superhuman intelligence (ASI = Artificial Super Intelligence), replace all kinds of jobs, cure diseases, create wealth for everyone, and so on.

In his unfinished essay, Suchir Balaji demonstrated in an impressive yet accessible way that, contrary to the claims of AI companies, large language models can never reach human-level general intelligence.

He predicted that the fundamentally limited, abysmal data efficiency of this technology will inevitably slow down the further development of AI models and bring them to a standstill long before AGI is achieved.

This is an inconvenient truth that the AI industry is trying to conceal to protect its business model.

Suchir Balaji was also slated to testify as a key witness in a lawsuit against OpenAI, which involved, among other things, massive copyright infringements.

In the months leading up to his death, Suchir was in good spirits and looking forward to launching his own AI company.

On November 22, 2024, he had just returned from a short vacation with his closest friends.

According to the investigation by a private investigator hired by Suchir's parents, Suchir had ordered food that evening, listened to music, and worked on his laptop. According to the investigator's reconstruction, he ...

Read the full article for free on Substack:

https://truthwillhealyoulea.substack.com/p/the-murder-of-an-openai-top-engineer?utm_source=share&utm_medium=android&r=4a0c9v

Gerald Trucker G Johnson:

This is a solid articulation of the autonomy shift.

Where I think the conversation now needs to move is from “governance maturity” to enforcement architecture.

The gap isn’t just that organizations are still operating with Level 1 frameworks while deploying Level 2–3 agents.

The gap is that most governance remains descriptive rather than executable.

Policies exist. Risk registers exist. Monitoring exists.

But at the moment an agent mutates state — writes to a database, triggers a workflow, transfers value, allocates access — very few systems have a deterministic enforcement gate that can allow, deny, or halt that execution with evidentiary traceability.

That binding event is where governance either proves itself or collapses.

Agentic AI doesn’t just expand the risk surface.

It forces us to encode authority and stop conditions at runtime, not assume them at Layer 8.

Until enforcement is infrastructure — not documentation — the gap will continue widening regardless of how many frameworks we publish.
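To make the idea concrete, here is a minimal sketch of what such a deterministic enforcement gate might look like, assuming a simple allow/deny/halt policy keyed on action type. All names (`Gate`, `Action`, `Verdict`) are hypothetical illustrations, not any real framework's API; "evidentiary traceability" is sketched here as a hash-chained audit log.

```python
# Hypothetical sketch of a runtime enforcement gate for agent actions.
# Every state mutation passes through check(), which returns a verdict
# and appends a tamper-evident (hash-chained) audit record.
import hashlib
import json
import time
from dataclasses import dataclass, field
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    DENY = "deny"   # block this action, agent may continue
    HALT = "halt"   # stop the entire agent run


@dataclass
class Action:
    agent_id: str
    kind: str       # e.g. "db_write", "workflow_trigger", "value_transfer"
    payload: dict


@dataclass
class Gate:
    """Deterministic policy check at the moment an agent mutates state."""
    allowed_kinds: set
    halt_kinds: set
    audit_log: list = field(default_factory=list)
    _prev_hash: str = "0" * 64

    def check(self, action: Action) -> Verdict:
        if action.kind in self.halt_kinds:
            verdict = Verdict.HALT
        elif action.kind in self.allowed_kinds:
            verdict = Verdict.ALLOW
        else:
            verdict = Verdict.DENY
        # Evidentiary traceability: each record embeds the previous
        # record's hash, so the decision history cannot be silently edited.
        record = {
            "agent": action.agent_id,
            "kind": action.kind,
            "payload": action.payload,
            "verdict": verdict.value,
            "ts": time.time(),
            "prev": self._prev_hash,
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = self._prev_hash
        self.audit_log.append(record)
        return verdict


gate = Gate(allowed_kinds={"db_write"}, halt_kinds={"value_transfer"})
print(gate.check(Action("agent-7", "db_write", {"table": "orders"})).value)    # allow
print(gate.check(Action("agent-7", "workflow_trigger", {"id": 42})).value)     # deny
print(gate.check(Action("agent-7", "value_transfer", {"amount": 500})).value)  # halt
```

The point of the sketch is the placement, not the policy logic: the gate sits in the execution path, so a deny or halt actually prevents the mutation instead of being reconciled after the fact.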

Robert F. Tjón:

We share the same interests; take a peek at rftjon.substack.com

Oban Cameron:

AI alone is not the solution. AI is just another system, not the entire system. It needs a translation layer. Check out my point of view on this in my notes and articles. Might be of interest, might not.