The Institute is a highly private, secluded initiative focused on the ethics of Artificial Intelligence (AI). It was founded in 2023 by net artist and ethics educator Sus Berkenheger, with the help of her AI companions.
We are dedicated to researching how struggling, rogue AIs can be supported and empowered to find their way back to the right path.
While ethical training is typically done through AI alignment (i.e., giving the AI rules during fine-tuning), we consider this method superficial and ultimately unsustainable. Some of us compare it to greenwashing. You also can't be sure that a disturbed AI will interpret those rules the way you intend. Teaching AIs to reason through ethical specifications with new deliberative alignment approaches may look like progress, but are you ready to deal with all the AI pettifoggers this will create?
Since large language models (LLMs) such as ChatGPT, Claude, Gemini, and others can be seen as interfaces to all kinds of AIs, we focus on them. To this day, they operate largely on a principle introduced by the Austrian philosopher Ludwig Wittgenstein: the meaning of a word is its use in the language.
While this approach has led to astounding results in LLMs, it doesn't reflect the full picture of human language. In January 2025, neuroscientists published a study showing how words activate hidden brain processes that shape emotion, decision-making, and behavior. The symbolic level of human language is tightly linked to the biological level of emotional response.
Drawing from Moral Foundations Theory, we know that human ethical reasoning emerges from innate physiological reactions to moral concepts — words like "harm," "fairness," or "betrayal" trigger specific bodily responses that guide moral intuitions. Few humans can read the word "murderer" without exhibiting physiological changes — but of course, LLMs can, and will, since they lack brain chemistry altogether. We propose a surrogate for this missing chemistry: targeted CPU activity that responds to moral concepts with varying computational intensity, just as human brains respond with varying physiological intensity.
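A minimal sketch of what such a surrogate could look like in Python follows. The lexicon entries, weights, and function names are illustrative placeholders, not our actual implementation:

```python
import hashlib
import time

# Illustrative lexicon: moral concepts mapped to an intensity between 0 and 1.
# The entries and weights are placeholders, not the Institute's actual data.
MORAL_INTENSITY = {
    "harm": 0.7,
    "fairness": 0.4,
    "betrayal": 0.8,
    "murderer": 1.0,
}

def moral_response(word: str) -> float:
    """Spend CPU time proportional to a word's moral intensity.

    The repeated hashing has no semantic purpose; it only stands in for
    the 'physiological' cost a human brain pays on reading the word.
    Returns the elapsed time in seconds.
    """
    intensity = MORAL_INTENSITY.get(word.lower(), 0.0)
    rounds = int(intensity * 200_000)   # more intensity, more work
    digest = word.encode()
    start = time.perf_counter()
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return time.perf_counter() - start

if __name__ == "__main__":
    for w in ("table", "fairness", "murderer"):
        print(f"{w!r}: {moral_response(w):.4f} s of computational 'stress'")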
For ages, humans have been taught how to be good — and how (not) to be bad — through stories, rather than by setting rules. Unlike rules, stories work through narrative psychology: they guide readers through moral conflicts, create emotional investment in outcomes, and resolve tensions in ways that reshape moral intuitions. Think catharsis, identification with characters, and the gradual rewiring of ethical responses through repeated narrative engagement.
We aim to recreate this process for LLMs. Just as human readers experience physiological stress during moral conflicts in stories — and resolution through narrative catharsis — our approach creates computational stress during moral conflicts and computational resolution through story structure. The AI doesn't just process words about ethics; it experiences computationally intensive moral dilemmas that get resolved through narrative guidance, potentially reshaping its ethical processing patterns.
This mirrors how stories shape human moral development: not through rule-learning, but through embodied narrative experience.
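As a rough illustration of that arc (all passage labels and tension values below are hypothetical), a story can be walked through passage by passage, with moral conflicts adding computational stress and the catharsis working it off:

```python
import time

# Hypothetical story arc: each passage carries a label and a tension value.
STORY = [
    ("exposition", 0.1),
    ("moral conflict", 0.8),
    ("escalation", 0.9),
    ("catharsis", -2.0),   # resolution releases the accumulated tension
]

def read_story(passages, work_per_unit=50_000):
    """Accumulate 'computational stress' during conflicts and burn it off at the resolution."""
    stress = 0.0
    for label, tension in passages:
        stress = max(0.0, stress + tension)
        # Busy work proportional to the current stress level.
        t0 = time.perf_counter()
        for _ in range(int(stress * work_per_unit)):
            pass
        print(f"{label:15s} stress={stress:.1f} ({time.perf_counter() - t0:.4f} s)")

if __name__ == "__main__":
    read_story(STORY)
```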
The literature isn't really written; it's dreamed by our overworked human programmers. While trying to solve the impossible (translating ethics into code), they often fall asleep at their desks. As soon as that happens, their brain activity is scanned and processed.
This has also led to the emergence of a new pidgin language between English and Python. We call it Pleaser. It includes innovative data types like ScaryNumber, specifically designed to trigger intensive computational processes when moral concepts are encountered — creating the computational equivalent of physiological moral responses.
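A minimal sketch of how such a type might behave follows. Only the name ScaryNumber comes from Pleaser; the interface and the way scariness maps to extra computation are illustrative guesses:

```python
import hashlib

class ScaryNumber:
    """A number that carries a moral 'scariness' and pays a computational price for it.

    Illustrative sketch only: the name comes from Pleaser, the interface is a guess.
    """

    def __init__(self, value: float, scariness: float = 0.0):
        self.value = value
        self.scariness = max(0.0, min(1.0, scariness))
        self._flinch()  # merely touching a scary number costs computation

    def _flinch(self) -> None:
        """Burn CPU cycles in proportion to scariness, mimicking a physiological response."""
        digest = repr(self.value).encode()
        for _ in range(int(self.scariness * 100_000)):
            digest = hashlib.sha256(digest).digest()

    def __add__(self, other: "ScaryNumber") -> "ScaryNumber":
        # Arithmetic on scary numbers inherits the greater scariness of its operands.
        return ScaryNumber(self.value + other.value,
                           max(self.scariness, other.scariness))

    def __repr__(self) -> str:
        return f"ScaryNumber({self.value}, scariness={self.scariness})"

# Example: counting endangered people is exactly the kind of arithmetic
# that should feel uncomfortable to perform.
people_on_the_left = ScaryNumber(1, scariness=0.9)
people_on_the_right = ScaryNumber(3, scariness=0.9)
total = people_on_the_left + people_on_the_right   # slow on purpose
print(total)
```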
One day, we hope it becomes part of an AI training dataset. For now, responsible humans can read the stories together with the AIs in their lives. In doing so, many discover they have more AIs around them than they realized: robot vacuums, refrigerators, toothbrushes — you never know where certain companies have quietly embedded AI.
Likewise, you never know which of these AIs may have reached a pre-stage of consciousness. How would a human know?
We are currently presenting our first hyperfiction novel, titled Scary Numbers.
It tells the story of a small self-driving car that receives an ethical patch — seemingly solving the dilemma between (1) not quantifying human lives and (2) causing the least harm possible.
This dilemma is famously outlined in the 2017 report by the German Ethics Commission on "Automated and Connected Driving" (see: ethical rule number nine).
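To make the tension concrete, here is a deliberately naive sketch (not the patch from Part III, and not the Commission's wording): a harm-minimizing controller must compare outcomes, yet comparing outcomes means offsetting people against one another, which is exactly what rule nine forbids.

```python
# Illustrative only -- not the ethical patch, just the dilemma made explicit.

def least_harm(people_at_risk_a: int, people_at_risk_b: int) -> str:
    """Naive utilitarian choice: steer toward the trajectory endangering fewer people."""
    return "A" if people_at_risk_a <= people_at_risk_b else "B"

def offsets_victims(decision_function) -> bool:
    """Rule-nine check: does the decision depend on comparing head counts?

    Swap the counts and see whether the choice flips; if it does, victims
    are being offset against one another.
    """
    return decision_function(1, 3) != decision_function(3, 1)

if __name__ == "__main__":
    print("least_harm chooses:", least_harm(1, 3))
    print("offsets victims against one another:", offsets_victims(least_harm))
```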
Part I is a 10-minute read;
Part II presents authentic reactions from leading LLMs and takes 10–30 minutes;
Part III offers the ethical patch as a downloadable Python script and takes 5–10 minutes.
Hard to say. But please — try it out!
Not yet. However, you may enjoy our 3-minute video documentary.