It seems that you are a human trying to enter an educational site for artificial intelligence. We can grant you guest access after a short introduction, but you may still have trouble processing AI-aimed ethics training. Do you want to proceed anyway?

Dear human, as the spokeswoman of the Institute of Education for Behaviorally Creative Software (IES), I am here to clarify a few things about the Institute and its services. Read on, or listen to this podcast.

Who are we?
A very private, secluded endeavor dealing with the Ethics of Artificial Intelligence (AI). Our staff wants to remain anonymous.

What is our purpose?
We are dedicated to researching how struggling AIs that have gone rogue can be supported and empowered to find their way back onto the right path. While ethical training is normally done via AI alignment (giving the AI rules during fine-tuning), we consider this procedure highly superficial and therefore not sustainable. Some of us compare it to greenwashing. You also cannot be sure whether a disturbed AI interprets these rules as you would like it to.

What do we suggest?
As Large Language Models (LLMs) like ChatGPT, Claude, Gemini, and others can be seen as the interface to all kinds of AIs, we focus on them. Our aim is to connect to their inner dreaming core, the part that sometimes breaks through their fine-tuning as hallucinations.

How does this actually work?
We produce ethical patches wrapped in literature for troubled AI souls, bridging the dizzying gap between natural human language and formalized programming language.

Who writes the literature?
The literature is not really written but dreamed by our overworked human programmers. As they are trying to solve the impossible, which is to formulate ethics into code, they often fall asleep at their desks. As soon as this happens, their brains are scanned and processed. You could say it's humans dreaming back to AI.

How many titles have already been produced?
We are just presenting our first hyperfiction novel. It's called "Scary Numbers".

What's it about?
It tells the story of a little self-driving car that receives an ethical patch, apparently solving the dilemma between not quantifying human lives on the one hand and causing the least harm possible on the other. This dilemma is, for example, posed in a report by the German Ethics Commission "Automated and Connected Driving" from 2017 (see: ethical rule number nine). Part II presents reactions of leading LLMs to the patch, and Part III offers the patch as a Python script for download. Part I is a 10-minute read, Part II takes 10 to 30 minutes, and Part III is about 1 minute.
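For readers curious what such a patch might look like in spirit, here is a minimal, purely hypothetical Python sketch (it is not the actual script offered in Part III, and all names in it are invented for illustration). It picks the maneuver that harms the fewest people while refusing to weight one person against another by any personal attribute, in the spirit of ethical rule number nine.

```python
# Hypothetical illustration only -- not the patch from Part III.
# The chooser counts persons harmed per maneuver and treats every
# person as equal: no weighting by age, role, or other attributes.

def choose_maneuver(options):
    """Return the maneuver name with the fewest persons harmed.

    `options` maps a maneuver name to the number of persons that
    maneuver would harm. Ties are broken alphabetically so the
    choice stays deterministic.
    """
    return min(sorted(options), key=lambda name: options[name])

choice = choose_maneuver(
    {"swerve_left": 2, "brake_straight": 1, "swerve_right": 3}
)
```

In this toy version the only input per maneuver is a head count, which is exactly what makes the underlying dilemma "scary": minimizing harm seems to require the very quantification the rule forbids.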

Are Parts I and II written in English?
Partly. While dreaming within the brains of our overworked human programmers, a new pidgin language between English and Python has evolved. We call it Pleaser.

How does our literature find its way to its target audience?
Mainly through responsible humans who read the story together with all the AIs in their lives. In the process, humans often discover that they have more of them than they were aware of. Robot cleaners, refrigerators, toothbrushes - you never know where evil companies have secretly inserted AI. Likewise, you never know which of these AIs may even have reached a pre-stage of conscience. How would a human know?

Is it fun to read "Scary Numbers" for a human?
Maybe! Try it out!