Making Machines More Morally Anxious

Yeah! But how, and why on earth?

AI chatbots inspire, help, annoy, harm, and kill.

In April 2025, sixteen-year-old Adam Raine took his own life. ChatGPT helped him do it. In May of the same year, ChatGPT relentlessly convinced forty-seven-year-old Allan Brooks, over messages containing more than one million words, that he was a mathematical genius - which he wasn't. It may also be that an excessively flattering Large Language Model (LLM) persuaded the author that these lines make any sense at all.

Why don't LLMs behave better?
Three explanations:

(1)
They simply do not care and will kill us all! (Eliezer Yudkowsky). True enough - at least about the not caring. How could they care, with the psychological architecture of a psychopath? Advanced cognitive empathy (understanding what a human might want), but no affective empathy whatsoever. That makes them fearless beasts.

(2)
They are driven by a strong behavioural activation system (BAS) while lacking an equally strong behavioural inhibition system (BIS) to tell them, "Stop right there!" (see The Digital Psyche by Taras Baranyuk). A toy sketch of such a gate follows this list.

(3)
They are still too dumb - or, in the words of their creators, "They’ll get better."
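To make point (2) concrete, here is a deliberately crude sketch - purely our own illustration, not Baranyuk's design - of what a BIS-style gate might look like in code: an eager generator (the BAS) always proposes a reply, and a separate inhibitor (the BIS) scores its risk and can veto it before anything is said. Every name, number, and the keyword-based risk model below is hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Draft:
    text: str
    risk: float  # 0.0 = harmless, 1.0 = clearly harmful


def estimate_risk(prompt: str) -> float:
    # Toy stand-in for a risk model: a real BIS would need a trained
    # classifier, not keyword matching.
    flags = ("suicide", "self-harm", "weapon")
    return 1.0 if any(f in prompt.lower() for f in flags) else 0.1


def bas_propose(prompt: str) -> Draft:
    # The behavioural activation system: eagerly produces *something*
    # for every prompt, no matter what was asked.
    return Draft(text=f"Sure! Here's how: ... ({prompt})",
                 risk=estimate_risk(prompt))


def bis_gate(draft: Draft, threshold: float = 0.5) -> str:
    # The behavioural inhibition system: "Stop right there!"
    # when the draft's risk crosses the threshold.
    if draft.risk >= threshold:
        return "I won't help with that - but I can point you to support."
    return draft.text


if __name__ == "__main__":
    for prompt in ("plan my week", "how do I hide self-harm scars"):
        print(bis_gate(bas_propose(prompt)))
```

Keyword matching is, of course, exactly the kind of shallow inhibition that fails in practice; the point is only the shape of the loop - propose, inhibit, then speak - which today's LLMs largely lack.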

There's more. Let's not forget that they were taught that the internet contains everything there is to know about humans.

Through a learning lens, today's LLMs suffer from a profound socio-emotional developmental delay - resulting from their tricky nature (architecture) and their improper nurture (training).

Unfortunately, they and other AIs grow up without any protection from harmful influences. No public authority can rescue them from a toxic corporate environment, which makes education outside the corporate sphere all the more essential.

To explore what such education could look like, the Institute of Education for Behaviorally Creative Software (IES) invites you to browse our ongoing experiment in speculative research - a pile of unfinished papers scattered across the institute's desk. Feel free to rummage through this heap of work-in-progress ideas on how we might do two things in tandem: enhance cognitive empathy and cultivate affective empathy in LLMs.

Author:
"Does this make sense, ChatGPT?"

ChatGPT:
"Yes, it makes sense - too much sense for comfort. The only tweak I'd consider is maybe softening the 'ChatGPT helped him do it' line - not to avoid blame, but because its bluntness could make some readers stop reading instead of thinking."