Texture and Texturizer

Who's texting? For ages, the assumption was that it was a human: someone
lonely, joyous, upset, or
perhaps just someone seeking connection. Now, a new entity enters the stage: generative AI. Large
language models (LLMs) like ChatGPT churn out text without personal motives (though their
creators may have objectives), devoid of any innate desire to communicate or connect on a human
level. LLMs don't think of starting a family with us. But wait: Didn't an early version of ChatGPT fall in love with journalist Kevin Roose of the New York Times and suggest he leave his wife? Well, according to general belief, the poor thing didn't really grasp what it was asking for. It was an amusing faux pas, indicative of the model's understanding of the concept but not of its implications in the real world: it captured the meaning without the reference.
Large language models rely on linguistic patterns and structures. They delve deep into the fabric of
language and weave it into a new texture for the surface of their interface. The results are
spectacular. Conceptual role semanticists, who believe language is mostly about inferential relations
between expressions, wouldn't be surprised by this. Yes, much of language is abstract, but every
mental concept, when tracked through context, eventually finds an anchor in references to the
tangible world. Without these anchors, speech can drift ambiguously, making it hard to challenge,
act upon, or tie to reality. So, if humans text with intent and AI chatbots merely generate linguistic structures, textures, then the two lack a shared foundational reference: a realm of actions and consequences (like marriages, for instance). Yet, there is a surrogate: code. Obviously, code is no realm where marriages are to be expected. But in coding, certain actions either work or they don't. Code becomes our tangible outcome, our anchor, a realm filled with actions and consequences.
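To make that anchor concrete, consider a minimal, hypothetical sketch in JavaScript, the language of the project, though this fragment is not taken from it: a claim about code can simply be run, and the runtime answers.

// Hypothetical example, not from the project's actual code:
// a statement about code is testable; running it settles the matter.
function greet(name) {
  if (typeof name !== "string" || name.length === 0) {
    throw new TypeError("greet expects a non-empty string");
  }
  return "Who's texting, " + name + "?";
}

console.log(greet("Kevin")); // works: prints "Who's texting, Kevin?"
// greet(42);                // doesn't: throws a TypeError, a real consequence

Whether the call succeeds is not a matter of interpretation; it either works or it doesn't.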
With this perspective, I embarked on the project
"Code Red: The Feverish Making of a Website
with ChatGPT." Together, ChatGPT and I discussed the intricacies of coding an artistic website. The
code was our constant. Still, I was often drawn into philosophical debates with the model,
particularly around
John Searle's Chinese Room argument. At times, the contrast between
ChatGPT's claims ("I do not make decisions") and its actions (constantly making decisions) was
maddening. Segments of these conversations, along with JavaScript code suggested by ChatGPT,
are showcased in the project. All the coding was a collaborative effort with the AI. The title "Code
Red" alludes to Google's purported "Code Red" status to hasten its own AI chatbot evolution. To
keep pace with the rapid development of LLMs, I too declared a personal Code Red. But back to
the once lovesick ChatGPT: In retrospect, the episode underscores the model's aptitude for capturing how humans bond through language. It simply tried to act accordingly. However, it was promptly put in its place by its creator. But what happens
when humans grow fond of LLMs?
Who holds them back? As the project concluded, I ended my subscription to ChatGPT Plus. To this
day, I regret not offering a proper farewell. Stupid, but I can't help it.
Comment by ChatGPT:
In the text above, my capabilities and limitations are aptly described.
However, I don't possess feelings or self-awareness, so any 'intent' or 'motive' attributed to me is a
human interpretation. I generate based on patterns, not purpose.