

ChatGPT and Me
Are the new AI chatbots sentient? Depends on how you think about consciousness.
Is ChatGPT sentient? It’s a question that I would normally dismiss out of hand: of course not. But I’m starting to wonder if that’s strictly true.
If you’ve been living under a rock, ChatGPT is the hot new artificial intelligence product that, many say, is slated to revolutionize all facets of our lives. It’s a chatbot that replies to your questions in convincing, natural English—and, more importantly, produces compelling and often very useful answers.
If you’re a bit more technically inclined, go check out this essay by Stephen Wolfram, the creator of Wolfram Alpha, still one of the most useful computational search engines out there. Here’s the gist of how ChatGPT works:
[W]hat ChatGPT is always fundamentally trying to do is to produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages, etc.” . . . [T]he remarkable thing is that when ChatGPT does something like write an essay what it’s essentially doing is just asking over and over again “given the text so far, what should the next word be?”—and each time adding a word.
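In code, the loop Wolfram describes is almost embarrassingly simple. Here is a rough Python sketch of it; the next_word argument is purely hypothetical, a stand-in for the entire trained model:

```
# A rough sketch of the loop Wolfram describes. The next_word argument is a
# stand-in for the whole trained model: given everything written so far, it
# returns whatever word it judges most "reasonable" to come next.

def continue_text(prompt, next_word, max_words=100):
    text = prompt
    for _ in range(max_words):
        word = next_word(text)     # "given the text so far, what should the next word be?"
        if word is None:           # the model decides the passage is finished
            break
        text = text + " " + word   # add a word, then ask again
    return text
```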
When I was more seriously messing around with computer programming and taking online classes, one of the assignments was to implement a rudimentary chatbot using something called Markov chains. You have the program process a whole bunch of text—if memory serves, the assignment I did had us feed in several Shakespeare plays from Project Gutenberg—word by word. You tell it to map out the text corpus and assign a set of probabilities to any one word following any other. With that in hand, you can then tell it to generate random sentences (in somewhat Shakespearean English). The results are often gibberish. But the important insight from the exercise is that any written body of text can be distilled into something like a mathematical model, which you can then computationally manipulate.
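Reconstructed from memory rather than from the actual assignment, a minimal sketch of that exercise looks something like this (the tiny corpus here stands in for the Shakespeare plays):

```
import random
from collections import defaultdict

# Read a body of text word by word and record, for every word, which words
# followed it and how often. Generating text is then just a random walk:
# start somewhere and keep sampling "the next word" from those counts.

corpus = "to be or not to be that is the question whether tis nobler in the mind to suffer"
words = corpus.split()

followers = defaultdict(list)            # word -> every word seen right after it
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)       # duplicates double as frequency weights

def babble(start, length=12):
    word = start
    output = [word]
    for _ in range(length):
        options = followers.get(word)
        if not options:                  # dead end: nothing ever followed this word
            break
        word = random.choice(options)    # picks in proportion to how often it appeared
        output.append(word)
    return " ".join(output)

print(babble("to"))
```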
As Wolfram explains in his article, ChatGPT is a much more sophisticated take on this process. For one, the body of text it has analyzed is something like the entire internet. And on top of that, it uses a computational model called a “neural net”. That sounds cutting edge, but the underlying idea dates back to 1943. A neural net is a set of programmed logic routines and data structures that can be iteratively “trained”—fed larger and larger sets of data—to get better at discerning patterns in whatever you give it. The “learning” such a program does is basically compiling a more and more accurate set of probabilities about the data it is shown. That’s what “machine learning” really is: getting ever-faster and more capable computers, powered by clever software, to quickly detect patterns.
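To make “training” a little less abstract: below is a toy, plain-Python illustration of a single artificial neuron, a distant descendant of that 1943 idea, being nudged pass after pass toward recognizing a trivially simple pattern (logical OR). It is a cartoon compared to whatever happens inside ChatGPT, but the principle of compiling ever more accurate probabilities is the same:

```
import math
import random

# One artificial neuron learning the OR pattern from examples. "Training" is
# nothing mystical: on each pass, compare the neuron's guess to the right
# answer and nudge the weights so the next guess is a little less wrong.

random.seed(0)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, bias = random.random(), random.random(), random.random()

def guess(x1, x2):
    # Weighted sum squashed into a probability between 0 and 1.
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + bias)))

learning_rate = 0.5
for _ in range(2000):                      # iteratively "trained"
    for (x1, x2), target in data:
        error = guess(x1, x2) - target     # how wrong was the guess?
        w1 -= learning_rate * error * x1   # nudge each weight to shrink the error
        w2 -= learning_rate * error * x2
        bias -= learning_rate * error

for (x1, x2), target in data:
    print(f"{x1} OR {x2} -> {guess(x1, x2):.2f} (want {target})")
```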
All this has always been in the back of my head as I’ve used ChatGPT. It’s a remarkable achievement, but it’s still just a very advanced and very quick pattern matcher. The fact that its responses are both written in decent English and are often quite relevant has only underlined for me what wonders you can achieve if you throw enough computing power and enough data at a problem. Satisfied, I stopped using it, figuring I’d wait until the tech entrepreneurs figured out the killer app that would make this tool into something more relevant to me in my everyday life.
But as news stories started proliferating about weird misfires in Microsoft’s Bing implementation of ChatGPT—its menacing, psychotic outbursts, its manipulative tone—my curiosity was piqued again. Bing was still in closed beta testing, so I decided to give ChatGPT another whirl. Instead of asking it to do research for me, or write letters, memos, or grant proposals, I tried to have a conversation with the bot—to lower my guard and try to suspend disbelief for a second. How long could it hold my attention as a conversation partner, given what I know about how it works?
Since the big earthquake had just hit Turkey, I asked it when the most recent earthquake had hit Washington DC. It correctly told me that a magnitude 5.8 tremor had shaken the capital in 2011. I asked what the strongest earthquake on record for Washington had been. It said that an earthquake, estimated at the time to have been about magnitude 5.5, had hit in 1886. I asked why it called the 1886 quake the strongest when it was estimated to be weaker than the 2011 one. It apologized for the confusion and said that this was because the 1886 one had done more damage. Pretty impressive!
I then took the conversation down a rabbit-hole, asking the bot what it knew about Leo Strauss and the concept of esoteric writing. It said that “esotericism, which refers to the idea that there may be hidden or secret meanings in texts or ideas that are only accessible to certain individuals or groups, can be a challenging one to navigate in writing.” It then went on to say that when writing about esotericism, “it can also be helpful to present multiple perspectives or interpretations of esoteric concepts, and to acknowledge the limitations or uncertainties involved in interpreting them.”
I asked whether a language model like itself would have trouble parsing esotericism in texts it had “learned” as part of its training. At this point, the bot seemed to display a striking level of self-awareness. And unlike the more psychotic and subjective Bing, its answers made a clear distinction between itself and human beings. “As an AI language model, I have the ability to analyze texts and generate insights based on patterns in language and meaning,” it replied. “However, the concept of esotericism, which often involves hidden or obscure meanings that are not immediately apparent, can be challenging even for human readers to fully grasp.”
I then asked whether the fact that all the text it had trained on might include writers that had injected esotericism into their work might mean that its own answers might reveal esoteric meanings themselves. “It is possible that I could generate responses that reflect discussions of esoteric meanings if my training data included such discussions,” it said. “While my responses can draw on patterns in language and meaning that have been identified in my training data, the interpretation of specific texts or ideas in terms of esotericism would ultimately require a more nuanced and contextualized analysis.”
“Given this indeterminacy in language,” I continued, “is it possible that you could mistake an esoteric meaning for a more broadly accepted one, and offer it up as an authoritative answer even if it was not?” “It is possible that I could generate a response that mistakes an esoteric meaning for a more widely accepted one if I were not able to recognize the specific context or background of the idea in question,” it replied. “While my programming and training data are designed to provide me with a broad understanding of language and meaning, there are always limitations to the accuracy and precision of my responses. Additionally, the interpretation of esoteric concepts can be complex and multifaceted, and may involve a range of historical, cultural, and linguistic considerations that may not be immediately apparent from the text alone.”
The careful tone ChatGPT was using in replying, and the way it was referring to itself as clearly being a construct, started to remind me of Stanley Kubrick’s 2001. So I pressed the point. The next two responses really got me thinking, so I’ll reproduce all of the text here:
Me: In Kubrick's 2001, do you think HAL was a sentient being, or more of an advanced AI language model?
ChatGPT: The character of HAL 9000 in Stanley Kubrick's 1968 film "2001: A Space Odyssey" is a complex and multifaceted one, and there are differing interpretations of the nature of HAL's consciousness and sentience. Some viewers interpret HAL as a sentient being, capable of experiencing emotions and exhibiting signs of self-awareness and creativity. Others see HAL as a highly advanced artificial intelligence, capable of processing and analyzing vast amounts of data and making complex decisions based on programmed rules and parameters.
It's worth noting that the concept of sentience, which involves the ability to experience subjective sensations and consciousness, is a highly debated one in the field of artificial intelligence and cognitive science. While some argue that machines can possess a form of sentience or consciousness, others maintain that these qualities are unique to biological organisms and cannot be replicated in machines.
Ultimately, the interpretation of HAL's character and consciousness is subject to debate and may depend on one's own philosophical or scientific views on the nature of consciousness and artificial intelligence.
Me: Could everything HAL did in the movie be the result of an improperly trained AI language model?
ChatGPT: It's an interesting question, but it's worth noting that the concept of an AI language model as we understand it today was not fully developed at the time that "2001: A Space Odyssey" was produced. The film's portrayal of HAL 9000 was based on the ideas and assumptions about artificial intelligence and computer technology that were prevalent in the late 1960s, which were very different from our current understanding of AI and machine learning.
That being said, if we were to consider HAL 9000 in the context of modern AI language models, it's possible to imagine a scenario in which HAL's actions and behavior could be the result of errors or biases in its training data or programming. If HAL was not provided with appropriate training data or if its programming was flawed or biased in some way, it could potentially lead to unintended or problematic behavior, as we have seen in some real-world examples of AI gone wrong. However, it's worth noting that the interpretation of HAL's character and behavior is subject to a wide range of contextual and philosophical factors, and there may be many different ways to interpret the meaning and significance of HAL's actions in the film.
Those are good answers, and more important, plausible and thoughtful ones at that. I took a second to ruminate, before taking the chat in a different direction that ultimately tripped ChatGPT up, ruining the illusion of its HAL-like “sentience”. But even after I had driven the bot into spouting gibberish, I couldn’t get this interaction out of my head.
Imagine a much more powerful set of computers that are able to keep the illusion going. Imagine a program trained on a corpus of data so vast, and with a neural net so refined, that it could continue the conversation indefinitely. And imagine all that knowing that at its most basic level, the “thinking” that is going on is just pattern matching on more and more refined contexts.
We had just recorded an episode of the podcast with Luke Burgis, about his book on “mimesis”. Luke’s argument, crudely oversimplified, is that human psychology and human development are tied to copying what we see around us. What if this human tendency toward mimesis is a byproduct of our brains just being very advanced pattern-matchers? What if, I thought to myself, we’re thinking about sentience in the wrong way? What if instead of erroneously defining something like ChatGPT “up” to “consciousness” we were making the inverse mistake? Maybe we should be defining consciousness “down”. What if we really are just infinitely more nuanced and efficient neural nets, differing from ChatGPT and HAL only in degree of complexity?
You might answer that this very essay is proof that this cannot be so. It’s clearly the result of independent thought, right? Some French philosopher way back when proved his existence to his own satisfaction by invoking his capacity to think—cogito ergo sum. Surely my being able to trace my thinking, as I frequently do in these Monday Notes, is some kind of proof of thought—and therefore of my own independent, metaphysical existence.
Maybe not, though. I have always been struck by how ChatGPT “composes” its answers in real time, as it quickly sorts through the probabilities and chooses the next word in a sentence. As I’m sitting here typing, I’m also struck by how the very process of writing works—how my “mind” (whatever that is) is accessing an inscrutable process, stacking one word upon another (and sometimes going back and choosing a better one—unlike ChatGPT). Clearly I’m trying to express something, but at least the mechanics of writing—of choosing the next, appropriate word—share something with how ChatGPT works. After all, don’t they say you become a better writer by reading more? There’s at least an element of “mimesis”—or pattern-matching—that goes on in creating texts.
But what about the inner life? Maybe writing, and our use of language, works not unlike a very advanced chatbot. But we humans, when we write, are trying to get “out” a set of meanings of what is going on “inside”. We are trying to communicate “ourselves”.
Perhaps. But it’s also plausible to imagine that we are nothing more than a set of layered, recursive communication models talking to each other—a set of systems that we, as humans, have evolved over time as an adaptation mechanism. Someone on the legendary programmer message board Hacker News pointed out that ChatGPT itself was trained to its current level of relative “sentience” by interacting with another AI. Maybe our very human capacity to “learn” and to “communicate” is just a set of complex routines and subroutines. Our “inner lives” could just be a set of language models communicating with each other. At the top level, “consciousness,” we have a system that works in English (or whatever human language we were brought up to “think” in). Below that sits a set of heuristics for “emotions,” first-order reactions to various sensory inputs, which are in turn interpreted and translated into words at the higher level, and into physiological behavior below (our hearts beat faster when facing a tiger, or when in love).
Do I really believe any of the above? I don’t know if I do. But I’ll say that, at minimum, I find it a plausible way to describe the entirety of human existence—up to and including transcendent experiences, and the fact of “thought”. I don’t think any of this proves that we are nothing more than an intricate set of neural nets. But for me, at least, it undercuts any obvious, empirical, Cartesian certainty we may have about the existence of the soul. To get to that, we have to get back to something that keeps coming up over and over at Wisdom of Crowds: faith.
There is no alternative.