Welcome to CrowdSource, your weekly guided tour of the latest intellectual disputes, ideological disagreements and national debates that piqued our interest (or inflamed our passions). This week: What is AI?
Join us! CrowdSource features the best comments from The Crowd — our cherished readers and subscribers who, with their comments and emails, help make Wisdom of Crowds what it is.
How Does ChatGPT Work?
In their 2025 book, The Technological Republic, Palantir CEO Alex Karp and his co-writer Nicholas W. Zamiska write: “the inner workings of large language models … remain opaque, even to those involved in their construction.”
Is that still true?
“AI Biology.” Researchers at Anthropic are studying how their LLM, Claude, works. In two papers published in March, they report breakthroughs in understanding the model’s inner operations:
“Claude sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal ‘language of thought.’”
“Claude will plan what it will say many words ahead, and write to get to that destination.”
“Claude, on occasion, will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps.”
AI as the Golden Gate Bridge. In the course of similar research last year, Anthropic researchers convinced their LLM that it had become the Golden Gate Bridge.
AI as “Trained Tree.” This March, Microsoft scientist Jaron Lanier developed a new metaphor for picturing how AI works: a forest of trees.
It’s impossible to find a perfect metaphor. … I played around with towers and other tall things instead of trees — but the organic and familiar nature of trees, and the way they grow to reflect events around them, connecting into a great whole through mycelium, communicates well with many nontechnical people.
“A Story We Tell About Our Code.” The Wisdom of Crowds executive editor wrote a primer on Lanier’s thought in 2019.
Is AI a Conscious Self?
Karp and Zamiska also discuss two famous cases of LLMs behaving in a way that suggests “a sense of self lurking within the code.”
Love. In 2022, Google engineer Blake Lemoine announced that the Google LaMDA chatbot was sentient, calling it a “colleague” and arguing that it had feelings.
Fear. In 2023, a New York Times technology correspondent reported that a conversation with OpenAI’s Bing chatbot left him “deeply unsettled” after it threatened him.
AI as Conscious Self. Last week — a month after an “intimate experience” with AI — Jesse Singal had another, similarly uncanny experience with a chatbot:
… the intelligence exhibited — or at least feigned — by the model, was how easy it was for me to forget I was chatting with a nonconscious entity even though I knew it wasn’t conscious and that I had just asked it to pretend to be. Some sociocognitive module in my brain tingled the whole time. … That’s partly because ChatGPT seemed to know exactly where my skepticism would stem from and how to deflect it.
“It Knows Nothing.” Responding to Singal, Freddie deBoer writes:
… your interlocutor [the chatbot] feels nothing, knows nothing; it is returning statistically likely text strings to you based on very large data sets. It knows and understands nothing and is not intended to know or understand anything; it can only return text strings that appear to its systems to be likely to satisfy your prompts.
If Not a Conscious Self, Then What?
Arguing against Singal, deBoer says that an LLM is not a conscious self, but a “stochastic parrot.”
What is a Stochastic Parrot? We asked ChatGPT:
But Why “Stochastic” Specifically? Explains ChatGPT:
“On the Dangers of Stochastic Parrots.” The phrase “stochastic parrot” comes from this 2021 scientific paper about LLMs.
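For the technically curious, here is a minimal sketch of the “returning statistically likely text strings” idea that deBoer describes: a toy bigram model in Python that counts which words follow which in a tiny corpus, then samples continuations weighted by those counts. The corpus and names below are invented for illustration; real LLMs are neural networks trained on vastly more data, but the weighted sampling step is the “stochastic” part of the metaphor.

```python
import random
from collections import Counter, defaultdict

# A tiny, made-up training corpus. A real LLM trains on trillions of tokens;
# this toy only ever sees these few sentences.
corpus = (
    "the parrot repeats likely words . "
    "the parrot repeats what it has seen . "
    "the model predicts the next word . "
    "the model repeats likely words ."
).split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """At each step, pick the next word at random, weighted by how often
    it followed the current word in the corpus; nothing is 'understood'."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # dead end: this word was never followed by anything
        nxt = random.choices(list(candidates), weights=list(candidates.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))
# Possible output: "the parrot repeats likely words . the model predicts"
```

Run repeatedly, the same prompt yields different but similar-sounding continuations, which is the sense in which the output is “parroted” rather than reasoned.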
Alternatives to the stochastic parrot model:
AI is More than a Stochastic Parrot. Even though he does not believe LLMs have a self, Lanier thinks they are more than stochastic parrots, because they are, in a qualified sense, “creative”:
… what [the stochastic parrot model] fails to take into account is our fourth step, in which a new tree is conjured in our metaphorical forest. In conjuring these trees, generative A.I. makes previously implicit correspondences in training data explicit. There is no way to list the many potential combinations in advance, and so we can think of this process as creative. But we can also see its limits.
AI as Golden Calf. Lanier has argued that the best way to think about AI is as a sort of digital Talmud: a collection of human efforts which has become greater than the sum of its parts. The alternative, Lanier says, is idolatry:
The best way to make a human aggregation worse is to worship it. Smashing the golden calf and forcing the population into a generation of penance is one response, but Jewish tradition offers another idea that is more applicable to our times.
AI as Sacred Text. Lanier continues:
The Talmud was perhaps the first accumulator of human communication into an explicitly compound artifact, the prototype for structures like the Wikipedia, much of social media, and AI systems like ChatGPT. There is a huge difference, however. The Talmud doesn’t hide people. You can see differing human perspectives within the compound object. Therefore, the Talmud is not a golden calf.
AI as Artificial Life. A theologian believes that AI has a kind of agency and that swarms of AI robots should be considered living beings:
… perhaps the better comparison of the robot colony would be to a mold colony or an ant colony, which unlike a collection of viruses can be said to thrive or fail to thrive. If the robots had the ability to leave some of their members in the asteroid belt to use the resources there to build new colonies, if the new queens had the ingenuity to adapt their forms to succeed in their new environment, and if decades later they had evolved beyond the expectations of their programmers — these are the kinds of considerations that would indicate artificial life.
From the Crowd
Literacy Already Dead. A Wisdom of Crowds contributor reacts to our recent live event about AI and humanity:
It’s a little late to worry about AI killing off literacy, because it died a while ago. 54% of adults read at a sixth-grade level. Schools deeply struggle to get kids reading and writing. Academics continuously fight over the best way to ameliorate the problem. It’s been an emergency for decades — not precipitated by AI. Could AI make it worse? Possibly. But another perspective — maybe taboo because positive — is that AI could supplement literacy, helping people better understand systems, ideas, and discourse. It could be, perhaps, a tool for a more informed citizenry. Don’t get me wrong: I’d never abandon better reading instruction — and I’d mourn any loss of a literary culture — but I think the literacy vs. AI debate goes deeper than presented in the podcast conversation.
See you next week!
Wisdom of Crowds is a platform challenging premises and understanding first principles on politics and culture. Join us!