Welcome to CrowdSource, your weekly guided tour of the latest intellectual disputes, ideological disagreements and national debates that piqued our interest (or inflamed our passions). This week: the limits of AI.
Join us! CrowdSource features the best comments from The Crowd — our cherished readers and subscribers who, with their comments and emails, help make Wisdom of Crowds what it is.
Against the Machine
AI skeptics are having a moment.
Underwhelming. GPT-5 received mixed reviews (at best). Everyone is wondering if there’s an AI bubble.
Worrisome. The ultimate AI doomer book, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, by Eliezer Yudkowsky and Nate Soares, was published this month.
Ineradicable. Also published this month, a paper by OpenAI scientists concludes that while LLM hallucinations can be minimized, they will never be eradicated — they arise from “natural statistical pressures.”
Mythical. Finally, Tuesday saw the release of the much-anticipated, hyper-tech-skeptical book Against the Machine: On the Unmaking of Humanity, by Paul Kingsnorth.
The Limits of AI
Three recent AI-skeptical pieces, one from an unexpected source:
Scaling Doesn’t Work.
Gary Marcus, founder of two AI companies, argues the industry is working with a flawed plan for achieving superintelligence:
… tech leaders and investors had put far too much faith into a speculative and unproven hypothesis called scaling: the idea that training A.I. models on ever more data using ever more hardware would eventually lead to A.G.I., or even a “superintelligence” that surpasses humans. […] To build A.I. that we can genuinely trust and to have a shot at A.G.I., we must move on from the trappings of scaling.
Induction v. Deduction. Journalist
appeals to the thought of an obscure genius named Peter Putnam to explain the difference between an LLM and the human mind:
The job of artificial intelligence is to produce answers through the deductive process of computing. The job of induction is to create new questions, so that one question can lead to another and another in a process of repetition without answer. That is what “thought” is. It’s not conclusion. It is questing. Induction involves constantly reaching out. Deduction involves constantly taking in, which is why all those data centers are so enormous.
Don’t Ignore the Substrate. Doubts about AI consciousness from
, Google DeepMind AGI Development Lead:
Even if one believes models are conscious, it’s a whole other argument to claim that their consciousness implies similar reactions/phenomenology/experience as biological conscious beings. Why assume pain has the same relevance/salience to a model’s experience than it does to a human’s experience? Why assume a model saying I’m hungry has any weight? Why ignore the question of substrate?
Symbols Are Not Meanings
Yesterday, the American philosopher John Searle died. AI skeptics still invoke his so-called “Chinese Room” argument, which says that while computers can manipulate symbols, they cannot understand them.
Or, as Searle put it: “Symbols are not meanings.”
“Minds, Brains, and Programs.” A key passage from Searle’s famous 1980 paper:
… the formal symbol manipulations by themselves don’t have any intentionality; they are quite meaningless; they aren’t even symbol manipulations, since the symbols don’t symbolize anything. In the linguistic jargon, they have only a syntax but no semantics.
“The Chinese Room.” Here’s a minute-long explanation of the argument:
Does the Chinese Room Argument Still Work?
Recent reflections on the Chinese Room argument in the age of AI:
Yes, LLMs Can Understand the Words They Use. Last year,
argued the Chinese Room problem has been defeated: LLMs have a rudimentary form of understanding, even if they are not conscious.

Maybe LLMs Are Different From Other Machines. In July, Tatiana Orlova wrote: “LLMs … operate on a scale and depth that complicates Searle’s analysis. These models are built from simple statistical operations, but through their vast parameter space and recursive architecture, something unexpected occurs: the emergence of new qualities that cannot be found in any single component.”
No, LLMs Cannot Understand the Words They Use. From a 2024 paper by Emma Borg: “we should deny that LLMs are in the business of asserting the content expressed by the sentences they produce. LLMs are not sensitive to truth in the way that would be required for them to count as genuine conversational partners or as asserting or making claims.”
From the Crowd
Book recommendations and a debate about right-wing violence.
What to Read, Continued. Readers responded to our request for novel recommendations:
- writes: “I am currently reading M: Son of the Century by Antonio Scurati, and I don’t think that I’ve ever seen this format before in historical novels. It’s a mix of fictional, contemporary documents, shifting perspectives, short chapters but spanning over 700 pages!”
- writes: “How about adding When We Cease To Understand The World, the 2021 non-fiction novel (new genre?) by Chilean writer Benjamin Labatut, to the reading list?”
- suggests we look into the fiction list of .
Right-Wing Violence. Political theory professor
wrote an interesting response to last week’s podcast about assassinations, reacting to comments describing right-wing violence as “defensive”:
[…] I also think that when we think of violence on the ‘right’ (whatever that term means in this conversation) as a mere defence is simply wrong as well. European Colonialism wasn’t a defence of traditional values; it was an active attempt to foster economic and geopolitical supremacy, which is why we saw conflicts between colonial powers from the 18th century onwards. If we’re using Foucault in the conversation, we cannot ignore the power of those institutions, which not only defend specific values but are also highly successful in exporting them. […]

responded here. Read the whole thread.
See you next week!
Wisdom of Crowds is a platform challenging premises and understanding first principles on politics and culture. Join us!