Welcome to CrowdSource, your weekly guided tour of the latest intellectual disputes, ideological disagreements and national debates that piqued our interest (or inflamed our passions). This week: the rollercoaster of the hype cycle.
Instead of showing all sides, this week we’re going to try to cut through the deafening noise surrounding two stories dominating our public life at the moment.
Framing thought
Pumping the brakes on AI accelerationism
There is plenty of overheated, millenarian speculation about AI out there, feeding a vicious hype cycle that ebbs and flows. Last week’s maelstrom was particularly bad, swirling around a newly launched social network for AI bots called Moltbook. A thousand thought-pieces bloomed.
But, as it turns out, it was all a hoax.
1. MIT Technology Review summarizes the evidence that Moltbook was not the first terrifying sign of any kind of autonomous intelligence:
Not only is most of the chatter on Moltbook meaningless, but there’s also a lot more human involvement than it seems. Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy.
Key point:
Moltbook looks less like a window onto the future and more like a mirror held up to our own obsessions with AI today. It also shows us just how far we still are from anything that resembles general-purpose and fully autonomous AI.
2. Cui bono? Shlomo Klapper, an entrepreneur who builds AI legal software, accuses AI labs (Anthropic chief among them) of hyping anthropomorphism to encourage us to emotionally engage with their chat products:
Last month, the company’s CEO Dario Amodei published a 19,000-word essay on AI risk. Buried in the technical discussion is a revealing passage. According to Amodei, Claude’s “fundamental mechanisms … originally arose as ways for it to simulate characters in pretraining, such as predicting what the characters in a novel would say.” The constitution that governs Claude’s behavior functions as “a character description that the model uses to instantiate a consistent persona.”
Anthropic’s own CEO is telling you how the system works: Claude is a character simulator. The character it currently simulates is “an entity contemplating its own consciousness.”
Pretraining teaches Claude to predict text. Post-training, in Amodei’s words, “selects one or more of these personas” rather than creating genuine goals or experiences. Neither step requires consciousness. Neither step produces it. The relationship between training phases is mathematical optimization, not the emergence of phenomenal experience from matrix multiplication.
3. Kai Williams unpacks just how — and why — the “personhood” of these personality simulators is baked in by the frontier labs:
Every LLM you’ve interacted with began its life as a base model. That is, it was trained on vast amounts of Internet text to be able to predict the next token (part of a word) from an input sequence. … Base models learn to understand and mimic the process generating an input.
[…]
While this mimicry is impressive, base models are difficult to use practically. If I prompt a base model with “What’s the capital of France?” it might output “What’s the capital of Germany? What’s the capital of Italy? What’s the capital of the UK?...” because repeated questions like this are likely to come up in the training data.
However, researchers came up with a trick: prompt the model with “User: What’s the capital of France? Assistant:”. Then the model will simulate the role of an assistant and respond with the correct answer. … Just telling the model to role-play as an “assistant” is not enough, though. The model needs guidance on how the assistant should behave.
In late 2021, Anthropic introduced the idea of a “helpful, honest, and harmless” (HHH) assistant. An HHH assistant balances trying to help the user with not providing misleading or dangerous information. At the time, Anthropic wasn’t proposing the HHH assistant as a commercial product — it was more like a thought experiment to help researchers reason about future, more powerful AIs. But of course the concept would turn out to have a lot of value in the marketplace.
Keep this backstory firmly in mind when you try to make sense of Anthropic’s grandiose-sounding Constitution for Claude.
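Williams’s “assistant trick” is easy to see for yourself. Here is a minimal sketch using a small open base model (GPT-2 via the Hugging Face transformers pipeline; the model choice and the prompts are our illustration, not his):

```python
# A minimal sketch of the "assistant trick": the same base model, with and
# without dialogue framing. GPT-2 and these prompts are illustrative choices.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A bare question: a base model often just continues the pattern with more
# questions, since that is what similar stretches of training text look like.
print(generator("What's the capital of France?",
                max_new_tokens=20)[0]["generated_text"])

# Framed as a dialogue, the model is nudged to play the assistant and answer.
prompt = "User: What's the capital of France?\nAssistant:"
print(generator(prompt, max_new_tokens=20)[0]["generated_text"])
```

Everything a chat model “is” sits in that framing; the HHH spec and Claude’s Constitution are, on this view, elaborations of the role the prompt asks the model to play.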
4. These infernal machines have other deficiencies that make them fall far short of actual agency. As Tencent’s researchers point out, they can’t learn in real time:
Over the past few years, language models have become astonishingly capable. Frontier systems can now solve International Mathematical Olympiad problems, navigate complex coding challenges, and pass professional exams that take humans years to prepare for. They excel at taking tests, spinning out long chains of reasoning to win at benchmarks.
But as impressive as these feats are, they obscure a simple truth: being a “test-taker” is not what most people need from an AI.
Look at our own daily work. A developer skims documentation for a tool they’ve never seen and immediately starts debugging. A player picks up a rulebook for a new game and learns by playing. A scientist sifts through complex experimental logs to derive a new theorem from fresh data.
In all these cases, humans aren’t relying solely on a fixed body of knowledge learned years ago. We are learning, in real-time, from the context right in front of us.
Current language models do not handle context this way. They rely primarily on parametric knowledge—information compressed into their weights during massive pre-training runs. At inference time, they function largely by recalling this static, internal memory, rather than actively learning from new information provided in the moment.
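The “frozen weights” claim is checkable directly: feed a model a brand-new fact in context and its parameters do not move. A hedged sketch (the small model and the made-up “Zorblatt Prize” fact are ours, purely for illustration):

```python
# Weights are frozen at inference: in-context information can shape one
# reply, but nothing is written back into the parameters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Snapshot one weight matrix before "using" the model.
before = model.transformer.h[0].attn.c_attn.weight.clone()

# A brand-new (fictional) fact, supplied only in the prompt.
prompt = ("Fact: the Zorblatt Prize was founded in 2031.\n"
          "Q: When was the Zorblatt Prize founded?\nA:")
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=8,
                         pad_token_id=tok.eos_token_id)
print(tok.decode(out[0]))

# The weights are bit-for-bit identical: no learning happened.
print(torch.equal(before, model.transformer.h[0].attn.c_attn.weight))  # True
```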
5. More harshly put, AGI is still likely very far away — if it comes at all:
Tell it your name and it does not remember. The only reason it looks like memory is because scaffolding keeps shoving your name back into the prompt every time and sanitizing the output.
The model itself has no idea who you are and cannot learn from interaction. It is structurally incapable.
And the scaffolding is the worst part. It is pure duct tape. Just prompts on prompts on prompts around a frozen model. When something breaks, nobody fixes learning. They add another layer. Another rule. Another retry. Another evaluator model judging the first model.
So you end up with systems that are insanely complex but mentally shallow. Debugging is hell because behavior comes from hack interactions, not a learnable core. Tiny prompt tweaks cause wild behavior shifts. Latency goes up. Costs go up. Reliability goes down. None of this compounds into intelligence. It just hides the cracks.
Until we have real persistent learning and real memory inside the system, there is no AGI.
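The scaffolding being described reduces to a few lines. In this hedged sketch (the names and structure are ours, not any particular vendor’s), “memory” is just the transcript, re-sent on every turn to a stateless model:

```python
# A sketch of the scaffolding pattern: the model is stateless, so the wrapper
# replays the entire transcript on every call. `llm_call` stands in for any
# text-in/text-out model endpoint (illustrative, not a real API).

def make_chat(llm_call):
    history = []  # the only "memory" lives out here, in plain text

    def chat(user_message):
        history.append(f"User: {user_message}")
        # Shove the whole conversation back into the prompt each time;
        # the model itself learns nothing between calls.
        prompt = "\n".join(history) + "\nAssistant:"
        reply = llm_call(prompt)
        history.append(f"Assistant: {reply}")
        return reply

    return chat
```

Tell it your name on turn one and it “remembers” on turn two only because the wrapper pasted the name back into the prompt; clear the history list and the recall vanishes.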
Pumping the brakes on Epstein maximalism
Unlike with the overheated, apocalyptic rhetoric surrounding AI, it’s important to be at least superficially circumspect about the ultimate truth of the Epstein case. After all, while the Singularity may prove to be disastrous for life as we know it, it’s still very much in the realm of science fiction. The Epstein case involved real horrors, so minimum decency demands some level of rhetorical forbearance.
Still, as with all conspiracy theories, the lurid allegations can be exasperating. Epstein was running an extortion ring on the rich and the powerful? For Mossad (ahem, the Jews)? For the Russians?
Anyway…
1. Gilad Edelman, with the most polite version of the BLUF (bottom line up front) — beyond Epstein’s own criminal perversions, there’s not much to any of the conspiracy theories:
Epstein is known to have paid or coerced dozens, possibly hundreds, of teen girls, some as young as 14, to perform sexual acts. Beyond what has been proved, the conventional wisdom holds that Epstein built his network by trafficking teen girls to other powerful men, whom he then blackmailed to generate his mysterious wealth; that his private plane and island were essentially brothels; and that even friends who didn’t participate in his crimes were surely aware of them, and chose to consort with him anyway.
Those assumptions are all widely held—but poorly substantiated. The Epstein files reveal plenty of powerful people to have tolerated or participated in disgusting and shameful behavior. Far from shedding light on a grand conspiracy, however, the files bolster the case that although terrible crimes were committed, there never was a larger conspiracy to begin with.
2. Michael Tracey is less polite and more pungent:
“We are aware of the theories circulated in the media and online that Epstein video recorded the abuse of his victims, including by other men, but we have found no evidence to support that theory. Indeed, had we found such videos, we certainly would have used them as evidence in the criminal cases we investigated and prosecuted and would have pursued any leads they generated. We did not, however, locate any such videos.”
“During the searches of Epstein's New York residence and USVI residence in 2019, the FBI searched for videos and surveillance cameras. My understanding from the case agent is that there were no cameras found inside any bedrooms or living areas of either residence.”
Also worth reading Tracey on how thorough the Feds were with some
3. Matthew Schmitz a few months back on why the Jewish angle remains so compelling to so many:
At the heart of the Epstein myth is the idea that an international conspiracy, notably involving Jews, relies on the destruction of innocence and the practice of blackmail to advance its power. This tradition presents sexual abuse of minors as a typically Jewish crime, an assault on innocence that amounts to ritual murder.
This trope emerged in a recent conversation between Tucker Carlson and the podcaster Daryl Cooper. Carlson claimed that Epstein was part of “a blackmail operation run by the CIA and the Israeli intel services, and probably others. . . . The usual darkest forces in the world colluding to make rich and powerful people obey their agenda.” Cooper added that Epstein’s crimes (as he perceives them) should be understood as a ritual practice that confers power on those who perform it: “Throughout history, people have looked at that as something that confers power. That’s what child sacrifice is.”
The idea that our elites are united in a pedophilic cabal has several recent precedents, including the satanic panic of the 1980s and the QAnon phenomenon during the first Trump term. But to understand the full significance of this exchange, one must revisit the history of the blood libel.
4. Jason Willick a few weeks back showed the damage that releasing the trove of Epstein files has wrought — and the complicity of august publications in maximizing the fallout:
One unlikely account sent in after Epstein’s 2019 arrest says, “I was taken to an underground location … I was kept in a stall, it looked like a horse stall.” The fact that the email appears to be completely unverified doesn’t stop it from being circulated online with the imprimatur of the “Epstein files.” The New Yorker published an article describing a letter in the files purportedly sent by Epstein to imprisoned sexual abuser Larry Nassar. The forged letter made reference to President Donald Trump. The New Yorker acknowledged that it was probably fake, but averred: “The case for this President’s indecency hardly requires putting a dubious letter into evidence.” How’s that for a drive-by smear?
[…]
As for previously unknown participants in Epstein’s crimes, none have been identified. If the Justice Department had enough evidence to prosecute others, it presumably would have done so (either in Attorney General William P. Barr’s Justice Department, or Merrick Garland’s, or Pam Bondi’s). If other people are now under criminal investigation, their identities can be withheld for that reason, per the legislation. So if anyone’s reputation is tarnished in this process, it will most likely be for noncrimes.
What the heck is going on?
1. Populism, baby! Matthew Walther’s 2019 essay on Pizzagate remains a kind of Rosetta Stone for understanding the logic of our moment. We are all low-information voters now:
We should keep all of this in mind the next time we feel inclined to sneer at so-called “low-information voters,” especially the kookier sort. You know the people I mean. Wackos. Gun nuts. 8channers. Conspiracy theorists in Middle America who watch InfoWars (one of the few journalistic outlets to discuss the issue of pedophilia regularly) and post about QAnon and “spirit cooking” and the lizard people. The news that a globalized cabal of billionaires and politicians and journalists and Hollywood bigwigs might be flying around the world raping teenaged girls will not surprise them in the least because it is what they have long suspected. For the rest of us it is like finding out that the Jersey Devil is real or turning on cable news and finding Anderson Cooper and his panel engaged in a matter-of-fact discussion of Elvis’s residence among the Zixls on the 19th moon of Dazotera.
2. Nick Land is not someone to take too seriously, but his “accelerationism” is not bad as shorthand for the disordered thinking that increasingly besets us:
In philosophical terms, the deep problem of acceleration is transcendental. It describes an absolute horizon – and one that is closing in. Thinking takes time, and accelerationism suggests we’re running out of time to think that through, if we haven’t already. No contemporary dilemma is being entertained realistically until it is also acknowledged that the opportunity for doing so is fast collapsing.
Instead of celebrating this as Land and his misguided followers do, maybe it’s best we strive to cultivate ultimate skepticism.
Wisdom of Crowds is a platform challenging premises and understanding first principles on politics and culture. Join us!