Editor’s note: We’re thrilled to be publishing Santiago Ramos’ first-ever Monday Note, available below. As many of you know, Santiago is our new full-time Executive Editor, marking a new and exciting era for WoC. If you missed it, his introductory essay “Empathy for the Devil” offers a rousing statement of purpose. I’m biased, but Santiago’s first month has also featured a string of great essays with a larger, more unpredictable cast of characters. My personal favorite—one that resonated almost too profoundly for comfort—was a piercing and quite personal reflection on the risks not of wanting too much but of wanting too little.

With that, and without further ado, I present below a wonderful piece from Santi on how AI was one of Henry Kissinger’s final hopes for humanity, which I suppose is reason enough to be genuinely frightened by what it might mean for democracy—if by democracy we mean the rule of many people rather than the deferral of authority to unelected sources.
—, co-founder, Wisdom of Crowds

One of Henry Kissinger’s final public utterances was a warning about a mind he thought was superior to his own. Unfortunately for him, the mind was not God’s, but that of generative AI, which Kissinger believed would usher in a new age in history. In a column co-authored with Eric Schmidt and Daniel Huttenlocher, Kissinger explained the differences between the time that’s coming and the one that’s passing away:
The truth of Enlightenment science was trusted because each step of replicable experimental processes was also tested, hence trusted. The truth of generative AI will need to be justified by entirely different methods, and it may never become similarly absolute. As we attempt to catch our understanding up to our knowledge, we will have to ask continuously: What about the machine has not yet been revealed to us? What obscure knowledge is it hiding?
Briefly put, in this new era, the authority of science—central to the Age of Enlightenment—will be eclipsed by generative AI. What makes the authority of AI different from that of science is that AI’s methods are obscure. AI’s outputs require more trust than scientific expertise does, because scientific truths can be demonstrated and verified by experimentation, while the exact path of reasoning that yields an AI output is very hard (if not impossible) to trace and reconstruct. We trust science because we can test and confirm its claims. But trust in AI will not be verifiable in that way.
I remembered Kissinger’s article during last week’s brouhaha over Google’s new LLM, Gemini. Immediately after its rollout, the model proved untrustworthy. Given certain prompts, it produced misleading and inaccurate outputs. Designed to value ethnic diversity, Gemini went overboard in promoting it. Social media was soon flooded with examples of its bizarre outputs. When prompted to generate an image of a Viking, it produced one with African features. When asked to create an image of a typical nineteenth-century German, it created one with Asian features. (You can see some examples of Gemini’s inaccurate historical images here and here.) There’s nothing inherently offensive about these images. But it’s alarming that an AI would take the liberty of rewriting history.
In his Sunday column, the New York Times’ Ross Douthat argues that the significance of the Gemini episode—whether it is a harbinger of serious problems or just another story of corporate incompetence—depends on what AI might become. But Kissinger’s prophecy suggests another angle to the story: a threat to democracy. An artificial superintelligence would, to say the least, pose many challenges to humanity. But even a moderately powerful AI—one that produces more than spam and term papers on demand, but doesn’t quite rise to the supercomputer of sci-fi dreams—would challenge democracy as an idea.
Essential to democracy is a bet on people. Democratic rule hinges on the wager that people can be trusted with the power to deliberate and choose between goods, write just laws, and condemn certain evils. Under democratic rule, even the execution of policies and moral actions comes under public scrutiny. But what happens if a moderately powerful AI is widely perceived as an authoritative source of technical and even moral knowledge? At what point will the argument be made that, because the computing capacity of AI outstrips that of the people, the people should no longer rule?
If the ruling class is tempted by technocracy today, then as AI improves, that temptation could become irresistible. If, as recent Wisdom of Crowds guest Jason Blakely puts it in We Built Reality, technocracy is a society where “a person might be morally entitled to choose the ultimate ends of society but must leave determining how to achieve those ends to the real experts, the technocrats,” then we might be headed toward a society where those technocrats draw their authority not only from their expertise but also from AI. If that’s the case, how might popular rule stack up against a machine that provides confident answers to political, technical, and even moral questions?
Once “Trust the experts” becomes “Trust the machine,” we will have returned to a way of life closer to that of our pre-Enlightenment ancestors than any of our technofuturists expected. On momentous occasions, when confronted with pivotal choices, ancient kings and emperors would appeal to an oracle, a priestess, a temple. Future leaders might well do something very similar.
This post is part of our collaboration with the University of Pittsburgh’s Center for Governance and Markets.