In the summer of 1942 Robert Oppenheimer reportedly initiated a conversation with his superior—Arthur Compton—about the possibility that the atomic bomb might in fact ignite both the oceans and atmosphere … globally. According to Compton: “Hydrogen nuclei are unstable and they can combine into helium nuclei with a large release of energy…might not the enormously high temperature of the atomic bomb be just what was needed to explode the hydrogen. And if hydrogen, what about the hydrogen in the sea water.”
The preferred language in recent years among AI safetyists—a camp that includes everyone from apocalyptic voices like Eliezer Yudkowsky to the fairly measured co-founders of Anthropic—is “alignment.”
Roughly: AI should be consistent with human values, goals, intentions, and interests. The worry in this camp tends to be about technological independence: either that the technology will start creating extremely volatile and unpredictable scenarios (most famously involving office supplies), or that as AI becomes more intelligent it will engage in agentic forms of deception, to the perhaps extreme detriment—or destruction—of humanity.
I tend to take a moderate position in these debates—they are essentially empirical questions. Might AI end up in a “take-off” scenario in which both capacity and malign intent super-scale? Well, sure, maybe. Or will we end up with all kinds of unintended consequences simply as a function of the power of the technology? Well, yes, that is happening already. But the scale or speed or consequence of these harms sits, I think, in the same category as planetary hydrogen ignition—an entirely empirical question: is the world going to turn out this way? It is possible to be scrupulous about the actual empirical trajectory, and we probably should be, given the potential severity of the harms.
All of this, however, I think is still in the realm of the easy problems of technology.
The Harder One
Over the holidays I ended up falling back into reading Frankenstein. The central passages—the creation of the monster, its innocence and gradual betrayal, the culminating murders—are as incrementally dread-inducing as I remembered.
What I hadn’t quite remembered is how gravitational the first third of the novel feels. In so many of the best tragedies the experience is one of catastrophic decision followed by absolute, inescapable inevitability. So Lear rebuffs Cordelia and everything else tumbles down the slope in predictable cataclysm.
The inevitability of Frankenstein doesn’t really begin the moment the monster opens his eyes, however, but in the first pages of the novel, with the sense that Dr. Frankenstein himself is somehow already possessed by his own utterly inescapable self-obsession. Even if he had been shown perfectly the outcomes of his decisions, he seems entirely helpless to stop himself as he gradually puts in place the steps to create the monster.
The inevitability of Frankenstein’s creation of the agent of his own destruction confronts us, I think, with a question nearly everyone has been avoiding: how our species’ relationship to technology as a whole (not just LLMs) seems very frequently to work. Setting out the premises:
1. Humanity seems only to be able to realize its potential by technological development. At a basic level this has universally undergirded how our species has survived with as much physical fragility as we have—from basic shelter to food preparation to self-defense.

2. But technology also seems entirely intertwined with our relentless pursuit of our higher potentialities—from books, to city planning, to military capacities.
3. As these technological trajectories grow, they not only expand human ability but also amplify competitive dynamics within our species, so that the stakes of not creating grow as well.
4. The benefits of escalating technological stacks grow (art, expanding lifespans), but so too do the possibilities for catastrophic risk.
5. Hence Frankenstein and his monster.
To put the problem simply then: Is humanity’s relationship to technology both inevitable and intrinsically tragic?
There have of course been a significant number of people pressing at these themes well before the age of AI. To take one example, MLK decided to devote his 1964 Nobel lecture not principally to poverty or racial division, but specifically to warning that something about humanity’s relationship to technology seems to be “the most pressing problem confronting mankind today.” As he frames the issue:
There is a sort of poverty of the spirit which stands in glaring contrast to our scientific and technological abundance… So much of modern life can be summarized in that arresting dictum of the poet Thoreau: “Improved means to an unimproved end.” This is the serious predicament, the deep and haunting problem confronting modern man. If we are to survive today, our moral and spiritual “lag” must be eliminated. Enlarged material powers spell enlarged peril if there is not proportionate growth of the soul. When the “without” of man’s nature subjugates the “within”, dark storm clouds begin to form in the world.
In King’s telling he seems to think, at a basic level, that we do actually have agency here: that our attraction to technological development contrasts with an alternative trajectory we could be taking, namely pursuing a science of ends, not just of means, in Thoreau’s sense.
The ends/means distinction seems helpful and I’ll return to it—but I think the problem is more difficult than King allows. The sense of humans as technological beings is somehow electrifying—what if we could just eliminate cancer? What if unhappiness or boredom or depression are just matters of developing more modes of stimulation or pharmacology? Why not envision terraforming other worlds? Or gamifying the whole business of romance?
And furthermore, the question of technology is also social. If we don’t develop the means, some other company will outcompete us for capital. The markets will be punishing. Or at a grander scale—our geopolitical rivals will gain significant advantage and the capacity to rain down wrath from the sky.
In other words, there is both a pulling and pushing mechanism—we are both intrinsically drawn in fascination by the quasi-infinity of creating (τέχνη in Greek means not just “art” or “skill” but “cunning of the hand”) and also punished severely by falling behind.
None of this is to take a sharply negative stance on technology itself. It may well be that 95 percent of what we make is actually beneficial to have in existence vis-à-vis humanity. The question has to do rather with agency. Do we actually have the capacity to decide at all among various possibilities—negative or positive—or are we driven by some strange determinism, social or individual?
Hence, however, the increasing inevitability of the Frankenstein situation—growing knowledge that we should not make what we can’t not make.
On my assessment, the problem I’m mapping is genuinely, not superficially, hard.
On the one hand it is hard contingently—because we live in a trajectory of history that has filtered very much for highly competitive global dynamics, at the level both of the economy and of national security. Once you are embedded in the logic of that system, so many decisions are actually made for you (witness, for example, OpenAI’s recent announcement that it will sell ads).
But it’s also hard at a more intrinsic level—human nature seems both essentially vulnerable (hence technology) and essentially filled with highly expansive desires (hence technology again).
I am also fairly convinced that once we start probing at this problem it will probably come to seem harder at first, not easier. There has been a fair amount of discussion of the frustrating reality of having to make technology simply out of the logic of competition.
But there is much less discussion of how much our notion of progress also seems to be in a strange relationship with agency. On the one hand, developing a new technology expands agency (I can now sit in my house and communicate with thousands and thousands of people). But it also at a different level contracts agency—we now live in a world of social media suicides and viral politicians. Or at a more fundamental level, even if all the top AI frontier labs decided simultaneously to halt their research, the basic technological paradigm of LLMs is now in the wild with all that that implies.
Ends
I don’t think this issue is going to be resolved easily, but I also think that it is helpful to specify that at a fundamental level the most difficult problem of technology is actually a very familiar problem of humanity—what is it that we are or want?
Here I think King is actually very prescient. The activity of deferring primary questions of ends—whether out of fear of conflict or the sense that we simply prefer more pragmatic activity—has left us, predictably, with an era of means in every sense of that word. It’s an era of massive and growing wealth disparity. An era in which we end up developing technologies only to realize their effects later. An era in which we have expanding capacity to realize our desires without really understanding why we should want any specific thing in particular. When the creature finally opens its eyes a third of the way into the novel, Dr. Frankenstein already knows he is looking directly into the mirror:
It was already one in the morning … and my candle was nearly burnt out, when, by the glimmer of the half-extinguished light, I saw the dull yellow eye of the creature open; it breathed hard, and a convulsive motion agitated its limbs.
Here, however, my suspicion is that we will never get out of this trajectory by diminishing our longings. If the problem of technology is actually a problem of humanity, our way out is always and only to go further in.