In the summer of 1942, Robert Oppenheimer reportedly initiated a conversation with his superior, Arthur Compton, about the possibility that the atomic bomb might in fact ignite the atmosphere, and even the oceans, globally. According to Compton: “Hydrogen nuclei are unstable and they can combine into helium nuclei with a large release of energy…might not the enormously high temperature of the atomic bomb be just what was needed to explode the hydrogen. And if hydrogen, what about the hydrogen in the sea water.”
The preferred language in recent years from AI safetyists, a camp that includes everyone from apocalyptic voices like Eliezer Yudkowsky to the fairly measured co-founders of Anthropic, is “alignment.”
Roughly: AI should be consistent with human values, goals, intentions, and interests. The worry in this camp tends to be guarding against technological independence: either that the technology will start creating extremely volatile and unanticipated scenarios (most famously involving office supplies), or that, as AI becomes more intelligent, it will engage in agentic forms of deception, perhaps to the extreme detriment, or even the destruction, of humanity.
I tend to take a moderate position in these debates; they are essentially empirical questions. Might AI end up in a “take-off” scenario in which both capacity and malign intent super-scale? Well, sure, maybe. Might we end up with all kinds of unintended consequences simply as a function of the power of the technology? Yes, that is happening already. But the scale, speed, and consequence of these harms sit, I think, in the same category as planetary hydrogen ignition: it is entirely an empirical question whether the world will turn out this way. It is possible to be scrupulous about the actual empirical trajectory, and we probably should be, given the potential severity of the harms.
All of this, however, I think is still in the realm of the easy problems of technology.
The Harder One
Over the holidays I ended up falling back into reading Frankenstein. The central passages (the creation of the monster, its innocence and gradual betrayal, the culminating murders) are as incrementally dread-inducing as I remembered.
What I hadn’t quite remembered is how gravitational the first third of the novel feels. In so many of the best tragedies the experience is one of catastrophic decision followed by absolute, inescapable inevitability. So Lear rebuffs Cordelia, and everything else tumbles down the slope in predictable cataclysm.
The inevitability of Frankenstein doesn’t really begin the moment the monster opens his eyes, however, but in the first pages of the novel, with the sense that Dr. Frankenstein himself is somehow already possessed by his own utterly inescapable self-obsession. Even if he had been shown perfectly the outcomes of his decisions, he seems entirely helpless to stop himself as he gradually puts in place the steps to create the monster.
The inevitability of Frankenstein’s creation of the agent of his own destruction seems to me to confront us with a question that I think nearly everyone has been avoiding: how does our species’ relationship to technology as a whole (not just LLMs) so frequently seem to work? Setting out the premises:
1. Humanity seems only to be able to realize its potential through technological development. At a basic level this has universally undergirded how our species has survived with as much physical fragility as we have, from basic shelter to food preparation to self-defense.
2. But technology also seems entirely intertwined with our relentless pursuit of our higher potentialities, from books, to city planning, to military capacities.
3. As these technological trajectories grow, they not only expand human ability but also intensify the competitive dynamics within our species, so that the stakes of not creating grow as well.
4. The benefits of escalating technological stacks grow (art, expanding life spans), but so too do the possibilities for catastrophic risk.
5. Hence Frankenstein and his monster.
To put the problem simply, then: Is humanity’s relationship to technology both inevitable and intrinsically tragic?
There have of course been a significant number of people pressing at these themes well before the age of AI. To take one example, MLK decided to devote his 1964 Nobel lecture not principally to poverty or racial division, but specifically to the warning that something about humanity’s relationship to technology seems to be “the most pressing problem confronting mankind today.” As he frames the issue:
There is a sort of poverty of the spirit which stands in glaring contrast to our scientific and technological abundance… So much of modern life can be summarized in that arresting dictum of the poet Thoreau: “Improved means to an unimproved end.” This is the serious predicament, the deep and haunting problem confronting modern man. If we are to survive today, our moral and spiritual “lag” must be eliminated. Enlarged material powers spell enlarged peril if there is not proportionate growth of the soul. When the “without” of man’s nature subjugates the “within”, dark storm clouds begin to form in the world.
In King’s telling, he seems to think that at a basic level we do actually have agency here: our attraction to technological development contrasts with an alternative trajectory we could be taking, namely pursuing a science of ends and not just of means, in Thoreau’s sense.
The ends/means distinction seems helpful, and I’ll return to it. But I think the problem is more difficult than King allows. The sense of humans as technological beings is somehow electrifying: What if we could just eliminate cancer? What if unhappiness or boredom or depression are just matters of developing more modes of stimulation or pharmacology? Why not envision terraforming other worlds? Or gamifying the whole business of romance?
And furthermore, the question of technology is also social. If we don’t develop means, some other company will outcompete us for capital; the markets will be punishing. Or, at a grander scale, our geopolitical rivals will gain significant advantage and the capacity to rain down wrath from the sky.
In other words, there is both a pulling and a pushing mechanism: we are intrinsically drawn in fascination by the quasi-infinity of creating (τέχνη in Greek means not just “art” or “skill” but “cunning of the hand”) and also punished severely for falling behind.
None of this is to take a sharply negative stance on technology itself. It may well be that 95 percent of what we make is actually beneficial to have in existence vis-à-vis humanity. The question has to do rather with agency. Do we actually have the capacity, relative to the various possibilities, negative or positive, to decide at all, or are we driven by some strange determinism, social or individual?
Hence, however, the increasing inevitability of the Frankenstein situation—growing knowledge that we should not make what we can’t not make.
On my assessment, the problem I’m mapping is genuinely, not superficially, hard.
On the one hand it is hard contingently, because we live in a trajectory of history that has filtered very strongly for highly competitive global dynamics at the level of both economy and national security. Once you are embedded in the logic of that system, many decisions are effectively made for you (witness, for example, OpenAI’s recent announcement that it will sell ads).
But it’s also hard at a more intrinsic level—human nature seems both essentially vulnerable (hence technology) and essentially filled with highly expansive desires (hence technology again).
I am also fairly convinced that once we start probing at this problem, it will initially come to seem harder, not easier. There has been a fair amount of discussion of the frustrating reality of having to make technology simply out of the logic of competition.
But there is much less discussion of how much our notion of progress also seems to be in a strange relationship with agency. On the one hand, developing a new technology expands agency (I can now sit in my house and communicate with thousands and thousands of people). But it also at a different level contracts agency—we now live in a world of social media suicides and viral politicians. Or at a more fundamental level, even if all the top AI frontier labs decided simultaneously to halt their research, the basic technological paradigm of LLMs is now in the wild with all that that implies.
Ends
I don’t think this issue is going to be resolved easily, but I also think that it is helpful to specify that at a fundamental level the most difficult problem of technology is actually a very familiar problem of humanity—what is it that we are or want?
Here I think King is actually very prescient. The habit of deferring primary questions of ends, whether out of fear of conflict or the sense that we simply prefer more pragmatic activity, has left us, predictably, with an era of means in every sense of that word. It’s an era of massive and growing wealth disparity. An era in which we develop technologies only to realize their effects later. An era in which we have expanding capacity to realize our desires without really understanding why we should want any specific thing in particular. When the creature finally opens its eyes a third of the way into the novel, Dr. Frankenstein already knows he is looking directly into the mirror:
It was already one in the morning … and my candle was nearly burnt out, when, by the glimmer of the half-extinguished light, I saw the dull yellow eye of the creature open; it breathed hard, and a convulsive motion agitated its limbs.
Here, however, my suspicion is that we will never get out of this trajectory by diminishing our longings. If the problem of technology is actually a problem of humanity—our way out is always and only to go further in.
Thanks for this really nice piece, Sam. In some ways, the LLM discussion increasingly reminds me of Jurassic Park, when the creator of the park, John Hammond, and the scientist, Ian Malcolm, clash over the ability to control the park, and even the desire to. Hammond, seeking to make a massive buck, uses the epic science of pseudo-genetics to create a fantastical place which appears to transcend anything that has come before. It is only when Malcolm begins to question Hammond’s very assumptions and his longing for expansive control and evolution that we see things begin to fall apart. The unanticipated human failings of greed, clumsiness, and lack of genetic control create a downstream cascade of catastrophe, which eventually leads to the park being abandoned.
I think the hope of expansive abundance is seen as good because we assume that things bring us other things. It almost reminds me of the fraudster Jack Abramoff's argument that God wanted us to be wealthy. Like televangelists who con their parishioners into giving them tithes to support their elaborate lifestyles, we assume that cash will give us access to a partner, new experiences, and ergo a 'better life'.
Reading Knut Hamsun’s Hunger, re-reading Crime and Punishment and embarking on The School of Night all remind me of the kind of problem you’re teasing out here. Just how far are we willing to go to embrace our desires, and at what point do we recognise that what we believe we want perhaps is not what we truly want? I tend to agree with your last point, though, that we cannot go backwards. Philosophers such as Rousseau have acknowledged this truth, and when it has been attempted (by people such as Saint-Just) it has ended in anarchy, bloodshed and tyranny.
On the question of agency, I have no doubt that there are many areas where technology diminishes it. At the college level, reading countless essays that are all the same is not the cultivation of agency but its annihilation. Sure, students may be able to produce poor-quality work at an unbelievably quick rate, so they can do other things, but this diminishes their ability to fulfil themselves in the future. It also lessens their spirit and denies them the opportunity to push their own boundaries to gain a better understanding of what they truly desire. If you never try, then how can you know if you ever want to?
In this case, I am less of the opinion that AI represents a reach for a great frontier than that all frontiers are now believed to have been surpassed. It is the epitome of laziness and boredom that is leading to reliance upon this technology. We are slumbering into a new age of stupidity in which we are at liberty to engorge ourselves without realising the consequences. The risk seems to me less the ability to do something catastrophic than the prospect of not doing anything at all in the medium run.
You remind me of a piece that Alan Jacobs wrote a while back. It has a lot going on in it, but the motif that has stuck with me is the part where he draws on a Daoist utopia in which technologies exist but are not used.
https://www.thenewatlantis.com/publications/from-tech-critique-to-ways-of-living
There is something challengingly witty about the very idea of a society that has the ability to speed itself up, and yet doesn’t. How hard can it be to not do a thing? Easy, one might think: you simply don’t do it. But Daoism is a perpetual reminder of the unexpected power of negative space. Not doing things is in fact a difficult art.
It’s also a little mind-boggling to realise that the technologies being renounced are (by our standards) primitive and preindustrial! Returning to knotted cords as an accounting system? Nobody tell this man about computers…
As often happens with these kinds of recommendations, I find more value in seeing the existence of the negative space than in insisting on its use in all situations. The freedom not to do a thing can be hard to see, and correspondingly powerful to have in your toolkit, even if sometimes you’d still choose to act.
I quite like a lot of technology; to take a simple example, I’m unequivocally in favour of kitchen appliances that enable food production as a craft/art, alongside more control of what we eat. Making food for people is also a traditional act of love. What’s not to like? On the other hand, like many, I find my smartphone more ambiguous, and, also like many, I still use it constantly. I can say without regret, however, that I don’t have a car. This is a highly situational choice based on where I live and where I most often need to go, but for me not having a car is a freedom I am glad to be able to have.
Many of the most obviously positive uses of technology are precisely those that enable genuinely creative work. Technologies that help you make things for yourself and people you know can be very nice! They get nicer when paired with technical skill; a sewing machine paired with the expertise (and time) necessary to make your own clothes is a ticket to both personal self-expression and utilitarian quality of clothing that can be hard to achieve in any other way save, I suppose, being even more wealthy and paying a tailor. Musical instruments are nice; again, they require skill. Kitchen appliances have already been mentioned.
Using technology positively is a virtue in many classic ways: dependent on both skill and self-restraint, surprisingly easy to justify on purely selfish grounds, and rather awkwardly aristocratic to recommend, if you’re not careful how you do it. Not that such instincts have always been incompatible with poverty; my mother, as a young woman, aspired to a bicycle and a sewing machine, on the grounds that a car might be beyond her means. But industrially made clothes are cheaper than they used to be, and good-quality fabric is more expensive, so I suspect her aspirations would no longer carry the sense of economy that they once did.
Personal virtue hardly seems like a complete answer to the social problems of technological acceleration, however. Given enough self-control and personal freedom—including money—a person can arrange a life in which technologies are largely employed to improve life rather than drain it of meaning. But this often involves at least a partial relinquishing of further power: the time taken to live a little more slowly is time not taken to earn even more money and pursue social influence. On a personal level I think it’s probably worth it. On a social level, many people will decide it’s not worth it, or else fail at it, or not even see that they have the choice. Those with the most power and influence and meritocratic “merit” are likely to be among these. And as you note, when it comes to political or economic competition, whether within countries or between them, the threat of being overpowered gives a kind of futility to many kinds of renunciatory wisdom.
Then, too, that money already mentioned: does it come in part from an unequal society in which some simply do not have the power to arrange their lives more comfortably? Quite possibly. The path I am outlining/recommending/following-with-mixed-success is a middle path of middle class restraint. Societies can make it more or less visible/possible, but given that the American middle class isn’t exactly in great shape it’s not obviously plausible to make it dominant there.
Unlike Alan Jacobs’ anarchism, the nearest political match to the technological attitude that I am describing would be, I suspect, a bread-and-roses democratic socialism, paired with the kind of recommendation of personal virtue that would have been characteristic of, say, the progressive Christian socialists of the early twentieth century. But if LLMs mean that most people just don’t have the power to get the resources they need, I’m not sure any kind of democratic socialism can really get off the ground. Renunciatory anarchism might be slightly more likely, even! Still, I must admit, I do sort of like the idea of bread and roses and self-improvement for everybody.