You Can Just Say Things
Are you truly free if you always have to watch what you say?
How important is the freedom not to care — not to have to care — about being overheard? Parent or not, when children are around, you watch what you say. When you speak, you are modeling proper behavior for them, whether you like it or not. It’s just a fact of life. But when only adults are around, we allow ourselves to be frank, or blunt, or to kick back and say whatever. Loose talk. But now the stakes of overhearing are higher. AI is always listening, writes Tyler Cowen
, and we have to be mindful of that fact: “The very smart and talented AIs are listening, much like young children might hear their parents arguing outside their bedroom door late at night.” After taking its cues from what must have been millions of anti-Semitic posts and revisionist World War II videos on X, Grok AI started generating pro-Hitler outputs. The first to be blamed were X’s engineers: they did not train Grok well enough, the story went, and a conjunction of bad — or devious — prompts, combined with the mass of racist detritus posted on X, led to the AI praising Hitler. But Cowen argues that all of us are partially responsible for what happened with Grok. We should all always watch what we say:
[The AIs] read the internet, they read millions of books we have scanned and uploaded to the internet, and they listen to our podcasts. To some unknown extent they may be reading our emails and listening to our Zoom calls—if not now, possibly in the future. They are also very attentive audiences, if I may use an anthropomorphic analogy. … Whether or not you work in the AI sector, if you put any kind of content on the internet, or perhaps in a book, you are likely helping to train, educate, and yes, morally instruct the next generation of what will be this planet’s smartest entities. You are making them more like you—for better or worse.
Again, part of this is simply a matter of growing up: adults should watch what they say (and write, and publish). Adults should take responsibility for the consequences of their actions, and speaking is an action. But at what point does this responsibility become oppressive? At what point does it keep you from experimenting with your thoughts in conversation — with saying things for the sake of argument, or to provoke, or to get a rise out of someone, or merely to joke around? At what point does this responsibility become so great that the ability to speak freely is rendered moot?
Playfulness, role-playing, debating, gossiping, fabulating, yelling, shouting, taking things back, adopting positions for the sake of argument, etc., all belong not only to college dorm bull sessions but also to adults outside of working hours — at least in those parts of the country where people are less obsessed with the online surveillance of their every thought and utterance. If we lose this free space for conversational experimentation, then we’ve lost a part of our autonomy as human beings.
Five years ago, one of the arguments against so-called wokeness was that freedom of speech requires the freedom to experiment with ideas, crack jokes and risk offense. The anti-woke argument was that universities and corporations should not act like the parents of students and employees. But consider what Cowen is suggesting: instead of universities and corporations acting as parents, all of us are now the parents to ever-present AI children. The effect is the same: we have to watch what we say at all times: “It is less likely that you will hear out-of-control AI models spontaneously praising some obscure successor to Genghis Khan, however evil that person might have been. If humans talk a lot about Hitler, and indeed they do, their AIs may be inclined to follow in their footsteps.” So stop talking about him!
Cowen isn’t making a moral or political judgment. He’s simply describing reality as he sees it. It’s just a fact, he says, that we now share a space with other intelligent entities that are trained on our words, that we are in a “complex symbiosis” with LLMs. But that, by itself, is nothing new to most human beings. Most human beings believe in a god who hears what they say and what they think. Most believe in the existence of supernatural entities like angels, and in the surviving presence (in one form or another) of their deceased loved ones. What is new, however, is that now we have to take responsibility for the moral instruction of nonhuman, artificial intelligences. Unlike AI, neither gods nor angels look to human beings for training inputs.
If we take Cowen seriously, then, everything we say and write is either an AI input or will generate a future AI output. We have always been answerable to God on the last day, but we are now also answerable for what Grok does in six months. I don’t mean to make an anti-AI argument here. (I used ChatGPT to help with research for this article!) Cowen is correct to say that we should act responsibly given the new technological reality we live under. But I think we should also take responsibility for the culture we are creating, and for the fate of language — for the scope of its use and its meanings. If we don’t do this, I worry that we might reduce human expression to refined inputs for desired outputs.
A world where all language consists of inputs and outputs is one where we probably won’t speak freely and idly and wildly, for fear of misdirecting Grok once again. We won’t tell crazy stories — for fear of miseducating another AI. It’s a culture where we won’t talk with ambiguity — the essence of artistic expression. “Tell all the truth but tell it slant,” wrote Emily Dickinson; unfortunately, slanted meanings don’t easily compute. This coming culture already has its online advocates: “Fiction is a poor guide to action, because it’s made up,” said one popular intellectual influencer on X. Another major thought leader calls for consumer protection from literature: “Mixed fiction and non-fiction with no warning label on where the transition occurs is the sort of thing that can end with false information stuck in somebody’s head and propagating. Spoilers or not, I’d put a warning sign on it.” Do we really want a culture where these are the rules? Because if Cowen is right, we might be on our way to creating one.