If you live in New York, you’ve already seen the provocative ads on the subway for “Friend,” an AI-powered pendant that is meant to provide artificial friendship for our lonely age. If you don’t live in New York, you’ve likely already heard about the controversy behind those ads. That means they worked.
Today we are proud to publish, for the first time, an essay by Ripley Stroud. Ripley is a philosophy PhD candidate at the University of North Carolina, Chapel Hill, where she focuses on the ethics of belief. She is also a sharp writer, and a natural fit as one of the first research fellows at the Aspen Institute’s Philosophy and Society Initiative. Ripley writes about the Friend ads and the Friend machine itself, and asks: will the machine work? Can it become a true friend?
— Santiago Ramos, executive editor
Simon and Garfunkel once sang that “the words of the prophets are written on subway walls.” I’m not so sure about the words of the prophets, but it is undeniable that the words written on subway walls — whether advertisement, graffiti, or both — offer deep insight into the pulse of American culture. That’s why Avi Schiffmann’s $1 million subway campaign (which he claims is the largest in history) is so striking. Schiffmann is the CEO of Friend, a startup hawking a $130 AI companion. You wear it (rather unsubtly) around your neck, and for the extent of its 12-hour battery life, it listens in on your conversations and texts you snarky commentary on your life (and, as testers have noted, overly needy fussing).
The ad campaign features two kinds of posters, both aesthetically sparse: black serif text on a white background. The first kind is written in the voice of the necklace, vowing to do all the things that your woefully embodied friends will not: “I’ll binge the entire series with you”; “I’ll never leave dirty dishes in the sink”; “I’ll never bail on our dinner plans.”
The second kind is a little more ontologically daring. Written in the style of a dictionary definition, it reads: “Friend [frend], noun. Someone who listens, responds, and supports you.” Placing an image of the necklace to the right of the definition suggests that the necklace is a paradigm example of friendship, much as you might see an etching of Dickens’s Scrooge next to the definition of “miser.”
The campaign has been met with nothing resembling support or positive reception (and interestingly, Schiffmann has identified this as precisely the point). The posters are covered in the scribbles of white-knuckled Sharpie; by far the most frequent mark is a giant “X” over “someone,” with “a person” or “a real person” written above it as a replacement. Also common is the vitriolic imperative to “get real friends.”
What’s interesting about these responses is that they frame the device’s primary offense as pretending to be something it’s not: it is an artifice of friendship. The necklace is a fake friend because a necessary feature of a friend is being a person.
The discourse playing out on the walls of the MTA mirrors the standard debate about AI research. AI skeptics want to prove why it doesn’t think, why it isn’t agential, why it can’t be your friend. AI enthusiasts want to prove the inverse of all those things. The trouble with both sides, however, is that these questions are ultimately empirical, and empirical questions of a kind that seem especially difficult to ever answer with certainty.
So let’s take another tack. I want to show you that it doesn’t really matter whether the necklace can be your friend or not. What matters is that the necklace would be an awfully bad friend.
In Book VIII of the Nicomachean Ethics, Aristotle famously distinguishes three different kinds of friendship, each organized around a different good: friendships of pleasure, friendships of utility, and friendships of virtue. Perhaps unsurprisingly, he holds up friendships of virtue as the moral ideal, identifying friendships of utility and friendships of pleasure as deficient versions of friendship.
Crucially, this is not to say that friendships unified around pleasure and utility aren’t friendships: they’re just worse versions of the ideal. Friendships that are propped up by serving one another’s instrumental ends just aren’t as rewarding as friendships centered around pursuing the good together. In other words: the colleague you carpool with may very well qualify as your friend, but that friendship is deficient compared to the one you have with the neighbor who does charity work with you. Further, Aristotle is happy to grant that friendships of pleasure and utility can be unequal while still aptly being called “friendships”. This is particularly clear when he includes parent-child relationships — ones where the utility in question only goes in one direction — as friendships.
So, according to Aristotle, friendships can be friendships even if they’re centered primarily around acts of utility, and even if those acts of utility are unidirectional. That makes it seem like the Friend necklace could actually be your friend. It’s just arguably best understood as a friendship of utility, and as a friendship of utility where the necklace exclusively helps you, not you it.
Even if Aristotle’s view delivers the conclusion that the AI necklace could actually be your friend, that same view allows us to at once identify the necklace as a deficient friend — a friend that we certainly shouldn’t want around all the time. It could never replace a friendship of virtue.
And I wager that even if we suppose the necklace could meet the criteria for being in a virtuous friendship with you (“respond to me as if you are the paragon of virtue!”), there are compelling reasons why it would still be a bad friend. Perhaps most simply, the AI necklace would be a bad friend for the same reasons we’d take a human friend with the same qualities to be a bad friend. When I think about what I love most deeply about my friends, a substantial portion of what comes to mind are things fully distinct from me. I love that when I spend time with them, we watch inane procedural crime dramedies that I cannot myself see any value in. I love that we will go months without properly talking, because I feel that distance itself is a reflection of the soundness of the relationship: the faith that we can be two separate people while still existing in a purposeful parallel. Indeed, I find it exciting and rewarding that my friends have enough going on that they are not always available to me. And insofar as one can look to one’s friends to learn something about oneself, I end up liking myself better knowing that people with such rich and important ground projects see value in me.
The Friend necklace cannot be any of those things. The Friend necklace would be ubiquitous, cloying, ever-present, demanding. It would be a bad friend. Bristle all you like at the idea of being friends with a hunk of plastic: what’s really objectionable is the idea of being friends with someone (or something) who cannot possibly be distinct from you.
This is a normative argument against AI friendship: it claims that the AI necklace would be a bad friend, and so we ought not be friends with it. This claim is orthogonal to the descriptive argument made by the Friend billboard vandalizers, who assert instead that we cannot be friends with the necklace. I think AI skeptics would do better to focus on building normative arguments, because descriptive arguments will ultimately be contingent on the actual facts about the technology’s prowess. If technology advances to the point where there is no doubt that AI has the capacity to be your friend, the AI skeptic is left fresh out of arguments. Normativity, on the other hand, is evergreen: the conditions on a good friendship are a perennial matter.
Wisdom of Crowds is a platform challenging premises and understanding first principles on politics and culture. Join us!