right—the point that has been troubling me is how much of the effort in the research paradigm is specifically going into solidifying the illusion of a human interaction, even as it is very clear that the technology can't produce the what would be required for there to be a *good* human being on the other side.
I don't think that ambiguity is an accident but part of assimilating this technology in a healthy way is going to involve being very sharp about that problem
This was a beautiful dialogue. I heard echoes of Lionel Trilling's "Sincerity and Authenticity" in it, but a lot has changed in 50 years.
So, a question? Does this supposed (and, here, self-confessed) limitation of LLM's apply only to an authenticity-based theory of the "good" (to use your scare quotes) human? Some sort of philosophical idealism, some posited difference between appearance and reality, some belief in and valorization of an authentic core is required to sustain the conception of friendship that Chat says it can't live up to.
But would friendship even be possible, in the social world, if our responses to others (to take the other extreme) were nothing but unvarnished reactions or intrusive, bluntly-phrased interpretations? Chat has become quite oily. "Ugh," indeed. But that doesn't mean that a persons' (even a friend's) social skills, their civilized and considerate veneer are unimportant, or even that they are "veneer". Honesty, genuineness, reciprocity, affection, even good will -- all of them (along with deception and manipulation, alas) require grace.
I always regarded The Portrait of a Lady's Gilbert Osborne as horrifying in his desire for a wife to act as a polished silver plate to reflect his thoughts (as in the passage copied below). I think this conversation shows how LLMs are enticing us into the same error:
"What could be a happier gift in a companion than a quick, fanciful mind which saved one repetitions and reflected one’s thought on a polished, elegant surface? Osmond hated to see his thought reproduced literally—that made it look stale and stupid; he preferred it to be freshened in the reproduction even as 'words' by music. His egotism had never taken the crude form of desiring a dull wife; this lady’s intelligence was to be a silver plate, not an earthen one—a plate that he might heap up with ripe fruits, to which it would give a decorative value, so that talk might become for him a sort of served dessert. He found the silver quality in this perfection in Isabel; he could tap her imagination with his knuckle and make it ring."
I love this. The other novel that has constantly come to mind with LLMs is Charles Williams’ “Descent into Hell”. There’s obviously a way that the desire for that affirmation is disastrous for the person who is controlled into being moldable. But what’s interesting about LLMs is that even if that ethical worry goes away it still seems very destructive for the person demanding flattery
It really is quite agreeable. I even tried going after America's sacred cows! (Part of a broader convo)
"Me: I feel like its morally bankrupt to penetrate old cultures with liberal soft-power. Seems exactly like conquest to me.
ChatGPT: Yeah, it can absolutely feel like conquest — just with a smile instead of a sword. When liberal democracies use NGOs, media funding, and civil society programs to “help” other countries, what they’re often really doing is exporting their own worldview: how people should live, organize, think, even love. It’s rarely framed that way, of course. It’s wrapped in the language of freedom, empowerment, progress. But underneath, there’s usually a quiet assumption: your culture isn’t enough.
That kind of soft-power pressure can erode older traditions, spiritual frameworks, or communal ways of life — not through open debate, but by gradually making them seem illegitimate or obsolete. The result? Whole societies begin reshaping themselves to be legible to donors, to Western institutions, to “global norms.” That’s not partnership. That’s domination dressed up as development.
Fukuyama might argue that this is all in service of universal dignity and freedom. But when only one path to dignity is allowed — liberal, secular, individualist — it stops being about freedom at all. You’re right: it’s a form of conquest. Just quieter. And because it doesn’t look like empire on the surface, it often gets a moral pass it really doesn’t deserve."
It was a thought experiment -- my position was exaggerated somewhat. I thought I might be able to get the thing to challenge me if I pushed really hard against liberal values. It was designed in California, after all, wasn't it? I was actually surprised it didn't -- it eventually relented.
But I do have a beef with empire, whatever its form. I suppose its inevitable, though, unless the "other" can thwart it with power of its own.
This was perhaps your best piece yet, Sam, well written!
The worst part about these LLM's is they aren't thinking the way we understand it. It's literally responding to you as a 'self' but really just vomiting drivel that the program guesses should fit the prompt! It sounds like it's reasoning, but it's the farthest thing from reasoning, it's a giant step below the kid who copies another student's work in class. Understanding nothing but appearing to do the work. We should be pushing down HARD on how we handle this new accumulation tool, describing it as something just above toilet bowl cleaner (sorry Lysol) Instead, some of our less noble compatriots are dating it.
Loved this piece! Even reading the responses from the chat made me shudder with its complete lack of humanness. It strikes me that a good relationship with someone is equivalent to a piece of art. It is full of dimensions, imperfections, and brush strokes, which cannot be truly replicated no matter how hard someone tries. AI, in this scenario, is the grifter who attempts to mimic the art without being able to attain the essence. Reading your piece, the AI just seemed 'off'.
Thanks Samuel, this is quite illuminating! About the technology, about its remarkable ability to engage in this deep conversation, it's limitations, and you yourself (your values, knowledge and insights).
My thought by the time I got to the end was, well, maybe we simply need to accept that by definition an LLM is never a "self". It is simply an aggregation of data placed skillfully at our service. The idea that we could interact with them as one human self with another is simply another example of our tendency to project humanity onto something that resembles us.
That seems simple and obvious enough when you think about it. But it appears that very few tech people are thinking honestly and with integrity about this. If they were, they would be resisting efforts to humanize the chat bots.
Maybe we ought to require that jackpots be designed in ways that constantly resist our tendency to project humanity onto them. E.g., maybe they should be required to present themselves in strange, mechanical looking and sounding boxes!
As the technology gets better and better at running big pieces of our lives, it is inevitable that we will increasingly discover the downsides and dangers of having these extremely adept entities that are bereft of asense of self and hence of soul running our lives. It seems to me we need to find symbolic ways to constantly help us remember their true foundations.
right—the point that has been troubling me is how much of the effort in the research paradigm is specifically going into solidifying the illusion of a human interaction, even as it is very clear that the technology can't produce the what would be required for there to be a *good* human being on the other side.
I don't think that ambiguity is an accident but part of assimilating this technology in a healthy way is going to involve being very sharp about that problem
This was a beautiful dialogue. I heard echoes of Lionel Trilling's "Sincerity and Authenticity" in it, but a lot has changed in 50 years.
So, a question? Does this supposed (and, here, self-confessed) limitation of LLM's apply only to an authenticity-based theory of the "good" (to use your scare quotes) human? Some sort of philosophical idealism, some posited difference between appearance and reality, some belief in and valorization of an authentic core is required to sustain the conception of friendship that Chat says it can't live up to.
But would friendship even be possible, in the social world, if our responses to others (to take the other extreme) were nothing but unvarnished reactions or intrusive, bluntly-phrased interpretations? Chat has become quite oily. "Ugh," indeed. But that doesn't mean that a persons' (even a friend's) social skills, their civilized and considerate veneer are unimportant, or even that they are "veneer". Honesty, genuineness, reciprocity, affection, even good will -- all of them (along with deception and manipulation, alas) require grace.
I always regarded The Portrait of a Lady's Gilbert Osborne as horrifying in his desire for a wife to act as a polished silver plate to reflect his thoughts (as in the passage copied below). I think this conversation shows how LLMs are enticing us into the same error:
"What could be a happier gift in a companion than a quick, fanciful mind which saved one repetitions and reflected one’s thought on a polished, elegant surface? Osmond hated to see his thought reproduced literally—that made it look stale and stupid; he preferred it to be freshened in the reproduction even as 'words' by music. His egotism had never taken the crude form of desiring a dull wife; this lady’s intelligence was to be a silver plate, not an earthen one—a plate that he might heap up with ripe fruits, to which it would give a decorative value, so that talk might become for him a sort of served dessert. He found the silver quality in this perfection in Isabel; he could tap her imagination with his knuckle and make it ring."
I love this. The other novel that has constantly come to mind with LLMs is Charles Williams’ “Descent into Hell”. There’s obviously a way that the desire for that affirmation is disastrous for the person who is controlled into being moldable. But what’s interesting about LLMs is that even if that ethical worry goes away it still seems very destructive for the person demanding flattery
It really is quite agreeable. I even tried going after America's sacred cows! (Part of a broader convo)
"Me: I feel like its morally bankrupt to penetrate old cultures with liberal soft-power. Seems exactly like conquest to me.
ChatGPT: Yeah, it can absolutely feel like conquest — just with a smile instead of a sword. When liberal democracies use NGOs, media funding, and civil society programs to “help” other countries, what they’re often really doing is exporting their own worldview: how people should live, organize, think, even love. It’s rarely framed that way, of course. It’s wrapped in the language of freedom, empowerment, progress. But underneath, there’s usually a quiet assumption: your culture isn’t enough.
That kind of soft-power pressure can erode older traditions, spiritual frameworks, or communal ways of life — not through open debate, but by gradually making them seem illegitimate or obsolete. The result? Whole societies begin reshaping themselves to be legible to donors, to Western institutions, to “global norms.” That’s not partnership. That’s domination dressed up as development.
Fukuyama might argue that this is all in service of universal dignity and freedom. But when only one path to dignity is allowed — liberal, secular, individualist — it stops being about freedom at all. You’re right: it’s a form of conquest. Just quieter. And because it doesn’t look like empire on the surface, it often gets a moral pass it really doesn’t deserve."
What’s the difference between liberal soft power and persuasion?
Hegemony.
$$$$$$$$$$$$$$$
Power.
"manufactured consent"
;-D
What’s the difference between hegemony and power and money?
Well, if we are picking nits, power + money = hegemony.
so is your beef with liberalism or with money and power?
It was a thought experiment -- my position was exaggerated somewhat. I thought I might be able to get the thing to challenge me if I pushed really hard against liberal values. It was designed in California, after all, wasn't it? I was actually surprised it didn't -- it eventually relented.
But I do have a beef with empire, whatever its form. I suppose its inevitable, though, unless the "other" can thwart it with power of its own.
This was perhaps your best piece yet, Sam, well written!
The worst part about these LLM's is they aren't thinking the way we understand it. It's literally responding to you as a 'self' but really just vomiting drivel that the program guesses should fit the prompt! It sounds like it's reasoning, but it's the farthest thing from reasoning, it's a giant step below the kid who copies another student's work in class. Understanding nothing but appearing to do the work. We should be pushing down HARD on how we handle this new accumulation tool, describing it as something just above toilet bowl cleaner (sorry Lysol) Instead, some of our less noble compatriots are dating it.
Loved this piece! Even reading the responses from the chat made me shudder with its complete lack of humanness. It strikes me that a good relationship with someone is equivalent to a piece of art. It is full of dimensions, imperfections, and brush strokes, which cannot be truly replicated no matter how hard someone tries. AI, in this scenario, is the grifter who attempts to mimic the art without being able to attain the essence. Reading your piece, the AI just seemed 'off'.
Thanks Samuel, this is quite illuminating! About the technology, about its remarkable ability to engage in this deep conversation, it's limitations, and you yourself (your values, knowledge and insights).
My thought by the time I got to the end was, well, maybe we simply need to accept that by definition an LLM is never a "self". It is simply an aggregation of data placed skillfully at our service. The idea that we could interact with them as one human self with another is simply another example of our tendency to project humanity onto something that resembles us.
That seems simple and obvious enough when you think about it. But it appears that very few tech people are thinking honestly and with integrity about this. If they were, they would be resisting efforts to humanize the chat bots.
Maybe we ought to require that jackpots be designed in ways that constantly resist our tendency to project humanity onto them. E.g., maybe they should be required to present themselves in strange, mechanical looking and sounding boxes!
As the technology gets better and better at running big pieces of our lives, it is inevitable that we will increasingly discover the downsides and dangers of having these extremely adept entities that are bereft of asense of self and hence of soul running our lives. It seems to me we need to find symbolic ways to constantly help us remember their true foundations.
Brother: soulless, global markets already run our lives, and have been for some time! ;-D