14 Comments
Michael D. Purzycki:

AI is one of the many things leading me to wish that whole Mayan calendar apocalypse thing had been true. December 21, 2012, would have been a good day for the world to end.

Santiago Ramos:

cmon man, there's still hope!

Michael D. Purzycki:

Hope that things will get better, or hope that the apocalypse is just around the corner?

John Michener:

I don't think that the AI angle is particularly important. We are well into an era where nothing is forgotten and anything you say or write may reappear for the rest of your life. Look at all the cancellation battles and the malicious out-of-context quoting that have so troubled us for the last decade or so.

Walter Robinson:

Taking the view that we should be careful what we write and say because AI may be copying our thoughts [without compensation, by the way] may be one of the silliest positions you have ever taken: "Cowen is correct to say that we should act responsibly given the new technological reality we live under." Sorry, but the billionaires behind the code get the profits, and thus the blame. If you steal other people's intellectual property, it's on you.

Santiago Ramos:

That is a good point. I meant that sentence as a broader concession: for example, I also believe we should "act responsibly" vis-à-vis other technologies, like the nuclear bomb. But you make a very good point about who actually bears responsibility here.

Rickie Elizabeth:

I think what worries me most isn't just that AI learns from people, but that people are becoming too "predictable" for AI. If people habitually take the expected "side" of an argument that splits down party lines, always like the posts the algorithms could predict they'd like, and always react negatively to the post or person they're expected to react to that way (based on which accounts each person follows), then they're too easy to model, and too easy to manipulate.

It’s not only about what AI does with that data in the moment, but what people (or institutions) with access to it can do. Things like nudge users, keep polarizing discourse, or reinforce narratives by reinforcing behavioral grooves/habits. When everyone reacts as expected, it’s really easy for the whole system to become an echo chamber. Even worse, this data can be used to keep people from even realizing they’re in an echo chamber to begin with. They may assume the constant consensus they see means they’re morally and factually right.

While I’d imagine that the algorithms know a lot about me, I also know that there’s a lot that cannot be assumed about me online. For one, I may say things rhetorically, hypothetically or to test things out—I’m not afraid to play devil’s advocate or to take a contrarian view for thr sake of furthering an argument and improving depth of thought. Furthermore, on the majority of digital platforms I’m on, I follow accounts I disagree with. I follow institutions, companies, and organizations I’m critical of to varying degrees, some of which I may like, others I dislike, and many I have a mixed view on. Some people I follow think like me, and put my own thoughts into words so well that I can tell we’re on the same wavelength—but even then, it’s rare that I always agree with them on everything. I also follow and interact with others who disagree with me, some in drastic ways. But I’m by no means in an echo chamber; the algos still try to throw posts at me that differ wildly on the ideological spectrum. I’d rather see all kinds of sides than be stuck with people who agree with me.

I also push AI to disagree with me, or when I think I’ve got a solid idea, I have it do deep research on counterpoints/evidence that challenges my thesis.

All this to say that I hope, if anything, that people watch what they say around AI insofar as they have an awareness of how the insights this technology gleans from their thought processes and emotions can be used against them (and others). Also because I think it’s kind of sad to be predictable to AI/algorithms. Seems wrong to be that easy to categorize.

Rickie Elizabeth:

Also, not for nothing, the people who say crazy/harmful things that AI picks up on probably do not care that it learns from them. If anything, they probably like that.

But for those who are not easy to predictably categorize and would like to make a more nuanced or ironic joke/statement, idk if I’d hope AI learns to figure it out or not. But I’m not really going to change what I say because of AI, aside from being careful about privacy/cybersecurity.

If I stopped saying anything that I feel AI could use to manipulate us, then I wouldn’t be helping spread ideas to make people think and hopefully help people become harder to manipulate. But I think it’s more important now than ever to resist the urge to become everything our algorithms think we are.

Charlie Taben:

Thoughtful and alarming - the potential to throttle artistic expression at this early juncture is ominous.

John Wilson:

This isn't an AI problem; this is an internet anonymity problem. People don't police themselves when they are out of sight and without strong character (a quality lacking in most people these days). So the real problem is living life in the digital world. It's not real life, and it's devoid of the accountability that makes us fully human.

How hard is this equation? AI: Junk in = Junk out

The Radical Individualist:

Never mind AI's misperceptions, what about our own? Look at all the beliefs that people insist are absolute truth, when their basis for believing is the result of indoctrination.

Is AI dumb enough to believe that Greta Thunberg is an authority on climate change? Millions of humans are that dumb.

Is AI dumb enough to believe all the crap that politicians told us about COVID? Millions of humans are that dumb.

It's been said, many times, "Tell a lie enough times and it becomes the truth." I have always said, "A lie is always a lie forever, but tell it enough times, and AI and stupid people will believe it."

Rickie Elizabeth:

Your comment frames people as dumb (and I'm not saying that's not the case… a lot of people are dumb), but I'd point out that this in itself fits a common pattern of the "individualist" stance: "I see the truth, others don't. I think outside the manufactured narratives, but everyone else is dumb and stuck in them." The problem is, this directs focus away from systems and onto users/people/masses, which serves the same purpose as many kinds of engineered narratives. It can fill a similar function to narratives you disagree with.

Far from being formed in a vacuum, belief and the processes that drive people to see something as true are influenced by many factors, including how algorithms affect visibility and maintain echo chambers, how identity groups enforce consensus, and how institutions incentivize conformity and conflate identity with ideology. If people aren't exposed to counter-evidence, or can't hear it without triggering a defensive gut reaction (a problem in many groups where ideology is merged with identity), then of course the dominant narratives will hold. Writing people off IS part of the goal; it evades having to face the bigger question of how consensus is manufactured and who benefits from it.

Because the thing is, plenty of "anti-establishment" narratives are just as engineered (and still stuck in the memetic scapegoating cycle). These can prove just as harmful to epistemological integrity, because they give people the illusion of dissent while keeping them in the same general frame, preventing them from truly thinking outside it. Whether views are mainstream or fringe, branded as collective or individualist, left or right, etc., has no bearing on this. What matters is how you got your beliefs, how often you question them and how willing you are to reassess them when new evidence emerges, and how tightly they're fused with your identity. (To find out, one can ask oneself whether a counterpoint or challenge to one's views feels like an attack on something deeper. How one replies to dissent or counterargument is revealing, as I'm sure you've picked up on.)

AI may be trained on us, but it can also continue "training" us to accept a limited set of evidence or a false equivalence as sufficient for determining truth, especially given that it's not immune to the same interests that benefit from algorithmic targeting and from creating opposing camps that argue endlessly with each other while making people intolerant of questioning their own beliefs.

Anyway, I would like to hear your thoughts/observations on this.

The Radical Individualist:

You've gotten off of my point a bit. I am a former science teacher and am very fond of the scientific method. This method attempts to make rational observations of the reality around us: what is the cause, what is the effect, etc. The scientific method is not there to determine who is good and who is bad. It is not there to pick winners or losers, or to endorse any particular point of view.

The scientific method does not preclude people from making whatever choices, and holding whatever beliefs, they want. I believe in God, based on very little science. God just makes sense to me. Here's what matters: if I tell you that you must believe in God because I do, or that you must see him in the same way that I do, or if I attempt to get laws passed that give me dominion over people who don't see God as I do, then we both agree I'm a horrible person. It's not my belief in God that is the problem; it is my belief that I have some inherent right to dominate and coerce others.

Others have 'interesting' views on climate change. They can go ahead and have them. But Thunberg and her believers want to force that crap on all of us. THAT is my objection. The COVID pandemic is a testament to the danger of totalitarian thinking: no room for discussion, no consideration of possibilities, just absolutist pronouncements that we must all conform to the will of some very stupid people.

When I say 'Individualist', that's what I'm getting at. It's not about how right or wrong anyone is, or what they believe. It's about not forcing those beliefs on others.

Linda Notelovitz:

So, what if I say something as a means to comprehend a concept during a lovely hot shower in my own bathroom at home (oh… and I live alone)? Or what if I merely think it, so that it never escapes the clutter of my insufferably busy mind? Would this be considered irresponsible, seeing as I am an adult?
