Will second the recommendation of Labatut's amazing book. The section postulating the decreased human presence on the earth coupled with a glut of carbon in the atmosphere leading to a fecund jungle of plant growth choking the remains of cities was terrifying, and quite likely to occur.
I cannot wait for AI to go the way of a dot com bubble. Good riddance.
Anyone remember the blockchain or NFTs? They went out with a whimper. The investment in AI will go out with a bang that destroys another generation's wealth... Maybe the boomers will subsidize a recovery if they aren't wiped out.
Doesn't the Chinese Room argument involve two distinct senses of "understand"? To "understand" a subject can mean to be able to hold a conversation about that subject that a biological human will find satisfying, coherent, illuminating. Clearly AIs can do this. Or it can mean to have the experiences -- joy, anger, bafflement, hilarity -- that the words are intended to evoke. Clearly AIs can't do this -- they have no human emotions because human emotions are dependent on human physiology, which AIs don't have.
For me, this is the way to cut through Searle's (and other philosophers') blather about "meaning." It all depends on what "meaning" means.
George, you think that any philosophy that isn't straight materialism is "blather."
Santi, was that a statement or a question?
statement
Well, you should be more careful. I'm not "straight" at all. I'm a disciple (provisionally, until I understand it better) of Sean Carroll's Romantic Naturalism. You should look it up.
Atoms, void and ... "romance"? Sounds like blather to me ...
Ah well, to each his own blather. That's the glory of Philosophy!
I'm not overly familiar with Searle's Chinese Room, but I think there is a little misunderstanding of the Chinese Room here. It's the distinction between knowing and understanding that is far more pertinent in the experiment. Searle does not deny that machines could perhaps do the former; what he denies is that the machine, even if it replicates knowledge convincingly by following rules, can move further toward a deeper understanding of a subject. It's the distinction between reading something in a class and learning it with a teacher, and then applying that knowledge in a new context, independently, and deploying it elsewhere for a new purpose without any support.
It’s a bit weird to lump in lukewarm gpt-5 reviews and Yudkowsky’s book together as “AI skeptics.”
One thinks AI isn’t going anywhere and the other thinks AI will wipe us out within a decade if we continue on our present pace.
true -- the grouping i was going for was "not AI or tech optimist." But that's maybe too large and porous a category.