One way to pump the brakes on the AI hype cycle may be to begin ignoring pundits who use the term 'AGI'. There's no agreement as to what AGI is, so evaluations of progress towards AGI, or assessments of whether it is near or far away, are, as far as I can tell, largely nonsense. What counts are the capabilities of actual (and fallible) AI systems vs. those of actual (and fallible) humans. On that front, there is much that is interesting and important to report.
Just as a note, exactly a year ago, Blili-Hamelin and 15 other "AI researchers" put out a paper called "Stop treating AGI as the north-star goal of AI research." I don't necessarily endorse all the arguments in the paper, and I can't vouch for the qualifications of the authors (I try to keep in mind that even anti-AGI writers are benefitting from the hype around it), but I couldn't agree more with the sentiments in the title. (Link: https://arxiv.org/pdf/2502.03689)
Tom, just one link to AGI in there, and it's interesting not because of its claims about AGI, but because of what it says about the limits of LLMs and how those point to them not being anywhere close to "conscious".
I should have been clearer -- we need to ignore the "AGI is coming (or not)" hype, which means we need to ignore the "limits to LLMs" hype -- because that's what it is. Gary Marcus may not be in the same league as Sam Altman, but he's a shill nonetheless: a "cognitive scientist" with no time for science.
We don't (and almost always can't) know the limits of either scientific exploration or technological achievement. Just as seriously, there is no generally accepted account of "consciousness" (any more than there is of intelligence) against which to measure LLM progress or accomplishment. What we get from the skeptics is a kind of mystical holism: the cherry-picking of some supposedly unassailable feature of human mentality and a concomitant refusal to decompose that feature into the operational components (such as predicting the next word) at which AI development takes place.
I can't predict that AI scientists will isolate and develop enough such components, or combine them in such a way, as to replicate human intelligence from the ground up. I also can't predict that they won't, and I definitely can't prove that it's impossible. What I can say with confidence is that AI is important and will be transformational. That's where the focus needs to be.
Your link to Jason Willick is broken. The other links were very interesting, thank you for some food for thought.
Fixed!