7 Comments
Grant Castillou

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, perhaps by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow

John Wilson

I apologize for the following rant, but my patience with this LLM nonsense has run thin. I have to say, most of our AI 'experts' seem tangled up enough in AI's boom trajectory to make their theories suspect. Damir's skepticism is well placed, and Henry is anything but unbiased. For someone who abstains from meat, I wonder when he last touched grass.

Touching grass is something no AI can do, even in robotics. I just don't see the technology getting us there, and if it did, to what end? But I'd like to see a conversation about intelligence that actually does get at what humans can do, since that's what's being sold: "look how you humans will be replaced!" This conversation was a doozy, though. Of course an AI can't tell time, any more than a person who blacked out or is stuck indoors without a good clock can. (I laughed out loud at Henry's horrible analogies. No shit a person who blacks out can't tell time; that has nothing to do with anything when we're saying AIs can't contemplate time! Quick, give it a timestamp!)

Now, to be fair, here's what AI can do from where I stand: it's a glorified search engine with statistics. LLMs can't do anything but mimic and guess at their data sets. If they make something up, it's about as creative as my toddler, who is still learning how the world works, so not that impressive. Now, in medicine and other sciences, sure, AI becomes a massive analytics tool. It's just iteratively looping over the information it's fed, looking for patterns. Big whoop. Can it come up with a new hypothesis? No. It just takes our guidance and does things faster, with better record keeping and recall than any human with a chart could. (That's why we invented another tool... heard of MS Excel?)

This mostly seemed like a conversation where we never defined terms, so we didn't get very far. But I'd have loved to hear how a brain in a box will ever reach the level of sentient experience, purpose, and grandeur that a human does simply by living in a real world as an embodied creation. Alas, nothing from the philosophers on that front.

I cannot wait for this bubble to pop, and then we'll shed real tears as we tumble into another recession. Well, not all of us; the CEOs will be alright (and probably the pundits too).

Samuel Kimbriel

So I agree that there is a huge amount of overstatement in this entire discussion, but I do want to push back on two things:

1. There are genuinely open empirical questions about where this tech is and where it's going. I am also a skeptic when it comes to many of the more extravagant scenarios, but there just are open issues about what is actually, tangibly possible on this trajectory.

2. On the other hand, and more importantly for me, the issue we are encountering has to do with holes not in our technological assessment but in our anthropology. At a basic level, the tech is revealing the emptiness of so many of our social habits: we don't really know what we value about humans, or why we should value them at all. The question, I think, is whether this will push us to get more ambitious on that score or to remain very sloppy, with significantly larger consequences even than social media (e.g., teen suicides).

Sam Mace

I think this second point is especially important here. If we look at AI as a mirror image of ourselves and discover the gaps in our current form of being, then this could even be a very positive development, akin to peering over the cliff edge and stepping away from the ledge once we see the perilous depths.

What makes me worried are those who seek to merely have us jump over the edge with it!

John Wilson

Thank you, Sam.

1. I think this point bleeds into your second, because what is possible is just recreating so much of the emptiness of our social habits. But I humbly ask ANYONE to tell me how this technology is being used advantageously. Not to shoot them down! I am really trying to be charitable and learn this... but so far I'm constantly confused by the answers, and from them I struggle to imagine how it will invade my own line of work, which revolves around managing projects that are highly nuanced, risk-averse, and complex. (That's not bragging; it's everything AI sucks at.)

I mean, from what I understand, again as a white-collar worker being threatened by this on Bloomberg all day... this is just a glorified search engine that processes the data it's fed for results, e.g., the AI answer in Google search. Or it's a statistics game using a ton of data and weighting to iterate answers or results in a 'dialogue' format. I've seen one guy use it to summarize balance sheets on a company he wants to buy... but boy, I bet he still got an accountant and a lawyer involved to make sure there were no hallucinations.

As for where it can go? About as far as a powerful spreadsheet. What are the examples of more? Everywhere we look, it's all unsubstantiated!

Henry's answers to these lay experiences assumed some weird priors ("Consciousness = special!" or "Sentience doesn't require embodiment"... gnostic much?), or he didn't actually give good counterpoints to these critiques: 'you're using search wrong'...

2. Precisely this, 110%. Our anthropology is the key to this discussion and critical for living lives that aren't just wasted consuming everything the industrial revolution has given us. This is all connected to answering the question "what is the good life?", and it frustrates me that not a single tech CEO actually answers that question. Well... to be fair, in practice they do: it's a big valuation, a hearty disregard for the worker, and getting a bigger yacht...

Sam Mace

Thanks for this fascinating conversation, as always. The first question, I guess, is about how we measure the utility of AI. I found the coding part of the claim quite interesting, as my friend, who works as a coder, said AI code is only really useful if you don't know how to code. If you do, then suddenly you find the code to be pretty terrible and likely to contain multiple potentially serious errors. This mirrors not only what your guest says, to some degree, but also my own experiences when people have used it for writing or philosophy: I've generally been able to pick it apart after 5-10 minutes of examination.

I would agree with Sam on the undergraduate claim as well. In fact, I think Sam's being generous, although perhaps you use paid models, whereas I refuse to do that. I don't really buy the guest's explanation that, just as with Google, if you ask it bad questions, you'll get bad answers. That answer would be fine if the guest were saying AI currently is little more than a glorified search engine, but the guest was not. If AI is really that powerful, then it should be able to override a fairly basic attempt to misinform it and challenge it. Instead, it merely plays along, complying with my devious attempts to break it down.

Unlike people, who can exercise serious judgement and criticality, the AI that I've played with can summon significant amounts of information and even organise it, but it cannot access higher-level critical skills beyond mere description. It has never told me that I am wrong about something, even when I know that I am. I can also see this in student essays that I have marked, even recently, where the 'AI voice' is so obvious because it simply lacks critical assessment beyond a basic 1-2-3 analysis, without any serious originality. It's imbibing and repeating what it has already found, which is fine, I guess, but certainly does not make me feel like I should be in awe of the power of AI.

I agree with the guest that creativity and originality are perhaps poorly defined terms with little consensus as to what constitutes them. But I would posit that we can survey these terms experientially, to some degree at least. As to the question of whether we are merely machines of association, I think I would disagree with this claim as well. If we were, this would imply that the limits of human capacity are situated within comparing and contrasting the available data in our heads, i.e., where, from my experience, AI is situated. But clearly, we are capable of more than this.

From my limited understanding, I just don't see how AI companies, unlike the factories of the industrial revolution, can ever make a profit and become viable entities. The capital that OpenAI is eating up, and demanding in the future, is beyond anything it's likely to make. This strikes me as a fundamental scalability problem for the technology, which was not discussed. I also think this ties into what Damir said towards the end about how we conceptualise AI, which is vitally important moving forward for what it will eventually become, to some degree at least. In this way, I disagree with Hawking: I don't think AI will be the worst or best thing to happen to humanity. I think it will become a useful tool that is not capable of the feats that feed our very worst or very best imaginings.

On the 'control' aspect of it, can we not always just unplug the AI and turn it off? Even if we were blackmailed or abused, we could surely still destroy it if we so chose. Key weapons systems, such as nukes, are air-gapped and kept off-grid precisely for the type of doomsday scenario you're talking about. So, it strikes me, a 'Skynet' type of Terminator scenario is highly improbable, simply because we remain in control of where we want to deploy this type of system. Anyone with a bit of sense would find AI too dangerous to be let loose on weapons systems or on highly sensitive and important data and systems, such as the electric grid or the stock exchange, because even with a highly remote chance of failure, the consequences would be catastrophic. This also limits the potential 'collapsing scenery' scenario.

John Wilson

This was a far more respectful and thoughtful reply to what is obviously (to me) a market bubble, wrought by greedy capitalist pigs and snake-oil salesmen. Damir is generous in saying his lay understanding may be limited. But sometimes it takes a yokel to call a lie a lie, and AI is predominantly built on them. It sure sounds fancy, though, when we say our terms are loosely defined, so it must be intelligent.