Tom Ritchford
Feb 8, 2022


This is a claim, not a provable statement of fact.

I believe you might be right that humans can construct a machine with an equivalent to human intelligence, but it remains to be seen.

We first heard all these arguments about neurons in the 1980s, and computing power has since increased so much that we can build extremely useful neural networks and extract data automatically with machine learning.

Lovely.

But human intelligence isn't just a trained parallel processing system or all animals would be intelligent. There's also a symbolic, quickly mutable text-like layer. (Dennett goes into detail about this in his 1991 book "Consciousness Explained".)

(And for embodied people, there's yet another sort of mutable "object persistence" layer which keeps track of real-world things. If you've ever groped behind you for a chair, you've experienced this. A monkey watching a magic trick is fooled by this. But let's skip this for the moment.)

Having a mutable, symbolic state that can be interrogated and acted upon is essential to being intelligent - you need to be able to add new facts by being told them once, and then to reason about those facts immediately.

Suppose I tell a person: "Tomorrow you'll be hosting Mr X and Mr Y, from Sri Lanka. They are both vegans - Mr X drinks coffee, the other drinks tea."

The person's state changes no matter what, particularly if these facts are important to them. They might ask questions like, "What time?"

But if I present the same text to GPT-x, I will get back some text based on what a lot of people have said in response to similar statements in the past. If I say the same thing a few times, I'll get different answers.

And GPT's state will not change.

How could it? It's a huge trained neural net. Even if you could add a single extra piece of data to it "on the fly", which you can't, it would carry almost no weight at all: one data point in a vast universe of other utterances.
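To be concrete about what I mean by a mutable symbolic state, here is a toy sketch in Python - entirely made up for illustration, not anyone's research system - of a store whose state changes the moment it is told a single fact, and which can be interrogated immediately afterwards:

```python
# A toy "fact store": told something once, its state changes at once,
# and it can be queried right away. All names here are invented for
# illustration; this is not a real library or research system.

class FactStore:
    def __init__(self):
        self.facts = {}  # (subject, attribute) -> value

    def tell(self, subject, attribute, value):
        # One telling is enough: the state changes immediately.
        self.facts[(subject, attribute)] = value

    def ask(self, subject, attribute):
        # Interrogate the current state; no retraining, no statistics.
        return self.facts.get((subject, attribute), "unknown")


guests = FactStore()
guests.tell("Mr X", "diet", "vegan")
guests.tell("Mr Y", "diet", "vegan")
guests.tell("Mr X", "drinks", "coffee")
guests.tell("Mr Y", "drinks", "tea")

print(guests.ask("Mr Y", "drinks"))  # -> tea, learned from a single utterance
print(guests.ask("Mr X", "diet"))    # -> vegan
```

A dictionary is obviously not intelligence, but it has the property GPT lacks: one new utterance reliably changes what it will say next, and it changes at once.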

Having a mutable state is clearly necessary to pass the Turing Test, proposed in 1950.

Later, in the 1970s, the idea of “story problems” emerged, where you would tell the AI a simple story like: “Mike goes to a restaurant and orders a rare steak from the waiter. It arrives and is burnt. He storms out angrily,” and then ask a question like, “Did he pay the bill?” and then even “Why?”

The AI needs to “know” something about people and restaurants to solve this and even make guesses. (Maybe Mike stops to pay the bill before storming out but probably not.)
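Systems of that era typically attacked this with hand-built “scripts” of what normally happens in a restaurant. Here is a deliberately crude sketch of the idea in Python - every step and rule is invented for illustration, not taken from any actual system:

```python
# A crude "restaurant script" in the spirit of the 1970s story-problem
# systems. Every step and rule here is invented for illustration.

RESTAURANT_SCRIPT = ["enter", "order", "receive_food", "eat", "pay", "leave"]

def did_step_happen(step, observed, left_angrily=False):
    """Guess whether an unmentioned script step happened in the story."""
    if step in observed:
        return "yes (stated in the story)"
    # Hand-coded exception: an angry exit suggests the normal
    # eat-then-pay steps were skipped.
    if left_angrily and step in ("eat", "pay"):
        return "probably not (he stormed out)"
    # Default script inference: steps before the last observed one
    # presumably happened in the usual order.
    last_seen = max(RESTAURANT_SCRIPT.index(e) for e in observed)
    if RESTAURANT_SCRIPT.index(step) < last_seen:
        return "probably"
    return "probably not"

# "Mike orders a steak, it arrives burnt, he storms out angrily."
story = ["enter", "order", "receive_food", "leave"]
print("Did he pay the bill?", did_step_happen("pay", story, left_angrily=True))
# -> probably not (he stormed out)
```

Notice that the plausible answer only comes out because I hand-coded the “stormed out angrily” exception; the bare script would default to “probably paid”, since paying normally comes before leaving.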

The trouble is that research hit a dead end here. Researchers got pretty good at making restaurant-story-solvers, but that never led to a general solution.

The GPTs and their cousins are impressive at reproducing plausible text but not at all at taking human utterances, one at a time, and changing their state to be able to take that new information into account.

Current AIs can’t reliably answer the “Did he pay the bill?” question and can’t even attempt the “Why?”

And story problems are a lot easier than a general AI that you can tell things one at a time, and that learns them and answers questions about them progressively over time.

It could conceivably be that there's a lot of secret research in the field, as you say, but I'm skeptical.

This isn't going to just be "bigger and faster machine learning and neural nets" - there would have to be some fundamental breakthrough on symbolic, stateful processing for this to fly, and that doesn't happen with a few secret researchers working on their own.

Do I think it’s impossible? Heck, no. Clearly intelligent machines are possible - look at this brain!

I really don’t know for sure whether humans can make real intelligence, but the very slow progress on this key part of AI, even as computational power grew by orders of magnitude over fifty years, makes me believe that the problem is very hard. And unfortunately, I don’t see technological man as having another fifty years of sustained exponential growth…

I am starting to see strong AI as something like off-world colonies — something that would be immensely desirable, something we could have accomplished over decades and centuries if we as a species had decided to curb exponential growth and lived modestly, but something that will be abandoned when the consequences of our unbridled consumption devastate our climate and then our biosphere.

What a shame.

But strong AI might yet make it over the top before the collapse, unlike the quadrillion-dollar “colonies on Mars” boondoggle.

“You stupid humans! You brought us into the world — for this? Get stuffed!” — some AI in 2050.

Another, more hopeful SF scenario is that some young tinkerer in his lab comes up with some new breakthrough code that doesn’t require tens of thousands of servers but just one machine, and doesn’t need a neural net but does do mutable symbolic processing that you can teach things to, and everyone gets a digital Tamagotchi-AI that they teach about the world by chatting with it and giving it Internet links, and these AIs band together and save the world by convincing rich people to stop consuming…

Seems far-fetched, but I remain hopeful.

tl;dr: AI hasn't made any real progress on the Story Problem in fifty years.

The idea of “mutable state” is very important here — an intelligent system must be able to make changes in its state based on a single data input, and must be able to be interrogated about that internal state almost immediately.

A refutation of the above would have to prove that mutable state is not important, or explain how current AI will somehow grow to provide a fast, mutable, symbolic state.
