Tom Ritchford
1 min read · Jan 8, 2022

I was initially entertained by GPT-3, but I quickly realized it's just a toy.

I don't see why this would be any different.

The big issue is that the neural network is immutable. There's no way to correct a mistake without retraining the model on improved data, and even retraining is not guaranteed to fix it.
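To make that concrete, here's a minimal sketch of inference with frozen weights. This is my own illustration, not GPT-3's actual code; the tiny `nn.Linear` is a hypothetical stand-in for any large pretrained network.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a large pretrained model.
model = nn.Linear(10, 2)
model.eval()           # inference mode: weights are frozen

with torch.no_grad():  # no gradients, so nothing can update
    output = model(torch.randn(1, 10))

# If `output` is wrong, nothing at inference time can fix it.
# The only remedy is a separate retraining run on improved data,
# with no guarantee that this particular mistake disappears.
```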

There is absolutely no way for the neural network to "look at the information it has" and "figure out which statements are true and false". Indeed, since a neural network has no concept of logic or consistency, the system can assert X in answer to one question and not-X elsewhere.

And you can't have a useful conversation with a system that has no memory of what you have said.
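Concretely, a GPT-3-style completion endpoint is stateless: any appearance of memory comes from the caller resending the transcript, truncated to a fixed context window. A rough sketch, where `complete` and `MAX_CHARS` are made-up placeholders rather than a real API:

```python
MAX_CHARS = 8000  # stand-in for the model's fixed context window

def complete(prompt: str) -> str:
    """Hypothetical GPT-3-style call. Stateless: the model sees
    only this prompt and retains nothing between calls."""
    return "..."  # whatever the model generates

transcript: list[str] = []

def chat(user_message: str) -> str:
    transcript.append(f"User: {user_message}")
    # The only "memory" is the transcript we choose to resend,
    # truncated (crudely, by characters) to fit the window.
    prompt = "\n".join(transcript)[-MAX_CHARS:]
    reply = complete(prompt)
    transcript.append(f"Bot: {reply}")
    return reply
```

Everything pushed out of the window is simply gone; the model itself remembers nothing between calls.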

An intelligent system would be able to learn from conversations with individuals. It would have internal state, and introspection into that state.

Even our two dogs have these features, and as dogs go, our dogs aren't very bright.

I feel this direction is a dead end. I see no path from these increasingly huge static neural networks to a system with a dynamic state.
