Tom Ritchford
Dec 9, 2023

It's funny: I was just re-reading a collection of Clarke short stories, Tales of Ten Worlds, and thinking, "How was it that all of these smart people, like Clarke and Asimov and so many others, were so very, very wrong?"

One of the earliest stories in that book, from 1957, postulates permanent Moon colonies starting in the 1980s and children being born in space by 2000.

The answer, I think, is that for about 25 years, from 1945 to 1969, we saw fairly fast progress because we were able to pick the low-hanging fruit quite quickly: putting satellites into low-Earth orbit and visiting the Moon.

The trouble is that the next significant steps - staying on the Moon, and visiting Mars - aren't just a few times harder, they're two to three orders of magnitude harder; so much harder that it isn't even certain they are possible, and they would certainly take many decades to achieve in any case.

Clarke, Asimov, and all the rest could have fairly easily seen this, certainly done the math, and yet they didn't.

It's because when you're in the middle of an exponential growth spurt, it's hard to understand that it might level off comparatively soon, perhaps because the problem is much bigger than you thought - and the same might well be true of AI.

For example, LLMs generate a series of words from a prompt based on Bayesian-ish probabilities they have extracted from a large corpus of documents that people have written in the past.
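
To make that concrete, here is a toy sketch of the loop being described - nothing like a real model's code, just the general shape: at each step the system assigns probabilities to possible next words given the text so far, and one word is sampled from that distribution. The tiny bigram table and the `toy_next_token_probs` function are invented purely for illustration.

```python
import random

# Toy stand-in for a trained model: probabilities of the next word given
# only the previous word. A real LLM estimates something far richer from a
# huge corpus; these numbers are made up for illustration.
TOY_BIGRAM_PROBS = {
    "the":  {"moon": 0.4, "cat": 0.4, "model": 0.2},
    "moon": {"colony": 0.5, "is": 0.5},
    "cat":  {"steals": 0.6, "is": 0.4},
    "is":   {"far": 0.5, "hard": 0.5},
}

def toy_next_token_probs(context):
    """Return a probability distribution over next words given the context."""
    last = context[-1]
    return TOY_BIGRAM_PROBS.get(last, {"the": 1.0})

def generate(prompt, max_tokens=6, seed=0):
    """Autoregressive generation: repeatedly sample a next word and append it."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        probs = toy_next_token_probs(tokens)
        words, weights = zip(*probs.items())
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))  # produces plausible-looking word sequences, nothing more
```

Note that nowhere in this loop is there any check that the output is true; it only has to look like the corpus.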

The reason the output is fairly accurate is that the source material is fairly accurate: documents on the internet, weighted by popularity, with some human editors checking a small but carefully selected fraction of them.

Now, the idea is that intelligence - which means "understanding" something, getting it right as often as a competent human does - is going to be an emergent property of these LLMs.

But it's absolutely not clear why this should happen!

The only force for correctness here is the editors, but they aren't content specialists, and they see only a fraction of one percent of the corpus because of its overwhelming size.

What's more, humans do not appear to generate outputs in the same way AIs do. We start with actual models in our heads of the outside world, models we can interrogate about their behavior to a pretty good degree.

Many of these models do not require any sort of text at all. A cat understands very well that it can only try to steal food when no one is looking.

In the same way, human texts like sentences do not appear (at least from the inside, me being human and all) to be generated as a probabilistic stream out of other sentences one has heard and read in the past, but rather to be constructed according to ideas and needs of one's own that one desires to express.

Finally, I strongly disagree that AI should be embraced. It's going to eat all our lunches, particularly if you are an artist, writer, or computer programmer. It's a heist of generations of human work, freely given by humans to one another, but then captured by a tiny number of incredibly rich and rapacious companies, to be sold back to us.
