Tom Ritchford
Feb 21, 2024


Two very good examples.

The "autopilot" in airplanes is nothing like an AI. It has a set of extremely carefully written algorithms, each of which is contains the distilled knowledge of a century of air travel and of countless carefully analyzed disasters.

Each time there is a new disaster, and they grow rarer over time, agencies move heaven and earth to find out exactly what went wrong and, if necessary, carefully tweak the autopilot software to mitigate the problem.

And that's why we trust them. All of the above would be completely impossible with AI.

Self-driving cars, on the other hand, run on an opaque model: a blob of numbers that cannot be tweaked or adjusted in any targeted way. You have to retrain, with no guarantee that you'll fix your problem, or that you won't make other problems worse.

Which is why we as a society do not trust self-driving cars. Most areas simply do not allow them. For that reason, no company other than Tesla, which systematically lies about pretty well everything, even sells "full self-driving" cars: they call them "driver assistants" or the like.
