Well, I don't know that it is a foregone conclusion that AI will actually deliver on its promises.
Yes, I do think that fully automated driving is inevitable, though probably not as soon as most people think, because driving is a very self-contained domain with clear, more-or-less objective success criteria that can be computed automatically in simulation.
But it might be very hard to improve LLMs from "often accurate" to "almost always accurate".
In conventional software, if you have a specific behavior you don't like, you change it by changing the human-readable program code that is the software.
At the highest level, LLMs are composed of two very different parts: the underlying base model, a huge collection of numerical parameters produced by training on enormous amounts of data with enormous amounts of compute, and a layer of "cleanup" code that modifies prompts going into the base model and results coming out of it.
The hard-to-fix issue is that there is no way to make a targeted, incremental change to the base model: you have to rerun the whole, very expensive training process while tweaking settings and training data, with no guarantee that you will actually fix the issue, or that you won't break something else.
And all the "cleanup" code in the world can't fix a broken base model.
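The two-part structure described above can be sketched roughly as follows. All the function names here are illustrative stand-ins, not any real system's API; the point is only that the outer layers are ordinary, editable code, while the middle step is a frozen blob of parameters you can't patch directly.

```python
# Illustrative sketch of the "base model plus cleanup code" structure.
# Every name here is hypothetical; a real system is vastly more complex.

def preprocess(prompt):
    # "Cleanup" code on the way in: ordinary software, easy to change.
    return "SYSTEM: Answer helpfully and honestly.\n" + prompt

def base_model(text):
    # Stand-in for the trained model: in reality, billions of opaque
    # parameters. You cannot edit a specific behavior here by hand;
    # changing it means retraining.
    return f"[model output conditioned on {len(text)} characters of input]"

def postprocess(raw):
    # "Cleanup" code on the way out: filtering, formatting, guardrails.
    return raw.strip()

def respond(prompt):
    return postprocess(base_model(preprocess(prompt)))

print(respond("What is the capital of France?"))
```

The asymmetry is the point: a bug in `preprocess` or `postprocess` is a normal software fix, but a bug that lives in `base_model` is not directly fixable at all.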
In fact, humans can permanently learn from a single input. If I'm talking to you and I unexpectedly find out that you grew up in Indonesia, because I speak some Indonesian, I will instantly learn this from that one input. If I run into you years later, it might be the only thing I remember about you.
LLMs don't learn that way at all. They give the illusion of remembering during a session because the entire conversation so far is sent back to the model along with each new prompt, but once that session is over, it's all gone.
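A minimal sketch of that illusion, assuming a hypothetical `fake_model` function in place of a real LLM API: the client accumulates the transcript and resends all of it with every message, so the model "remembers" only what it is handed each time.

```python
# Why an LLM seems to remember within a session: the client resends
# the whole transcript on every call. `fake_model` is a stand-in for
# a real model API and just reports how much context it received.

def fake_model(transcript):
    return f"(reply conditioned on {len(transcript)} prior messages)"

class ChatSession:
    def __init__(self):
        self.history = []  # exists only as long as this session object

    def send(self, user_message):
        self.history.append({"role": "user", "content": user_message})
        reply = fake_model(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

session = ChatSession()
session.send("I grew up in Indonesia.")
print(session.send("Where did I grow up?"))
```

The second call "knows" about the first exchange only because it was resent; a fresh `ChatSession` starts with an empty history, and nothing from the old one survives.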
If you pool together all the textual utterances humans have ever made, and are able to compute the "most average" response to any prompt, you get something that seems fairly human and is fairly accurate.
AI advocates believe that correctness and internal symbol manipulation ("thoughts") will magically appear from this, like Athena springing from the head of Zeus. Perhaps this will actually happen.
But I am skeptical.