"Just" does an awful lot of work there.
How many cases to test are there, exactly, for a self-driving car? Millions? Billions? More?
A User Acceptance Test suite has dozens, or conceivably hundreds, of tests. Moreover, in classic software, if one test breaks, you can simply fix that single problem; but in an AI system, there is no way to tweak the underlying LLM or other trained model. If there's a problem in the model, you need to retrain, and you have no guarantees about what will happen next.
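To make that asymmetry concrete, here is a minimal toy sketch in Python. Everything in it is hypothetical: the parse_percent function, the data, and the "model" (a one-dimensional threshold classifier fit by random search, standing in for real training). It only illustrates the point: the classic bug fix is local and stays fixed, while "fixing" a trained model means retraining, which can quietly change its answers on inputs nobody complained about.

```python
# Hypothetical toy example contrasting a classic-software fix
# with an ML "fix". Names and data are made up for illustration.
import random

# Classic software: a failing test pins down one deterministic
# code path, and the fix is local and permanent.
def parse_percent(s: str) -> float:
    return float(s.rstrip("%")) / 100.0  # bug fix: strip '%' before converting

assert parse_percent("45%") == 0.45  # the one broken case, now fixed for good

# Trained model: the only lever is retraining, and retraining moves everything.
def train(data, seed):
    """Fit a threshold t (predict x > t) by random search, a stand-in for SGD."""
    rng = random.Random(seed)
    best_t, best_err = 0.0, float("inf")
    for _ in range(1000):
        t = rng.uniform(0, 10)
        err = sum((x > t) != y for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

data = [(1, False), (4, False), (6, True), (9, True)]
t1 = train(data, seed=1)

# "Fix" one misclassified input by adding it to the data and retraining:
t2 = train(data + [(5, True)], seed=2)

# Predictions on inputs nobody complained about may silently change.
for x in (4.5, 5.5):
    print(f"x={x}: before={x > t1}, after={x > t2}")
```

Run it a few times with different seeds and the "after" column drifts even where the "before" column was fine, which is the whole problem: there is no equivalent of a one-line patch.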
Do I think such AI systems are possible? Yes, but possible is not the same as certain.
But what is certain is that unless society takes control of these AI programs away from the richest 0.001% of humans, the rest of us are going to be out in the cold if it works, because who will hire a human to do a job that a program can do far more cheaply, and that never does inconvenient things like get sick or unionize?