As a senior engineer, over the last few years I've had to review other people's code that was written by GPT.
And it's been miserable.
I almost never get the impression that people understand the code they're representing as their own. And the tests seem to exercise the parts that work while ignoring the bugs.
I have a standing bet that I can write a new unit test that will expose a bug in any non-trivial code under review, and so far I'm undefeated.
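To make the pattern concrete, here's a minimal sketch; the function, the bug, and the tests are all invented for illustration, not taken from any real review:

    # Hypothetical example of the pattern: the code has an
    # edge-case bug, and the generated test never goes near it.

    def median(values):
        # Bug: for even-length input this returns the upper of the
        # two middle elements instead of their average.
        ordered = sorted(values)
        return ordered[len(ordered) // 2]

    # The test that ships with the code only covers the happy path,
    # so it passes:
    def test_median_odd():
        assert median([3, 1, 2]) == 2

    # The test I'd write as the reviewer, which fails:
    def test_median_even():
        assert median([1, 2, 3, 4]) == 2.5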
I'd be far more interested in intelligent code completion that worked at the level of individual lines, but so far every LLM I've tried has taken more time to write worse code.
Now, a lot of this is because I've been programming for many decades, but I don't see how less senior people are going to learn to program by simply editing the output of an LLM, and at this point the quality is not as good as a human programmer's.
And that's for self-contained modules that are entirely new code! Most code is nothing like that; it's a small change to an existing codebase.