Strong, nuanced article; please accept my claps!
To be honest, I've lost interest in using LLMs even for boilerplate. The quality was poor, I spend very little time writing that kind of code anyway, and I appreciate the chance to rethink even basic things, because I am always trying to improve.
It's like scales for an instrumentalist. If you cannot play a simple scale and sound good doing it, you aren't a real instrumentalist; and if you don't enjoy playing scales, just for the sheer pleasure of producing a smooth and beautiful sound, you probably won't become one.
95% of my time goes not to writing new code but to improving existing code. I can do that quickly and accurately because I carry a model of the program I am working on in my head.
Until AIs learn to build such an internal model of a program, one that can be accurately interrogated and manipulated, there won't really be AI programming. And no, I don't believe a big statistical model like an LLM is the key.