Thanks for a thoughtful and insightful view of TDD. I've talked to very few people who actually do it.
But you can hear it coming.
First: why not just do the right thing without TDD? All my code is in small, testable units and I never use TDD. I make heavy use of testing.
And I disagree that writing the tests first makes it clear that a design is unusable.
Indeed, a lot of my issue with TDD is that it makes most forms of top-down design impossible: there's no practical way to write tests top-down, because you don't yet have the pieces at the bottom that actually do things.
Imagine you have a design for a great machine made up of lots of little parts. It's perfectly possible for every part to test out perfectly and yet for the machine to be impossible to assemble, because the overall design is wrong.
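To make that concrete, here's the kind of top-level sketch I mean (a minimal Python illustration; all the function names are invented placeholders):

```python
# A hypothetical top-down sketch: the overall shape of the program,
# written before any of the lower-level pieces exist.  There's nothing
# meaningful to test yet -- every interesting behavior lives in parts
# that haven't been written.

def run_report(path):
    orders = load_orders(path)      # doesn't exist yet
    balanced = reconcile(orders)    # doesn't exist yet
    return write_report(balanced)   # doesn't exist yet

def load_orders(path):
    raise NotImplementedError

def reconcile(orders):
    raise NotImplementedError

def write_report(balanced):
    raise NotImplementedError
```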
"Breaking the build or the VPN is down"? Why are these problems at all in 2021?
Don't you have continuous integration? Don't you have a source control system? Oh, apparently you're the only one testing. :-o I feel bad for you.
With Git, broken builds or a bad VPN don't slow me down for a second. I just back up a few commits until I get to a good spot, and start working from there. Once I got over my fear, git rebase turned out not to be such a big deal at all.
And finally, with the TDD-written code I've worked on, paradoxically I always have an issue with the quality of the tests!
Writing the tests in advance has a fatal flaw, IMHO: you can only write black-box tests, tests that don't know about your design.
I have a fundamentally different attitude than TDD practitioners. They want to prove their code right - but I'm trying to break my code, and I can't do my best job of that until I see the code itself.
And yes, sometimes it means that I'll test a non-public API.
Usually that means there's some area of my code with a lot of complexity that I don't trust, so I split it out as a non-public function or method and write a large number of tests to make sure I've covered all the edge cases.
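Something like this, as a rough sketch (the `_split_ranges` helper and its test cases are invented for illustration; the tests use pytest):

```python
# rangespec.py -- the gnarly bit pulled out into a non-public helper
# precisely so it can be hammered with tests directly.
def _split_ranges(spec):
    """Turn a spec like '1-3,7' into a sorted list of ints."""
    result = set()
    for part in spec.split(","):
        part = part.strip()
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-", 1)
            result.update(range(int(lo), int(hi) + 1))
        else:
            result.add(int(part))
    return sorted(result)


# test_rangespec.py -- edge cases written after seeing the implementation.
import pytest
from rangespec import _split_ranges

@pytest.mark.parametrize("spec, expected", [
    ("1-3,7", [1, 2, 3, 7]),
    ("7,1-3", [1, 2, 3, 7]),  # order shouldn't matter
    ("3,3,3", [3]),           # duplicates collapse
    ("", []),                 # empty spec is allowed
    ("2-2", [2]),             # degenerate range
])
def test_split_ranges(spec, expected):
    assert _split_ranges(spec) == expected
```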
Another set of tests that can only be written after the code exists: tests of the error handling.
I remember the first time I saw some huge production job fail, and then the error handling was wrong, so we got no information about the crash. I remember the second and third time I saw that, too.
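These days I write something like this (a sketch; `load_config` is an invented example and the tests use pytest) - the point being to check that the failure path itself reports something useful:

```python
import pytest

def load_config(path):
    """Hypothetical loader; the error paths are exactly what I want under test."""
    try:
        with open(path) as f:
            text = f.read()
    except OSError as exc:
        # Re-raise with enough context to debug a production failure.
        raise RuntimeError(f"could not read config {path!r}: {exc}") from exc
    if not text.strip():
        raise ValueError(f"config {path!r} is empty")
    return text

def test_missing_file_reports_the_path():
    # If this fires in production, the message should tell us which file.
    with pytest.raises(RuntimeError, match="no/such/file"):
        load_config("no/such/file.toml")

def test_empty_config_is_rejected(tmp_path):
    empty = tmp_path / "app.toml"
    empty.write_text("")
    with pytest.raises(ValueError, match="is empty"):
        load_config(str(empty))
```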
Also, because I'm designing for re-use, I think it's not good enough to say, "The documentation says you can't do X", particularly if a lot of regular programmers are going to be using the API.
I think it's important for your code to give clear programmer error messages, particularly for easily-made errors.
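For instance (a sketch; `schedule` and its checks are invented to show the kind of message I mean):

```python
def schedule(task, *, delay_seconds):
    """Run `task` after `delay_seconds` (hypothetical API, for illustration)."""
    # Easily-made mistakes get a message that says what went wrong and
    # hints at the fix, instead of blowing up later somewhere cryptic.
    if not callable(task):
        raise TypeError(
            f"schedule() expected a callable task, got {type(task).__name__}; "
            "did you call the function instead of passing it?"
        )
    if delay_seconds < 0:
        raise ValueError(f"delay_seconds must be >= 0, got {delay_seconds}")
    ...  # actual scheduling would go here
```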
And the results speak for themselves. [Brag follows. :-D]
I've written about 9000 lines of code in the last four or five months for a new project. Suddenly, in the middle of it, I got bad RSI and had to stop working. (It was from rage-typing at antivaxxers, stupid me!)
I'd check in at meetings for the next three weeks, and all the comments were, "We tried X and it worked great. Hey, I have an idea for new features. I didn't know we could do Y, that's really useful!"
And this was with an inept junior programmer using the code, whom we eventually had to let go because he was cheating! (i.e. he'd copy huge chunks of code from another part of the project, whole functions unchanged, and just change variable names in it, and then ignore us when we said, "No, you should re-use the original code to avoid duplication.")
Anyway, I wrote too much about this. You can tell I love testing.
To be blunt, testing is so important that if a specific practice like TDD is what floats your testing boat, go for it!
But I feel it's incomplete for two reasons - it doesn't lead to aggressive, adversarial testing after the fact, and it doesn't allow you to write very high-level sketches that can't yet be tested.
Thanks for letting me rant!