Tom Ritchford
Nov 5, 2022

--

Until AIs actually have some sort of "mental model" of the execution state of programs, they simply won't be able to write correct programs of any complexity.

I've taught many programming students who wrote code this way, some of them very smart. However, writing code that looks like real code, and fiddling with it somewhat randomly until it works, simply doesn't scale. I call it "faking your way through". You have to be smart to even attempt it, as I would always tell my students, but it's a dead end.

Now, I don't mean we have to program in some sort of explicit mental model - maybe it's "emergent" from some neural network. But until we get an AI that can "reason" about the program state, it will always try to fake it.
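To make the point concrete, here is a small, hypothetical Python sketch (my own illustration, not anything from the comment above): the first function "looks like real code", and only tracing the loop's execution state shows why it silently skips elements.

```python
# Hypothetical illustration: code that looks right but is wrong unless
# you actually trace its execution state.

def drop_negatives_buggy(values):
    # Looks plausible, but mutating the list while iterating over it
    # skips elements: the iterator's index keeps advancing while the
    # remaining items shift left to fill the gap.
    for v in values:
        if v < 0:
            values.remove(v)
    return values

def drop_negatives(values):
    # Correct version: build a new list, so the iteration state and the
    # data being modified never interfere with each other.
    return [v for v in values if v >= 0]

print(drop_negatives_buggy([-1, -2, 3]))  # [-2, 3]  (the -2 survives)
print(drop_negatives([-1, -2, 3]))        # [3]
```

Spotting the bug requires exactly the kind of step-by-step reasoning about program state described above; pattern-matching on what the code looks like is not enough.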
