If what I said was a mistake, then you are claiming that GPT-3 "wants" things and has intentions: that the original writer's claim that GPT-3 "wants peace" is true.
This is an incredibly strong statement to make, and you give no justification whatsoever except, "How can we possibly know anything about anything? Maybe the lightbulb wants to be turned on! How can anyone know?!??!?!"
In fact, we do a perfectly good job of this with real-world systems such as other people, because we can interrogate those systems in many ways to build up a picture of their current state, and because they volunteer information about that state themselves.
Honestly, if you've spent some time with GPT-3 and have come away with the impression that it has intentions, feelings, and wants, even though it has no internal mental state that can change from one prompt to the next, then I may not be able to reach you at all.
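If you want to check the "no internal state" point yourself rather than take my word for it, here's a minimal sketch. I'm using the open GPT-2 weights as a stand-in, since GPT-3 itself is API-only, but it's the same architecture family: the output is a pure function of the prompt, and nothing carries over between calls.

    # Sketch: a GPT-style forward pass is stateless. Two identical
    # prompts produce identical outputs, no matter what "conversation"
    # happened in between. (GPT-2 here as a stand-in for GPT-3.)
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    inputs = tokenizer("I want", return_tensors="pt")

    with torch.no_grad():
        logits_a = model(**inputs).logits   # first call
        _ = model(tokenizer("peace on earth", return_tensors="pt").input_ids)
        logits_b = model(**inputs).logits   # same prompt, after other "experiences"

    # Identical: the intervening call left no trace. There is no state
    # in which a desire could persist between prompts.
    assert torch.equal(logits_a, logits_b)

Any apparent "memory" in a GPT-3 conversation is just the prior exchange being re-fed as part of the prompt; delete it from the prompt and it's gone.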
Half an hour with GPT-3 should convince almost anyone that it's just Mad Libs and nothing else. Have you spent that half an hour?
Regardless, you are making an extremely strong claim with no proof except "How can we possibly know?"
Pending an actual argument, I remain convinced that GPT-3 does not have desires, wants, or intentions, though of course this does not rule out some future program having them.