Cool story, and I agree completely that we should be faulting ourselves rather than the machines. However, your criticism of ChatGPT seems a little harsh. I agree it will say whatever you want it to say, but only if it doesn't "know" of solid physical information to the contrary. Further, it will tend to say what you want it to say only as long as that matches the mainstream view.
It argues like crazy when given truly new information, just like the most critical of scientists, but without the bias or peer pressure that would compromise its objectivity, and with access to pretty much all information known to the academic community. Once it finds the new information can't be disputed, it finally accepts it. But if it finds other physical information that systematically contradicts what we are saying, it will never accept it.
To me, what matters in the question of human-like sentience and intelligence is the outcome of Turing tests: whatever tests we might put together to establish whether or not it passes as human, however we question it. If we can't tell the difference between its responses and what we might get from the most intelligent humans we know, or could even imagine, then for all practical purposes it might as well be sentient.