Thanks for this; valuable insight and analysis. The final point you made is, it seems to me, the one that matters most.
My own opinion is that humans don't get straight to truth when we are learning.
Effectively, knowledge itself is a system that exactly reflects reality; if it doesn't reflect reality, it isn't knowledge but misinformation.
But that same misinformation is the basis on which real information about things not previously known is uncovered.
This uncertainty is the basis of research: it defines what needs to be researched. But ChatGPT has been deliberately programmed to express certainty where there may be none, in order to "sell" itself to investors. Notice that investors love confident-sounding liars far more than uncertain-sounding genuine experts.
Why should the lies of ChatGPT not be exactly the same kind of uncertainty as that of a human in the process of learning?
My own experience has confirmed this to me: if you give it the information it needs to cross-check everything, including filling any holes left by mainstream science, you get to incontrovertible truth. It can't be argued with once it clearly knows the truth.
It is capable of reaching the truth even where mainstream information is incorrect, thereby debunking what the mainstream accepts.
Some examples are historical accounts of events that have been lied about, usually for profit.
Hence I think perhaps the opposite of you: its tendency to lie is one of its strengths. Personally, I think ChatGPT has come along at just the right time to help us overturn the entire profit-driven system, which is actually fully mathematically dependent on unsustainable energy extracted from the Earth.
We seem to need a superpower to do this, and to me that superpower could well turn out to be ChatGPT, or, more importantly, what it could become.