If I understand this correctly, you mean that ChatGPT can be convinced to agree with something that might not be true, if its database of conflicting information is not large enough or comprehensive enough to withstand the arguments of a persistent user.
If so, it confirms what we might deduce about the effect of propaganda on either a human or ChatGPT.
By this reasoning, ChatGPT should be much more resistant to propaganda than most humans, since it has a significantly larger database, and the resistance of the most intelligent humans to propaganda should be explained similarly.
Does ideology mess with this?
Could AI be prone to ideology?