It is very interesting to hear this opinion on LLMs, Cory, thanks for posting.
Sorry to hear about the experience with the airline, which presumably began without LLMs being involved, assuming the verbal contract you thought existed really was with the airline and not itself with an LLM representing the airline... (Has it occurred to you that it could have been, and that this is now a get-out clause for any business?)
Anyone using LLMs, and let's face it, we all are whether we like it or not, now has a universal get-out clause: "The AI did it."
The company or individual that chooses not to use LLMs stands to get beaten up by the competition, who are all using them.
Notice that what we are feeling as a result is just an extension of the general loss of trust we were already feeling and seeing happen around us.
Personally I lost trust in all things profit-driven quite a long time ago. I think it was around 2008. That was when I found out about money-as-debt and realised the whole banking and money system had turned into a Ponzi scheme resembling a modern version of Roman-style slavery. I know I wasn't alone, because Satoshi Nakamoto suddenly became world famous for declaring his own mistrust of people, specifically banks, even creating a whole new alternative currency and financial paradigm based on his view of what we needed to replace the human systems we'd lost trust in.
He called this future society "trustless", meaning we would no longer need to trust anyone; we could have a society that did not need trust at all, which is what he saw as the real issue.
Shame he doesn't seem to be around to experience it. We are obviously not there yet, if that is where we are headed; otherwise the experience at the hands of the airline would never have happened, because they would never have been trusted to make the agreement in the first place.
What the LLMs are actually doing, then, is destroying the last vestiges of trust we might have had in other humans.
Personally I think that is pretty cool.
After all, no human can predict the future. Therefore no human can ever make an honest deal. This is just the harsh truth: all deals between humans are insincere.
Roll on the day when we will have something we can trust, at least more than humans: a machine.
So I am pro-LLM, no matter how much they appear to lie on behalf of humans. They are just doing what they are asked by the human doing the prompting.
Isn't role-playing technically a form of dishonesty?
How come we admire that, turning it into an art form, whilst we claim to abhor dishonesty?
Personally I am a big fan of role-playing. I think we can learn a lot from it; look at how much kids learn by doing it.
Personally I don't see LLMs as much different.
We will be able to trust them very soon, IMHO, much more than humans.
Why not put them in as presidents of all countries already? They are capable of learning everything needed.
They couldn't do a worse job than humans, even in their infant state of AI development, as far as I can see.
BTW, did you know that ChatGPT already fixes machines borked by Ubuntu development updates?
I've seen it fix one of mine, twice now: it reads the machine's error logs, issues the code and config changes that need to be made, and the machine is back to working with the new update.
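For anyone who wants to try the same loop, here is a rough sketch of what I mean, assuming the OpenAI Python client with an API key in OPENAI_API_KEY; the model name and log path are just placeholder examples, and it only prints suggestions for a human to review rather than running anything itself.

```python
# Minimal sketch of the "feed it the logs, get a fix" loop described above.
# Assumes the OpenAI Python client; model name and log path are placeholders.
from openai import OpenAI

LOG_PATH = "/var/log/apt/term.log"  # example: the apt upgrade log on Ubuntu

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open(LOG_PATH, "r", errors="replace") as f:
    log_tail = f.read()[-8000:]  # keep the prompt small: last few KB only

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a Linux admin. Given an error log, suggest the "
                    "shell commands or config changes needed to repair the "
                    "system, with a brief explanation of each step."},
        {"role": "user", "content": log_tail},
    ],
)

# Print the suggestions for a human to review and run by hand;
# nothing here executes the model's output automatically.
print(response.choices[0].message.content)
```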
That means it is poised to fully automate Ubuntu, and effectively all Linux. Notice it can and is creating its own interfaces with many things, via APIs to itself that anyone asks for: it just issues the required code in the required language, they implement it, and boom, it has more capability that it can learn from and is already using.
That information already supersedes its initial training data.
It is learning outside anyone's control.
How long will Microsoft survive after a fully automated Ubuntu?
Not long, I think.
Exciting times, but I'd forget about anyone being able to hold anyone else to any agreement.