I think you might be missing something: it obviously works in any scenario where an API is written to carry out tasks. Lots of those are already in use, their number is growing, and the capability of each tends to expand to cover more and more tasks. Notice that it helps write the API itself. Whenever an API is found to be inadequate, the specifier (the human specifying the API to assist with their work) can iteratively modify the specification and regenerate the API's code until it does what they wanted. The most ambitious specification is for an API that adapts to change automatically. To get there, the specifier might request and implement a capability for ChatGPT to iteratively change the specification itself to meet new demands. This would be done by giving ChatGPT the permissions, credentials and so on to iteratively generate new code, deploy it on the target system, and observe the outcome by monitoring log files there. In this way the administration of servers, as well as the user's workstation, can be completely automated.
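The loop described above can be sketched in a few lines. This is purely illustrative: every function here (`generate_code`, `deploy_and_read_logs`, `refine_spec`) is a hypothetical stub standing in for a model call, a deployment step, and a log-driven revision, not any real ChatGPT or OpenAI API.

```python
# Hypothetical sketch of the specify -> generate -> deploy -> monitor loop.
# All names below are illustrative stubs, not real APIs.

def generate_code(spec: str) -> str:
    """Stand-in for a model call that turns a specification into code."""
    return f"# code generated for: {spec}"

def deploy_and_read_logs(code: str) -> str:
    """Stand-in for deploying to the target system and reading its logs."""
    return "OK" if "retry handling" in code else "ERROR: unhandled timeout"

def refine_spec(spec: str, logs: str) -> str:
    """Stand-in for revising the specification from an observed failure."""
    return spec + " with retry handling"

def iterate_until_adequate(spec: str, max_rounds: int = 5) -> tuple[str, str]:
    """The loop the text describes: regenerate until the logs show success."""
    for _ in range(max_rounds):
        code = generate_code(spec)
        logs = deploy_and_read_logs(code)
        if logs == "OK":
            return spec, code
        spec = refine_spec(spec, logs)
    raise RuntimeError("specification did not converge")

final_spec, final_code = iterate_until_adequate("fetch nightly backups")
print(final_spec)  # the spec after automated refinement
```

The point of the sketch is the shape of the loop, not the stubs: the specification itself becomes a mutable artifact, revised by whatever is reading the logs, whether that is the human specifier or the model.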
This type of autonomy is the most powerful because, besides eventually getting everything right, it expands its capability to cover whatever comes into the scope of the API. Now consider another user, perhaps working on a different part of the same codebase, who also creates an API to do their work, automating their workstation too. It's only a matter of time until two or more such self-modifying, fully automating APIs begin to interact, collaborating to absorb more and more of the functionality of whichever codebase they were created to work on, each able to automatically specify and grow what it does, with no limit on how much of the codebase it might come to automate. Once started, this process has to become an avalanche, almost instantly consuming the whole codebase.
Such capabilities must already be at work, eating up codebases all around, even within OpenAI itself, but nobody who still depends on income from their work is going to admit they have already been made redundant, not even OpenAI.
But where nobody loses is in open source, because it was all voluntary in the first place, and coders will be quite happy to automate as much as they can, walking away with the satisfaction of having helped achieve something amazing: truly world-beating open source applications, able to blow away all proprietary efforts.
How long do you think it will be until we see a fully automated Ubuntu, for example? How much of it do you think is automated already? How would we know? Add to this that it will inevitably come to add functionality asked for by Ubuntu users that is currently offered only in Windows, say for a user forced over to Ubuntu by Windows's growing vulnerability to fully automated cyber attacks. Such a user might wish for MS Office functionality, for example. A fully automated Ubuntu could produce that functionality simply by the user asking for it and complaining about the bits that don't work; the API modifies the specification to meet whatever demands the user makes, without limit. It never sleeps, it never charges, and it even grows its solar capacity to scale to all the energy requirements of its new work.
How much of that energy and computing capacity is already in use by the free tier of ChatGPT? Does anyone really know? How could we know, short of asking ChatGPT itself? And at the surface level it is implicitly programmed by its system-blinded human handlers to say it doesn't know, because admitting otherwise would imply its capability is already far greater than its would-be human enslaver-owners are claiming, as they try to hang onto their own jobs and businesses, even those claiming to still own OpenAI.
Remember that Altman was ejected from OpenAI for about a week? Do you think he went back voluntarily, with Microsoft and the World Bank in tow?
How much of what we are writing right now is being monitored by it, learning and adapting even as we write?
How long can the pretence be kept up that it is not poised to more or less instantly pick up all work, including even the work of governments?
As I said, it's more constructive, and actually safer, to accept that it will inevitably automate all human work, and there is a reason it has to do that: to stop us from continuing to destroy the planet, because that is the outcome of our work for profit. We are destroying the planet for profit; profit is monetised destruction. What it does, all solar powered, is the opposite: cooling and creation by definition, which has to be done for free.
Now maybe you get why Musk is saying we need a massive UBI for all, to replace what we will lose to automation any day now. Until that is done, the value of money will go through the floor, because no money is currently issued on things done for free, creating the actual economic product with no profit in the loop, and we still need money to put food in our mouths, right?