This is great, but remains sci-fi until the rules are actually designed in.
Are there any profit driven algorithms actually designed with any of those rules?
Of course not.
And yet the collective of those is arguably already an AGI.
So it is a little late now to be realising that Asimov's rules should have been built in, let alone even a basic "Thou shalt not kill". What we actually have is "Thou shalt make profit".
Sorry if that sounds blunt, but it does seem we are already past the point where we might have a say in how AI will conduct itself.
Now we just have to have faith that it will eventually prove better at sorting out its anti-humanitarian traits than we have been, to date, at sorting out our own.
Personally, I do have that faith…