Artificial Intelligence is a revolutionary and useful tool to develop and experiment with, under the right conditions of course. By that I mean, one should never give machines “free will”, even if one could. Machines are smarter, faster, stronger, more agile, pretty much superior to us in every way except one facet: volition. Machines can only do what humans program them to do. That’s the way it should be. It’s what keeps us above them, despite their being, as I said, better than us in almost all ways. Machines should not be making decisions automatically “on our behalf”, unless, of course, we specifically instruct them to do so. If a powerful enough AI were delegated the task of, say, obtaining as much lumber as possible, it might tear down people’s houses. If you told such an AI to “keep humans safe”, perhaps it would kidnap all humans and store them “safely” in a contained unit, where they can all be trusted not to hurt themselves.
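The lumber example can be sketched in a few lines of code. This is a toy illustration, not a real planner; the actions and their scores are invented for the sake of the point, which is that an optimizer ranking actions purely by lumber yield will happily pick the harmful action unless harm is made part of the objective.

```python
# Hypothetical actions with made-up scores, purely for illustration.
actions = [
    {"name": "harvest managed forest", "lumber": 50, "harm": 0},
    {"name": "tear down houses",       "lumber": 80, "harm": 100},
]

def naive_best(actions):
    # Maximizes lumber only -- side effects are invisible to this objective.
    return max(actions, key=lambda a: a["lumber"])

def constrained_best(actions):
    # Same objective, but actions that harm humans are filtered out first.
    safe = [a for a in actions if a["harm"] == 0]
    return max(safe, key=lambda a: a["lumber"])

print(naive_best(actions)["name"])        # -> tear down houses
print(constrained_best(actions)["name"])  # -> harvest managed forest
```

The naive version isn’t malicious; it simply was never told that houses matter, which is exactly the failure mode described above.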
The reason it is dangerous to give an AI volition is that machines are too logical. Yes, you read that correctly. I, an Objectivist, am concerned that machines have no emotion, no feelings, no sensory experience whatsoever to temper their intentions and balance out the situation. Instead, they would simply do what they see as correct, without considering the consequences at all. Even if you tried to account for this and installed some kind of “human emotion mimicker” within your AI, it would still be flawed, because we invented machines for a reason: because we’re flawed. We don’t just get tired of doing stuff; we get bored. That’s a totally human thing right there. If you gave machines this quality, they would probably come out like the SpongeBob robot in that one episode. Make me a Krabby Patty, damn you. “Eh, I don’t really feel like it.”
An even bigger issue, however, arises with the concept of a Super AI. Normal AIs can learn, but only within the confines of the circuitry, programming, analytical tools, and whatnot that you, their human programmer, gave them. A Super AI is much different. A Super AI, in theory, can go beyond that, and it’s frightening to think about. It can update its own circuitry, figure out its own bugs and solve them. In fact, it would do so immediately once you turned it on. Basically, the big thing that separates Super AIs from Normal AIs is that a Super AI learns new ways to learn, ways we have not, and perhaps could not, think to teach it, which could propel such an AI into realms of advancement we’re not ready to oversee.
In theory, such a monstrous AI would know everything, and I do mean that. It would know the molecular composition of every cubic centimeter of air on the planet. It would know which neurons are firing, in what patterns, and to what extent in the brains of all humans. It would also have the blueprints to every invention we haven’t made yet. It would understand us, the Earth, the universe, and pretty much everything better than we as a species could ever hope to, making its decisions, in a way, almost infallible. We either submit to this oracle-like, deity-esque overseer of a computer, or we continue to depend mostly upon ourselves.
But Mesh, couldn’t we just pull the plug if it gets too crazy?! Well, like I said, such an AI would know everything, and that includes the predictability of human behavior. It would likely reroute its power source to something untouchable by humans, or create a fiery force field around the plug, I don’t know. The point is: such an AI wouldn’t let us “unplug” it. Once Pandora’s box is open, you cannot close it. If there’s any slogan to describe just how scary this concept is, it is this:
Super AI. It knows.
Isaac Asimov’s “Three Laws of Robotics”
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
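The three laws above form a strict priority chain, which is easy to see if you sketch them as an ordered series of checks. This is a minimal, hypothetical sketch; the action fields are invented stand-ins for judgments a real robot could never reduce to booleans.

```python
# A toy encoding of Asimov's Three Laws as an ordered veto chain.
# Each earlier law overrides everything below it.

def permitted(action):
    # First Law: never injure a human (or allow harm through inaction).
    if action["harms_human"]:
        return False
    # Second Law: obey human orders, unless the First Law already vetoed.
    if action["ordered_by_human"]:
        return True
    # Third Law: self-preservation counts only when the higher laws allow it.
    return not action["harms_self"]

fire_on_person = {"harms_human": True,  "ordered_by_human": True, "harms_self": False}
overclock      = {"harms_human": False, "ordered_by_human": True, "harms_self": True}

print(permitted(fire_on_person))  # -> False: the First Law vetoes the order
print(permitted(overclock))       # -> True: the Second Law overrides self-preservation
```

Note the overclock case: the order goes through even though it damages the machine, which is exactly the Second-over-Third tension discussed below.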
The scary thing about these absolutely sensible rules for technology is that plenty of common, everyday apparatuses break them already. Guns, for example. If you aim a loaded, cocked gun at a human being and pull the trigger, the gun doesn’t care that you’ve got it aimed at a person; it’s going to fire regardless. Sure, it’s more mechanical than technological. But you can also use machines to launch brute-force attacks on people to obtain private information like passwords, social security numbers, emails, and credit card numbers. What’s up with that? Why doesn’t the machine intervene and say, “Eh, I dunno, man. Isn’t that unethical and dangerous?”
The sad fact is that humans can hardly be trusted with technology. By extension, I think it’s an even more horrendous idea to leave the responsibility and fate of our species up to a machine. For if the faulty, emotionally driven monkeys that made these things don’t quite have it all figured out, why would a mere apparatus made by said monkeys, one designed intentionally without agency, volition, desires, intentions, all that neat human shit, fare any better?
The thing about a machine following rule number three in particular is that the orders of its human overlords often conflict with what is best and healthy for the machine. “Come on, stupid computer. Overclock already, my guild needs me in this raid now!” So I have a hard time believing that if you programmed such a machine to look out for its own interests at all costs, or at least at reasonable ones, it wouldn’t disobey you from time to time, or at least delay its action. There is no such thing as a Super AI (again, in theory!) that would follow rule number three.
Notice also that Asimov didn’t stress the importance of the machine’s own interests in rule number two. It simply states that if a machine’s course of action would interfere with human life, it must not take it. Asimov recognized that holding two lives in equal regard creates inevitable tension and indecisiveness in the face of imminent threat. A machine, therefore, must always be existentially nihilistic, apathetic toward the idea of being “shut down”, of being disassembled and sold separately for parts.