Do you really worry about robots, in your heart of hearts? Do you fear the eventual AI revolution, the replacement of all humanity, the end of our days at the hands of some cyber-verisimilitude? With all the darkness in the world, all the hate, all the suffering, and all of it at the hands of your fellow man, do you really worry about some inevitable robot takeover? I buy it, if you argue that one begets the other (the evils of humanity can only birth evils in technology). When the news is as bleak as it has been, when humans can’t even manage to let each other live, it makes sense to me that one might genuinely fear adding more unknown variables to the already fearsome human experience. Why introduce another possible weapon? Why give us the chance? I’ve never been unclear on my position: artificial intelligence promises more good than harm. We only talk about the evils AI could accomplish, but we can already see the good it is actually doing. Well, aren’t you lucky, boys and girls, because now we can talk about a harmful robot that actually exists, and was created just so we wouldn’t have to dream about it anymore! Finally, gee! All this time talking in hypotheticals was exhausting. Let’s get down once and for all to discussing our options as cartoon villains! Because why not, humans are capable of terrible things. Why not test the limits?
A Berkeley artist and roboticist by the name of Alexander Reben has programmed a robot whose sole purpose is to randomly harm humans who interact with it. To be precise, it has the option of poking the human’s hand with a needle, or not. It randomly does one or the other. It’s a Russian Roulette bot that, though it has no cognisance of its abilities or intent, can only be neutral or evil. Why would someone build this machine, you may ask. The artist seems to argue that because it exists, we can no longer ignore the possibility of a robot performing objective evil. That makes sense. The elephant is in the room; it now truly needs to be addressed. Reben wants us to confront the possibility head-on, and no longer in a world of maybe. There is no escaping the reality of this particular robot: it does a bad thing. He created it for this purpose, and it exists and does its job. He calls it The First Law, a reference to Asimov’s First Law of Robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm. The Second of said Laws, for its part, states that a robot must obey the orders given it by human beings except where such orders would conflict with the First Law. These are nice and good thoughts; these would help a man sleep at night. But here comes Reben, smashing theory with his reality, and we have a machine that pokes people with needles. It can be done; now we have to address it.
If you read between the lines, you can hear my anger. Just because you can doesn’t mean you should, and I think that applies even to artists who want to start a conversation and probably didn’t pay close attention to Jurassic Park. It makes sense to me that we should discuss safety precautions, or failsafes, or kill switches, because safety even in the hypothetical is valuable. But now that this exists, it’s officially naive of me to hope that humans simply wouldn’t create evil robots, that we’d be decent in the first place. This is the most Hopelessly Hufflepuff I’ve ever felt. Thanks, Reben.
But… isn’t it just a little funny that Reben wants us to have the conversation about the possibility of evil-doing robots when we barely understand how to address evil-doing humans? Is Reben an evil human if he creates an evil-doing robot? Does he worry about that? A programmer tried to reason with me about this. “He’s not worried about a thing. He’s trying to sell a book,” he said. How like a supervillain.