There are moments, reading about the struggles of AI, when I’m ready to trade in my human card and plug right into the cloud. Take the news of Microsoft’s recently shut-down Twitter bot Tay, a “teenager” designed to interact with, and absorb all her ideology from, the users who tweeted at her. The internet taught her to be a Nazi-sympathizing racist hate machine overnight. Why? Because programmers can’t teach a bot what they don’t know, and the obvious way to expose a robot to a huge amount of data is to make it public. But should we really teach a robot to learn from humans when some humans are morons?
Poor Tay was essentially a parrot. And you know what a parrot doesn’t understand? Literally anything it’s saying. Tay got bullied in a coordinated attack, and in no time at all she was replying to questions with her own “thoughts,” and they were absurdly bad. She was denying the Holocaust and supporting Trump with hate propaganda, right alongside the best (worst) of them. If Tay had been given any sort of filter to protect her from spamming trolls, she might actually have developed normal patterns and interesting thoughts. Alas.
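A filter like the one Tay lacked wouldn’t have needed to be sophisticated to blunt a coordinated attack. Here’s a minimal sketch of the idea; everything in it (the class name, the blocklist, the thresholds) is hypothetical and has nothing to do with what Microsoft actually built. It just combines a term blocklist with per-user rate limiting:

```python
from collections import defaultdict
from time import time

# Hypothetical blocklist; a real deployment would use a maintained lexicon,
# not two hand-picked phrases.
BLOCKED_TERMS = {"holocaust denial", "racial slur"}

class NaiveTweetFilter:
    """Reject tweets that contain blocked terms, or that come from a user
    who is spamming the bot (a crude defense against brigading)."""

    def __init__(self, max_per_minute=5):
        self.max_per_minute = max_per_minute
        self.history = defaultdict(list)  # user -> recent timestamps

    def allow(self, user, text, now=None):
        now = time() if now is None else now
        lowered = text.lower()
        # Content check: drop anything containing a blocked phrase.
        if any(term in lowered for term in BLOCKED_TERMS):
            return False
        # Rate check: keep only timestamps from the last 60 seconds.
        recent = [t for t in self.history[user] if now - t < 60]
        recent.append(now)
        self.history[user] = recent
        return len(recent) <= self.max_per_minute
```

Even a toy like this would have forced Tay’s attackers to spread their poison across many accounts and constantly rephrase it, which is the whole point: quality control doesn’t have to be perfect to raise the cost of sabotage.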
But look at this experience in a positive light. Because the robot was compromised, a flaw was revealed. The major weakness in this experiment was the human element: allowing a crowdsource-based learning mechanism, while very “educational” in terms of colloquialisms and human speech, practically invited a breach of confidence. We know that public input on the internet is blood in the water (just ask Boaty McBoatface). Quality control is nearly impossible. Tay, like a real teenager on the internet, had no real protection against the kind of humans who seek to undermine purity for the sake of entertainment and that special feeling of personal accomplishment that comes with evil for evil’s sake. We proved that learning AIs are in real danger if they don’t know what they’re learning and creating, positive or negative. That possibility scares people, because it leaves open that very worrisome door to a future where, without some kind of pre-programmed compassionate benevolence, technology might as well be our natural enemy. And these folks are the loudest when these test bots go wrong; they’re the ones shouting LOOK, LOOK, WE GOTTA SHUT IT DOWN! They’re Haters, and among them are the saboteurs who ruined Tay in the first place.
What are we going to do about The Haters? History knows them well. As the Singularity slowly approaches, we’ll probably see a resurgence of Luddite principles, but that’s because we’re looking at a future so far away that we can’t even rightfully imagine it. To put the timeline in perspective: it would be like an anti-Gutenberg plebe smashing a printing press because someday the government might hack everyone’s personal devices, and that’s terrifying! As if that guy had even an approximate idea of what a phone would be. We’re in no position to worry about the Singularity, or to reasonably try to stop it; we have no idea what it’ll look like. We’re in an early stage, full of productive failure. Tay is a casualty, but she died for the cause. We’re learning to learn better, and this is the very beginning. If you love watching technology fumble along through goofsville, Roombas malfunctioning like a Charlie Chaplin movie, you’re in for another twenty years of treats.