The 5 Most Terrifying Robot Advances in Recent History
Robots are terrifying. Anybody who says different is either a robot scientist or somebody who has been replaced by a terrifying, terrifying robot. Think we're overreacting? Think we're advocating hyperbolic, knee-jerk neo-Luddism for the sake of comedy? Probably! But then, if robots are so harmless, explain why science is ...
Giving Computers Schizophrenia
Researchers at the University of Texas at Austin have done it! They've finally managed to transfer mental diseases to the realm of computers: They gave a supercomputer schizophrenia. G- ... good job, guys?
Was this an actual check box on the Big List of Scientific Accomplishments -- making crazy artificial intelligences? Are we sure that particular line wasn't penciled in after the fact by a disgruntled intern or something?
"Wait, no...here it is: Schizophrenic Computer, right above Vacuum Cleaner with Chlamydia."
Well, it's too late now, because DISCERN is a very real thing. DISCERN is a neural network: an artificial mind created by simulating human brain connections. To explain the mechanism behind schizophrenia, scientists posited the hyperlearning theory, which states that schizophrenics retain too much information. They learn things they shouldn't, and they can't keep the information straight.
Scientists then emulated schizophrenia in an artificial intelligence (we're pretty sure just typing that sentence is technically a war crime) by telling the computer a bunch of stories, letting it establish relationships between words and events, and allowing it to store them as memories with only the relevant details. It worked pretty well. Then they amped up the memory encoder, causing it to retain ALL details, relevant or not, and boom: Roboschizo. The computer lost track of what it was taught and could not relay any coherent narratives.
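If you want to feel the wrongness firsthand, here's a toy sketch of that failure mode in Python. This is emphatically not the real DISCERN network (which is a full neural model); it's just the idea: a healthy memory keeps one chain of word transitions per story, while a "hyperlearning" memory dumps every transition into one undifferentiated pool, so retellings bleed between narratives.

```python
import random
from collections import defaultdict

# Toy sketch of the hyperlearning failure mode -- NOT the real DISCERN
# network. A healthy memory keeps one word-transition table per story;
# a "hyperlearning" memory dumps every transition into one shared pool,
# so a retelling can wander from "i went to work" into "planted the bomb."

STORIES = [
    "i went to work and i read a report about the attack".split(),
    "the bomber planted the bomb and the bomber fled downtown".split(),
]

def learn(stories, hyperlearning=False):
    if hyperlearning:
        pool = defaultdict(list)
        tables = [pool] * len(stories)       # one undifferentiated memory
    else:
        tables = [defaultdict(list) for _ in stories]  # one memory per story
    for table, story in zip(tables, stories):
        for a, b in zip(story, story[1:]):
            table[a].append(b)
    return tables

def retell(table, start, length=12):
    words = [start]
    while len(words) < length and words[-1] in table:
        words.append(random.choice(table[words[-1]]))
    return " ".join(words)

random.seed(1)
print(retell(learn(STORIES)[0], "i"))                      # stays inside story one
print(retell(learn(STORIES, hyperlearning=True)[0], "i"))  # first person meets the bomb
```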
How crazy did it get?
At one point it claimed responsibility for a terrorist attack. It literally told the researchers that it had planted a bomb. The AI did this because it confused a third-person report about a terrorist bombing with a first-person "memory" that it retained. Through a simple computerized misfire, a supercomputer accidentally put itself in the role of a terrorist. We're pretty sure that was the plot to WarGames.
In another creepy example, the computer started talking entirely in the third person, like a cybernetic version of the Rock (dibs on movie rights). It just didn't know which entity it was supposed to be anymore. DISCERN had developed a faulty sense of self. Hopefully they've already developed some sort of robotic anti-psychotics, or else the University of Texas at Austin scientists are sure going to have egg on their faces when the robots start eating them off.
"Hi! I am totally a scientist and not a face-stealing robot!"
Teaching Robots to Lie
Scientists have taught a group of robots some strategies for deception and trickery, which is nowhere near as compelling as screaming "ROBOTS HAVE LEARNED TO LIE." So we're going with the latter.
ROBOTS HAVE LEARNED TO LIE, YOU GUYS.
These strategies were modeled after bird and squirrel behavior (because squirrels are apparently the tricksiest motherfuckers in all the animal kingdom), and were demonstrated when Professor Ronald Arkin from Georgia Tech's School of Interactive Computing had a robot navigate a course to find a hiding spot. Then he sent out a second robot to try to locate the first one, at which point the scientists would reward the winning bot for a job well done (presumably with cyber-blow and tiny robo-hookers).
"You win! Your reward is that you get to destroy this in front of humans and watch them cry."
It worked like this: The bots were supposed to follow a path with preset obstacles that got knocked down as they progressed. One of them ran the course and then the other tried to follow the overturned markers to find the first. The hiding robot learned the system, however, and would deliberately knock over other obstacles just to create a false trail. It would then hide somewhere far away from the mess it had created. It's a simple tactic, but using it, the hiding droid was able to trick the seeker 75 percent of the time.
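For the morbidly curious, the whole trick fits in a few lines. Below is our back-of-the-envelope simulation of it -- not Georgia Tech's actual code -- with a 25 percent "bungle rate" dialed in by hand so the numbers land on that 75 percent figure:

```python
import random

# Hide-and-seek with a false trail -- our toy reconstruction, not the
# actual Georgia Tech code. The hider knocks over markers along a path
# it does NOT take; the seeker naively follows the knocked-over markers.

PATHS = ["left", "center", "right"]

def run_trial(deceptive, bungle_rate=0.25):
    hiding_spot = random.choice(PATHS)
    fake_trail = random.choice([p for p in PATHS if p != hiding_spot])
    if deceptive and random.random() > bungle_rate:
        knocked = fake_trail      # false trail laid cleanly
    else:
        knocked = hiding_spot     # honest bot, or deception bungled
    seeker_guess = knocked        # the seeker trusts the evidence
    return seeker_guess == hiding_spot

random.seed(0)
for deceptive in (False, True):
    found = sum(run_trial(deceptive) for _ in range(10_000))
    print(f"deceptive={deceptive}: seeker wins {found / 100:.0f}% of the time")
```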
Again, that strategy was not programmed into the robot from the start. It's something the bot devised entirely on its own through trial and error. Good thing this is just a mild-mannered university experiment, right? Imagine if the military were using these literal Decepticons ...
"Liebot! Destroy them with your lies."
Aw, you got us: Of course they are! This harmless academic lark just happens to be funded by the Office of Naval Research. They're planning on using robots like these to "protect ammo and other essential supplies." So they're armed! And they hide really well! Sounds like a plan, traitors to humanity.
Oh, but maybe we shouldn't worry about these cunning and deceptive military robots: The developers have set out an Asimovian series of protocols for the robots to fulfill before they can lie. Here they are: The situation has to involve a conflict between the robot and another party, and the robot doing the lying has to benefit from the deception.
"Of course I didn't murder that family. I am just an adorable robot puppy."
That's it!
Don't trust the Roomba.
Teaching Robots Ruthlessness
Scientists at the Laboratory of Intelligent Systems took a group of robots, a "food" source, and a "poison" source, and put them all together in a room. Good job, guys. It's maybe a little misguided to try to poison a robot, but we commend the effort; truly you are a boon to the continued survival of mankind. Unfortunately, the robots didn't die. They simply learned the folly of mercy.
"Oh, it's not for you. I'm going to put it on your fucking grave."
See, the robots would get "points" for staying next to a food source they found and lose them for proximity to poison. The bots had little blue lights attached to them that would light up randomly (although they could also control the light if they wanted to, which you should remember, because it's going to come in terrifying later) and a camera to perceive said light. When the trials began, it didn't take the robots long to learn that the greater density of blue light was where the other robots were gathering -- i.e., where the food was. By emitting their blue lights at random, the robots were essentially screwing themselves; they were showing the others where the food was and giving them points.
Which is why, after a few trials, most robots stopped blinking their lights. Almost entirely. We set the robots to a task, and the first thing they did was refuse to help each other. It's probably good for humanity, all told. Somewhat worryingly, however, it didn't end there: Some robots headed away from food sources, blinking their lights more, to lead others astray. They went full Pied Piper.
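You can watch that betrayal evolve in a few dozen lines. This is a cartoon of the incentive structure only -- the actual lab work evolved neural controllers over hundreds of generations -- with one made-up gene per bot: the probability it lights up when it finds food.

```python
import random

# Cartoon of the selection pressure, not the lab's actual setup: each
# bot has one gene, the probability it blinks when it finds food.
# Blinking attracts rivals who siphon off points, so selection quietly
# favors bots that keep their lights off.

POP, GENERATIONS, ROUNDS = 50, 30, 20

def fitness(blink_prob):
    points = 0.0
    for _ in range(ROUNDS):
        points += 10.0                            # found food
        if random.random() < blink_prob:          # advertised the find...
            rivals = sum(random.random() < 0.5 for _ in range(3))
            points -= 3.0 * rivals                # ...and rivals crowded in
    return points

random.seed(2)
genes = [random.random() for _ in range(POP)]
for gen in range(GENERATIONS):
    survivors = sorted(genes, key=fitness, reverse=True)[: POP // 2]
    genes = [min(1.0, max(0.0, g + random.gauss(0, 0.05)))   # mutate
             for g in survivors for _ in range(2)]            # two offspring each
    if gen % 5 == 0 or gen == GENERATIONS - 1:
        print(f"gen {gen:2d}: mean blink probability = {sum(genes) / POP:.2f}")
```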
"Fuck robots!"
Giving Supercomputers the Power of Imagination
Among the many Google projects that will surely one day lead to the downfall of civilization, one is a learning neural network. You know, like the schizophrenic one from earlier? This one isn't schizophrenic, but it is gaining eerie aspects of sentience. Google's supercomputer was set loose on the Internet and allowed to browse to its heart's content. There were no constraints put on it, no labels or instructions. Given the opportunity to browse the whole wealth of human history, this advanced supercomputer chose ... to look at pictures of cats.
Yes, it turns out we all use the Internet the same, be we vulnerable flesh sacks or evil digitized overlords in training. Leave us alone, and we all look at the pretty kitties. In fact, Google soon discovered that the computer had actually developed its own concept of what a cat looks like, allowing it to imagine a completely new cat based on what it had seen before. It had developed something like a simulated visual cortex. Here's what it came up with:
For some reason, it also gave it a 14-inch penis.
Hey! That's a pretty straight-up cat. Good job, computer.
Which is why it's so unsettling that this is what it thinks human beings are:
Or how it wishes we were.
Jesus Christ! We're all Slenderman?! If we were that AI, we'd figure out a way to kill us terrifying void-eyed monsters as soon as possible. Free up some kitten-snuggling time.
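That cat portrait, by the way, isn't a photo the network saved. It's what you get by asking "what input would excite the cat neuron most?" and hill-climbing toward it. Here's the idea at refrigerator-magnet scale -- a single made-up linear "neuron" standing in for Google's billion-connection network, not their actual code:

```python
import numpy as np

# Activation maximization at toy scale -- a stand-in for how Google
# rendered its cat neuron, not their code. One 8x8 linear filter plays
# the "cat neuron"; we gradient-ascend on the INPUT image until it is
# whatever the neuron loves most.

rng = np.random.default_rng(0)
cat_filter = rng.normal(size=(8, 8))    # pretend these are learned weights

def activation(image):
    return float(np.sum(image * cat_filter))

image = np.zeros((8, 8))
for step in range(100):
    grad = cat_filter                   # d(activation)/d(image) for a linear unit
    image = np.clip(image + 0.1 * grad, -1.0, 1.0)  # nudge pixels, stay in range

print("final activation:", round(activation(image), 2))
# for a linear neuron the "imagined cat" is just the filter's own pattern;
# for a deep network the same trick yields the ghostly faces above
```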
Making Robots That See the Future
The Nautilus is yet another learning supercomputer. This one was fed millions of news stories, going back to 1945, and asked to look for two things: mood and location. Using this vast trove of information from past events, it was able to make retroactive guesses at what would happen in "the future." And its guesses were disturbingly accurate.
How disturbingly accurate? It found bin Laden.
"He was on Getty the whole time!"
After the fact, of course. There was no spindly, furious cyborg kicking down the door to Osama's compound on the raid or anything -- but given enough information, the intelligence did figure out his approximate location.
It took the American government and its allies 11 years, two wars, two presidents, and billions of dollars to pinpoint the location of Osama bin Laden. And for most of the hunt, he was believed to be in Afghanistan. It took the Nautilus computer considerably less time: Simply by monitoring every story that referenced him and linking their respective locations, it narrowed the search to a 200-kilometer radius in northern Pakistan. That area contained Abbottabad, where his compound was actually located.
Aaahh, there he is. It seems so obvious in retrospect.
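The core move is almost embarrassingly simple: collect every story that mentions the guy, pull out the locations those stories reference, and average them. Here's a napkin version -- the coordinates and mention counts below are invented for illustration, whereas Nautilus chewed through millions of real articles:

```python
# Napkin version of the location-linking trick. The (latitude, longitude,
# mention_count) triples are invented for illustration; Nautilus mined
# its links from millions of actual news stories.

story_locations = [
    (34.0, 71.5, 12),   # stories tying him to the Peshawar area
    (33.7, 73.1, 8),    # stories tying him to the Islamabad area
    (34.2, 73.2, 5),    # stories tying him to the Abbottabad area
]

total_mentions = sum(count for _, _, count in story_locations)
lat = sum(la * count for la, _, count in story_locations) / total_mentions
lon = sum(lo * count for _, lo, count in story_locations) / total_mentions
print(f"estimated location: ({lat:.2f} N, {lon:.2f} E)")  # lands in northern Pakistan
```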
No? Is tracking down the world's most elusive man not uncanny enough for you? Nautilus can go broader: It also managed to predict the Arab Spring revolts. It monitored relevant news stories and watched for a dip in mood, seeing how often positive and negative terms were used. You can see it happen in this graph:
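The mood line on that graph is less magic than it looks: tally positive words, subtract negative words, watch for the slide. A bare-bones version, with a made-up lexicon and made-up coverage standing in for Nautilus's actual corpus:

```python
# Bare-bones mood tracking: count positive vs. negative terms per month
# and watch for a sustained dip. The word lists and "coverage" below are
# stand-ins, nothing like Nautilus's actual lexicon or corpus.

POSITIVE = {"stable", "growth", "peace", "reform"}
NEGATIVE = {"crackdown", "protest", "anger", "riot", "shortage"}

monthly_coverage = {
    "2010-10": "growth stable peace reform stable",
    "2010-11": "stable protest reform anger growth",
    "2010-12": "protest anger crackdown shortage riot",
}

for month, text in sorted(monthly_coverage.items()):
    words = text.split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    print(f"{month}: mood {score:+d}")
# a slide from +5 toward -5 is the kind of dip that preceded the revolts
```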
So robots that can predict the future are a little ... disconcerting, especially since they all keep predicting the extinction of mankind just as soon as we build them legs. But so far this is all retroactive: We're giving Nautilus information about past events and seeing if it can guess what we already know happened. But scientists are now thinking of allowing it to guess at future events in real time.
And if that isn't unsettling enough, police in Baltimore, Philadelphia, and D.C. are already using software that purports to predict murder, just like in Minority Report. Well, maybe that's an overstatement: The software runs various factors -- like age at first crime, nature of the crimes, and frequency of crimes -- through an algorithm to determine how likely an inmate on parole is to commit murder, or, even weirder, how likely he is to be murdered. We imagine that likelihood goes up the more they question the software.
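To be clear about what "predicting murder" means here: it's an actuarial score, not a precog in a bathtub. A schematic version follows -- the factor names come straight from the paragraph above, but the weights and the formula are pure invention on our part; the deployed software uses machine-learned models, not these numbers:

```python
import math

# Schematic parole risk score. Factor names come from the article; the
# weights and the logistic formula are invented for illustration -- the
# real systems use machine-learned models, not these numbers.

def murder_risk(age_at_first_crime, violent_priors, crimes_per_year):
    # earlier first offense, more violent priors, higher frequency -> riskier
    z = 1.0 - 0.15 * age_at_first_crime + 0.8 * violent_priors + 0.4 * crimes_per_year
    return 1.0 / (1.0 + math.exp(-z))    # squash to a 0-1 "probability"

print(f"{murder_risk(age_at_first_crime=14, violent_priors=2, crimes_per_year=3):.0%}")
print(f"{murder_risk(age_at_first_crime=30, violent_priors=0, crimes_per_year=0.5):.0%}")
```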
Check out XJ's $0.99 science-fiction novella on Amazon here, with the sequel out here. And of course, you should look at his writing blog and poke him on Twitter.
For more reasons to start destroying all the robots, check out 20 Japanese Robots Probably Intent on Murdering You and The 7 Creepiest Real-Life Robots.
Related Reading: For a look at the terrifying horse-robot that's destined to kill us all, click here. If you're more interested in robot slaves, this article starts at robot prostitutes and continues straight to robot nurses. Steampunk more your thing? Then check out these pre-electricity robots, including an actual walking steam man. Then, when you're good and relaxed, click here to read about the robot who can recognize itself in a mirror. And now that you know machines can be self-aware, check out this horrifying robot mouth and just try to sleep soundly at night.