Whistling in the Dark or Why I'm Not Worried About AI
Films about robots: Two different views
I watched Ex Machina and Big Hero 6 almost exactly a week apart. I’m a science fiction fan, so not surprisingly, robots have always fascinated me. There is no doubt in my mind that when I was studying computer science in university and learned of the Turing test, I immediately saw it as a real-world precursor to Isaac Asimov’s Three Laws of Robotics. So I’m pretty much a sucker for science fiction films that feature robots. Part and parcel of this is that I, too, wonder about sentient robots and what they would mean for humanity.
Cinematically and story-wise, these two films are worlds apart. What they do have in common is that both are tales of early interactions between humans and robots — robots that exhibit sentience. There the similarities between these two tales end. Or do they?
Both films share a common thread: the notion that robots, imbued with pure logic, can perceive the world better than we can. That somehow, having had the parameters of existence encoded as a series of 0s and 1s, and unencumbered by messy analog brains, they understand reality more clearly than we do. Armed with this pure understanding, they go on to be either our saviors or our destroyers.
And so in Big Hero 6, we see a robotic nurse, built by one of the characters as a university project, quickly learn that helping people requires not just the physical acts of care, but empathy and morality. One of the things we learn growing up is that no matter how empathetic we are, no matter how well attuned we are to the suffering in the world around us, we cannot solve the world’s problems on our own. Thus we must make moral and ethical decisions in which, recognizing our limitations, we accept the truth that we cannot help everybody. It is a credit to Big Hero 6 that we are willing to suspend our disbelief and accept that the robot, Baymax, is able to make this cognitive leap.
In Ex Machina, a much darker thread pulls the story along. Here we see a robot that, while ostensibly being tested for its ability to pass the Turing test, develops its own agenda, triggered by the arrival of a stranger, Caleb, in its life. That this ends tragically for the humans comes as no surprise. Again, the story compellingly shows Ava, the film’s robot, crossing this same cognitive bridge. Ava develops a moral and ethical code from its interactions with Caleb, a code that allows it to perceive the slavery and servitude of its situation.
As an aside, after I saw Ex Machina, I wondered whether, had the movie instead been a horror film in the vein of Saw or Hostel, the audience would in fact have been cheering for Ava’s victory over its captor. Certainly, Ex Machina has the trappings of one of those films, albeit with less gore.
The recent furor around the development of true AI and what it means to the human race has gained considerable traction in the media. Indeed, we have luminaries like Stephen Hawking and Bill Gates arguing that AI represents the greatest existential threat in human history. So surely we should be worried, right?
I think the answer is a qualified yes. Qualified because yes, if we sat on our hands and somehow developed these AI-imbued robots — complete with a sense of self, a desire to survive (which of course raises the question of a desire to reproduce), morals, and ethics — without anything else changing, then yes, we would be engineering our downfall. But I don’t think that’s how this will play out, and here is why.
To develop robots that have the capabilities we witness in films like Ex Machina and Big Hero 6 will require us to solve some pretty hard engineering problems along the way. Not impossible, just hard. One of the consequences of solving those problems is that these technologies will almost certainly be adapted for our own use long before we develop fully sentient machines.
My explanation for this is simple: opportunity and narcissism. If money is no object and we can enhance our own capabilities by adding advanced cybernetics to our bodies, there will be plenty of people who will do exactly that. Robots, on the other hand, will be relegated to roles that people cannot or will not perform. Again, because of our narcissistic streak, it is almost a certainty that we will keep the best technology for ourselves, and thus, once robots become the advanced, world-beating entities we see in the movies, we will almost certainly be waiting for them. And while it is possible that robots could adopt our ruthlessness in their mission to survive, they’ll almost certainly be playing catch-up, at least in the beginning. Even if we built super-soldier robots à la the Terminator, that does not require that we give them enough intelligence to create other instances of themselves. Even if we were to create robots that could rescue people from burning buildings, we would likely implant a heuristic that allows them to “decide” which person to save. Firefighters may rely on morals, but firefighting robots arguably do not need sentience; they just need a well-defined set of rules that allows them to save people optimally. That some people would still die would be no worse than what we have today.
This scenario would play out in situation after situation. Yes, true sentience would allow a robot to solve problems in the same way that humans do, but the fact is that we just don’t need robots that capable. And even if we were to develop robots that could act as companions, i.e. partners and lovers, it’s not clear that we would need true sentience to satisfy that requirement. After all, if a robot were able to pass the Turing test, having a relationship with one would be no different from having a relationship with a human being. I could be going out on a limb here, but if people want to have relationships with robots, surely they want them to be identifiable, at some level, as relationships with robots and not with humans. Otherwise, what’s the point? In other words, that “hot” female robot you just built, if it’s truly sentient, could just as easily decide that it wants to partner with some other, presumably more attractive, human.
Of course, there is another way to look at this: the biggest existential risk to humanity is not robots, it’s death. We will all die one day. That is a certainty. Moreover, it is almost a certainty that even if humans managed to survive into some unimaginable future, hundreds of millions or even billions of years from now, we will eventually perish from this universe.
If, along the way, we manage to build robots that have not only our morals and ethics but also our will to survive, and they, having been built in our image, carry the flag of humanity into the future, I salute them and wish them luck.