Project 2: How science fiction has shaped our future

The argument against apocalyptic AI concerns

As a means of prediction, science fiction may not be the most accurate oracle. According to John Crowley, writing in Lapham’s Quarterly, the futures imagined in science fiction have rarely come to pass: “The tropes developed in science fiction since 1900—aliens, telepathy, time travel, people-shaped robot helpers, travel to other planets, nuclear mutants, flying cars, immortality—are now universal in the culture [of science fiction] without actually having come much closer in actuality, or even appearing at all” (Crowley). Yet despite the inaccuracy of many of the details, the ideas from these works of fiction still shape our future in subtle ways. From NASA to independent research and development labs, the cutting edge of today’s technology has been heavily influenced by the dreams of science fiction writers. Walt Disney once said, “If you can dream it, you can do it,” but the dreaming has to come first: if you haven’t dreamed the idea up in the first place, how can you possibly work to achieve it?

Specifically, pervasive notions such as robotic companions (Star Wars’ R2-D2 and C-3PO, The Hitchhiker’s Guide to the Galaxy’s Marvin) or self-aware system interfaces (HAL from 2001: A Space Odyssey, Samantha from the recent Spike Jonze film Her) have become targets at the forefront of research and development at many commercial tech companies today. Our expectations of such technology define the roles we expect these systems to play in our lives, and those expectations are informed by what we see in science fiction.

Not only do we have our goals and expectations set by works of fantasy, but we are also primed to see the potential dangers of the technology itself: a thousand different fates, portended by a thousand different plots. In many works where robots and AI are the focus, they either are or become the central conflict of the storyline. Concerns about what could happen if AI becomes ‘too smart’ or if robots become ‘too independent’ run rampant in ethical conversations about the state of this technology today. Science fiction stories have cautioned us to seek out signs of danger before it’s too late; however, they provide a narrow narrative. They tell us exactly what signs to look for to avoid future X, and we get so caught up looking for those signs that we forget that both future X and the signs that portend it were hypothetical in the first place.

[Image] Sci-fi has taught us to fear our creations, from Mary Shelley’s Frankenstein to The Terminator.

We ask ourselves where we should draw the line between advancement and potential harm, while being grossly misinformed about the actual dangers. Sam Harris addresses the real risks of AI in his TED Talk, explaining how many of us fear the destruction of mankind by malicious robots while ignoring more realistic potential dangers. Silicon Valley techies aren’t too worried either: they see problems on a much smaller scale. Realism doesn’t seem to be our style, though: worries about our fate at the hands of superintelligent, evil robots are perhaps more widespread than concerns about global warming.

The idea that computers could become more than just machines was first introduced in 1950 by Alan Turing, whose “imitation game” is now known as the Turing Test. The test is a simple intelligence test for computers: the computer is asked a series of questions, and its answers are compared with those of a human respondent. If the questioner cannot distinguish the computer’s answers from the human’s, the computer passes. This introduces a new moral question: as soon as a computer ‘passes’ as human, does it become equivalent to one? How do we know whether a computer is or isn’t conscious, whether it has the ability to think and feel (as opposed to merely compute and execute code)? Where is the morality in its ‘enslavement’ to the commands of mankind? Who decides what data sets it has access to, and what commands it is given?
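The mechanics of the test can be sketched in a few lines of code. The snippet below is only an illustration under assumed interfaces (the judge and respondents are hypothetical placeholder objects, not any real library), but it captures the essential point: the judge sees nothing except anonymized answers.

```python
import random

def imitation_game(judge, human, machine, num_questions=5):
    """A minimal sketch of a Turing-style test loop.

    `judge`, `human`, and `machine` are placeholder objects (assumptions,
    not a real API): respondents expose .answer(question), and the judge
    exposes .ask(transcript) and .guess_machine(transcript).
    The judge only ever sees the anonymous labels 'A' and 'B'.
    """
    respondents = {"A": human, "B": machine}
    if random.random() < 0.5:                       # hide which label is which
        respondents = {"A": machine, "B": human}

    transcript = []
    for _ in range(num_questions):
        question = judge.ask(transcript)
        answers = {label: r.answer(question) for label, r in respondents.items()}
        transcript.append((question, answers))

    guess = judge.guess_machine(transcript)         # judge returns 'A' or 'B'
    return respondents[guess] is not machine        # True: the machine "passed"
```

Nothing in this loop asks whether the machine is conscious; it only asks whether the judge can tell the difference, which is exactly why the moral questions above remain open even after a machine passes.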

The Radiolab podcast segment “Furbidden Knowledge” provides an interesting discussion of the ways in which humans and robots might not be so different after all. The discussion starts with a simple test: children hold various toys and pets upside down while researchers time how long they are comfortable doing so. Children will hold a Barbie doll upside down until their arms get tired; they can only hold a live hamster upside down for about eight seconds. When it comes to the Furby, a popular children’s toy from the 1990s that emulated emotion, children could only hold it upside down for about a minute before feeling guilty. In response to being overturned, the Furby would say that it was scared and begin to cry, and it was this response to external stimuli, this expression of emotion, that made the children uncomfortable. Even at such a simple level, a machine run by a few servo motors, some batteries, and basic code was able to coax these children into righting it, even when they acknowledged that they knew it was a toy.
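The control logic needed for that reaction is strikingly small. The sketch below is hypothetical (it is not the Furby’s actual firmware, and the sensor and speaker functions are stand-ins), but it shows how a tilt switch, a speaker, and a timer are enough to produce behavior that children read as fear:

```python
import time

def distress_loop(read_tilt_sensor, play_sound, cry_after_seconds=8.0):
    """Hypothetical toy firmware loop: complain while held upside down.

    `read_tilt_sensor` is a placeholder for a function returning True
    while the toy is inverted; `play_sound` stands in for the speaker.
    """
    inverted_since = None
    while True:
        if read_tilt_sensor():
            if inverted_since is None:
                inverted_since = time.monotonic()
                play_sound("uh-oh")                  # immediate startled reaction
            elif time.monotonic() - inverted_since > cry_after_seconds:
                play_sound("scared, crying")         # escalate the longer it lasts
        else:
            inverted_since = None                    # righted again: calm down
        time.sleep(0.1)                              # poll the sensor ten times a second
```

Whatever the real toy’s code looked like, the point stands: the children weren’t reacting to complexity, they were reacting to a timed response that resembled distress.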

This raises the question: what actually makes computers different from humans? If both can respond to external stimuli while taking previous data (or experience) into account, where is the line drawn? Humans are biologically programmed to have emotional reactions to different situations; what makes the programming of computers different? As it stands right now, the most significant difference is the level of complexity: humans are vastly more complex than computers.

In another real-world example, Eugenia Kuyda, co-founder of the AI startup Luka, used old text messages from a dear friend who had passed away as the base data set for a bot that could respond to incoming messages the way her friend would have. This is roughly where today’s technology ends, but science fiction is able to fill in the next few steps. Black Mirror, a dramatic series whose themes involve the possible uses and repercussions of near-future technology, has an episode (“Be Right Back”) that starts out where Kuyda’s bot is now: a service that mimics a person’s personality in order to recreate them after their death. The episode takes it beyond text messages, to phone calls, and ultimately to having the loved one’s personality and idiosyncrasies uploaded into an android that bears their features. From there it’s only a short leap to the androids of I, Robot, in which we see the dawn of what can be read as artificial emotion.
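It is worth being concrete about how modest that starting point is. Luka’s actual system relied on far more sophisticated machine-learning models; the sketch below is only a crude, hypothetical illustration of the underlying idea: index the friend’s old messages and answer a new message with whatever they once said in the most similar situation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

class MemorialBot:
    """Toy retrieval bot: replies with the friend's own past replies.

    `history` is a list of (message_received, friend_reply) pairs,
    e.g. pulled from an old text-message archive (hypothetical data).
    """
    def __init__(self, history):
        self.prompts = [received for received, _ in history]
        self.replies = [reply for _, reply in history]
        self.vectorizer = TfidfVectorizer()
        self.prompt_vectors = self.vectorizer.fit_transform(self.prompts)

    def respond(self, incoming_message):
        # Find the past message most similar to the incoming one
        # and echo back whatever the friend said in that situation.
        query = self.vectorizer.transform([incoming_message])
        scores = cosine_similarity(query, self.prompt_vectors)[0]
        return self.replies[scores.argmax()]

# Example with made-up data:
bot = MemorialBot([
    ("how was your day?", "exhausting, but the coffee helped"),
    ("want to get lunch tomorrow?", "only if it's tacos"),
])
print(bot.respond("lunch this week?"))  # answers in the friend's own words
```

Even a toy like this can feel uncannily personal, because every reply really is something the friend once said; the fiction simply extrapolates that feeling to voices and bodies.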

The prospect of superintelligent AI has further complicated the idea of the Turing Test. If an AI whose intelligence far surpassed a human’s required, or simply requested, access to more information than its initial data set, who are we, as beings of lesser intelligence, to tell it no? How can we maintain control when our ‘opponent’ is vastly smarter? Eliezer Yudkowsky’s AI-Box Experiment has shown that even a human of ordinary intelligence, playing the role of the AI in the simulation, can argue their way out of a ‘closed hard drive’. Against a true AI, there is no way to ensure safety if the only check is a human gatekeeper.

Science fiction has fed us the tale that our own creations will destroy us. When we discuss topics like the AI-Box Experiment and even the Turing Test, we assume there is an issue of trust between us and the machines. Whether our technology is trustworthy won’t be decided by the public, and the only speculation on the subject that the public knows well is the kind that ends in disaster. Works of fiction almost unanimously conclude that no, we should not trust the machines. But is that just because trusting them wouldn’t make a very good story?

Sources