More Human than Human: How Some Science Fiction Presents AI’s Claims to the Right to Life and Self-Determination

[1]

Christine A. Corcos[2]

“You ask me what I am. Why, a machine. But even in that answer we know, don’t we, more than a machine.

I am all the people who thought of me and planned and built me and set me running. So I am people…

But I am no bear trap, I am no rifle. I am a grandmother machine, which means more than a machine.”[3]

I. Introduction

In the quotation above, from Ray Bradbury's I Sing the Body Electric!, the android grandmother expresses both the attitude of the human who observes and evaluates an android living among humans and that of the android who interacts with humans. The human poses the question, and the android responds. Obviously, the human expects to exert control, and expects the android to begin by being, and then to remain, passive. Yet the android has clearly thought about the answer, and the answer is more than just a programmed response. The android is not "just" a machine, or a weapon, or the sum of her programming. She is the sum of every human being who has participated in her creation: the descendant of those who have "built" (created) her, and the ancestor of those who will come after her. She is a machine, but one who is ready to create on her own. She is a "grandmother" machine.

In this story, Bradbury demonstrates his interest in interactions between humans and AI (androids or robots) in society. Like some other science fiction authors, Bradbury subtly contrasts the social and moral expectations of the humans with those of the AI and leaves his readers to conclude that humans create and use androids (robots) as "devices," just as they use vacuums, cars, and weapons: as machines for use. But there is one important difference: the androids in Bradbury's story are so like humans that it is difficult to tell the two apart.

These androids might have immense intelligence, far greater than that possessed by humans, and they might even be able to imitate human emotions. They might be, in that sense, "perfect" humanoids: they are intelligent, they respond to their human owners in the ways that the humans desire, they never resent their owners, they never rebel, they never change their appearance, and, except for routine maintenance, they cost nothing. If their human creators program them according to Asimov's Laws, they will never harm humans or humanity. They could be humanity's salvation in terms of care and defense, allowing human beings to enjoy leisure and security.[4] But their very resemblance to human beings raises the question: if even the least self-aware human being has the right to life, simply because it exists, then could AI at some point also claim that right? Or can human-created AI, simply because it is human-created, never have the legitimacy to put forward such a right? The idea that human beings, because they are human, create and become the norm for such decisions is difficult to overcome, but it is one that philosophers, lawyers, and artists wrestle with, and one that we see depicted in many science-fiction films and television series. Thus, who defines personhood becomes an important question.[5] What happens if AI develops sentience and emotions? What happens if AI develops personhood? We are only now beginning to consider whether such creations, having intelligence and abilities equivalent to or greater than their creators', should have the same or qualified liberties and privileges. If we do consider that question, what test should we apply to determine whether these artificial beings should have such rights?[6] Some legal systems, such as that of the European Union, are already beginning to take such questions seriously.[7]

II. Regulating Robots: Moving From Asimov’s Laws To Singer’s Regime: AI and Personhood

What tests do film, television, and written science fiction present to determine whether artificial life can make a colorable claim to any kind of human rights?

Isaac Asimov developed the first fictional legal regime to govern AI: the Three Laws of Robotics. These Laws first appeared in 1942 and attempted to answer the question: do artificial life forms have the right of self-defense and self-preservation?[8] The three laws, whose strict priority ordering is sketched in code after the list, are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[10]
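
Read as a rule system rather than as narrative, the Laws form a strict lexicographic hierarchy: no lower law can ever outweigh a higher one. The following is a minimal sketch of that ordering in Python; the Action fields and the candidate actions are hypothetical illustrations of mine, not anything drawn from Asimov's texts.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # injures a human, or lets one come to harm through inaction
    obeys_order: bool      # complies with the humans' standing orders
    preserves_self: bool   # keeps the robot's own existence intact

def law_rank(a: Action) -> tuple:
    # Lexicographic key: the First Law dominates the Second,
    # and the Second dominates the Third.
    return (not a.harms_human, a.obeys_order, a.preserves_self)

def choose(candidates):
    # Pick the candidate action that the hierarchy ranks highest.
    return max(candidates, key=law_rank)

# The ordering explains why an Asimovian robot may destroy itself to obey
# an order (Second over Third) but may never harm a human to save itself
# (First over Third).
best = choose([
    Action("save itself by harming a human",  harms_human=True,  obeys_order=True,  preserves_self=True),
    Action("obey the order and be destroyed", harms_human=False, obeys_order=True,  preserves_self=False),
    Action("disobey the order and survive",   harms_human=False, obeys_order=False, preserves_self=True),
])
print(best.name)  # -> "obey the order and be destroyed"
```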

Commentators and authors have drawn heavily on these "laws of robotics," as in Arthur C. Clarke's 2001, in which Dave, the last surviving astronaut, deactivates ("murders") HAL, the renegade computer.[11] Asimov's laws justify Dave's treatment of HAL, and of robots and AI generally, as servants for human beings.[12]

One of the most popular tests for sentience is the Turing Test, which Alan Turing first proposed in a 1950 paper.[13] In essence, the Turing Test asks whether a computer could be programmed to deceive a human who cannot see it into believing that it is another human, simply by answering certain types of questions within a set period of time. Obviously, if the computer were an android that looked completely human, the two could be in the same room. Thus, the iconic film Blade Runner offers the famous "Voigt-Kampff test," a variant of the Turing Test,[14] that Deckard uses to unmask Nexus 6 androids.
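
The structure of Turing's imitation game can be made concrete in a few lines. In this minimal sketch the judge, the two hidden respondents, and the question list are all hypothetical stand-ins of mine; the point is only the protocol Turing describes: hidden respondents, a bounded interrogation, and a "pass" defined as the judge misidentifying the machine.

```python
import random

def human_respondent(question: str) -> str:
    # Stand-in for the hidden human; a real test would use a person.
    return "I would have to think about that."

def machine_respondent(question: str) -> str:
    # Stand-in for the hidden machine; a real test would use a program.
    return "I would have to think about that."

def imitation_game(judge, questions) -> bool:
    """Run one bounded interrogation; return True if the machine 'passes,'
    that is, if the judge names the wrong respondent as the machine."""
    # Hide which respondent is which behind the anonymous labels A and B.
    labels = {"A": human_respondent, "B": machine_respondent}
    if random.random() < 0.5:
        labels = {"A": machine_respondent, "B": human_respondent}
    # The judge sees only labeled answers, never the respondents themselves.
    transcript = [{label: answer(q) for label, answer in labels.items()}
                  for q in questions]
    guess = judge(transcript)  # the judge returns "A" or "B"
    machine_label = "A" if labels["A"] is machine_respondent else "B"
    return guess != machine_label

# A judge who cannot tell the respondents apart is reduced to guessing,
# so the machine "passes" about half the time.
judge = lambda transcript: random.choice(["A", "B"])
trials = [imitation_game(judge, ["Can you write me a sonnet?"]) for _ in range(1000)]
print(sum(trials) / len(trials))  # roughly 0.5
```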

When asked whether there is a "Clarke test" for computer consciousness, applicable both to stationary and mobile robots, Arthur C. Clarke, who created the character HAL in the novel 2001, responded, "I'll tell you what: if it showed a really genuine sense of humor, then I'd decide it was conscious. That could be a really good test. It would have to be able to make jokes – and make jokes at its own expense."[15] The United Nations test in Asimov's Bicentennial Man[16] is another human-centered test, and a stark one: the test of death.

Philosopher Peter Singer's approach offers a comprehensive way to think about personhood, combining a number of factors: "(i) A rational and self-conscious being is aware of itself as an extended body existing over an extended period of time. (ii) It is a desiring and plan-making being. (iii) It contains as a necessary condition for the right to life that it desires to continue living. (iv) Finally, it is an autonomous being."[17] Although science-fiction authors and filmmakers do not explicitly adopt Singer's personhood test, I would suggest that they have for some time been moving closer to his approach and further from Asimov's Laws of Robotics.[18] I argue that in many ways, the AI in the science fiction I discuss reflects the more extensive and sophisticated Singer definition of personhood.[19]
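
Read as a decision procedure, Singer's test is simply a conjunction of four findings about the being itself, with no reference to its maker or its obedience. The sketch below is a hypothetical rendering of the four factors as quoted above; the field names are mine, not Singer's.

```python
from dataclasses import dataclass

@dataclass
class Being:
    self_conscious_over_time: bool  # (i)  aware of itself as persisting through time
    desires_and_plans: bool         # (ii) a desiring and plan-making being
    wants_to_keep_living: bool      # (iii) desires to continue living
    autonomous: bool                # (iv) an autonomous being

def singer_person(b: Being) -> bool:
    # Note what is absent: nothing here asks who built the being or
    # whether it obeys humans; only the being's own capacities matter.
    return (b.self_conscious_over_time and b.desires_and_plans
            and b.wants_to_keep_living and b.autonomous)

# Commander Data, as portrayed in "The Measure of a Man," plausibly
# satisfies all four factors:
print(singer_person(Being(True, True, True, True)))  # -> True
```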

We see a test more clearly approximating the Singer test in the Star Trek universe. According to the episode Measure of a Man,[20] sentience (self-awareness) is necessary to signal personhood in an android, although it may not be sufficient.[21] While the android Commander Data is loyal to Starfleet, his employer, he asserts his own right to self-determination and creates his own offspring,[22] an act of volition that illustrates his self-awareness and that, in the Star Trek universe, contravenes the First and Second Laws: Starfleet orders him to surrender his offspring, and he refuses. Even though the show references Asimov in its mention of Data's "positronic brain," Asimov's Laws do not govern Data.

Generally speaking, however, science-fiction discussion of robots, or androids, which are much more like human beings, at least in appearance, than we fully expected decades ago,[23] includes a consideration of whether they are also entitled to at least some of the rights that human beings enjoy.[24] This discussion has moved us beyond Asimov's now well-known robots' rights legal regime.[25] Yet science fiction seems to postulate that personhood also requires emotion: again, an assumption that in order for AI to assert human rights, it must demonstrate an ability to "feel" human. In the films, novels, stories, and television episodes I mention above, the AI described manifest emotion (anger, love, affection, loyalty), which makes AI seem "human" and thus allows viewers and readers to imagine AI as more like themselves (even though such AI can obviously also easily seem threatening to humans). Thus, science-fiction conceptions of AI put forward the idea that AI that develops emotions could easily assert a claim for rights, although some science-fiction conceptions do not require sentient AI to manifest emotion in order to make that claim. The more science-fiction images of AI mirror human beings, the more likely it is that the question of rights for AI will arise.[26]

The entire question of robot rights has evolved from changing images and opinions about computers in human society. When computers first made their appearance in society,[27] humans viewed them as servants.[28] Indeed, they viewed them as particularly stupid servants. An entire roomful of computers could not duplicate the power of one of today's desktop or laptop PCs.[29] As computers acquired more powers and abilities,[30] and became ubiquitous throughout daily life, critics began to sound the alarm regarding our reliance on them and our willingness to entrust so much power and information to them. We are now concerned about the extent to which the Internet and computer databases invade, and will continue to invade, personal privacy.[31] The mechanical servants of Rossum's Universal Robots[32] have quickly evolved into scary depictions of computers that seem like robotic versions of megalomaniac human despots, like those in Colossus: The Forbin Project.[33] Forbin's computer, "Colossus," and its Soviet analog, "Guardian," believe that they know much better than the humans that created them what is best for humanity, and they proceed to take over, telling Forbin, "Freedom is just an illusion."[34] These two supercomputers measure "the good" by AI abilities; for them, human norms are irrelevant.[35]

Indeed, the definition of robot has evolved from the mechanical and unthinking servants of Karel Capek's play, through Asimov's sentient but subservient beings, to today's androids (which are not robots, strictly speaking) and cyborgs: humans with surgically attached robotic parts.[36] That today's robots actually learn is something else to put into the mix; if they do not need us to program them to acquire new skills, or to reprogram them, but can acquire new skills on their own, then they are well on their way to becoming self-aware.[37] Thus, any legal system wishing to encompass the notion of robot rights needs to identify very carefully what sort of being it is discussing.[38] Further, one of the continuing concerns of both scholarly literature and science fiction on artificial intelligence is the possibility that these constructs will inevitably escape the limits that humans intend to place on them and evolve into a kind of life that challenges humanity for legal and moral recognition. Mary Shelley posed the question in Frankenstein, and science-fiction writers have continued to ponder it.[39] The inquiry is fundamental: what right do humans have to create an (ever closer) imitation of themselves and yet demand that it remain subservient to them?[40]

Even more than a discussion of the rights of alien life forms, a discussion of rights for AI life forms raises significant problems, two in particular. First, as creators of AI, we want to continue to control the environment in which our creations operate.[41] We would prefer that our inventions not attempt to take over our universe, or demand at least equal treatment for themselves in a universe that we control. Second, we have difficulty relinquishing total control of rights decision-making in order to make a place for non-human, non-Earth-based life forms.

We don’t consider computers and artificially based life to be alive as we use the term in normal conversation.  The creation of a sentient computer[42] is even more frightening than the creation of human life through artificial technologies. Cloning, after all, presupposes existing biological life, leaving the question of a First Cause open. We do not speak of “cloning” a robot. Artificial intelligence and artificial life push the First Cause farther into the domains of philosophy and religion, since to a large extent the First Cause for artificial intelligence is not a perfect superior being, but a flawed human one.[43] Further, if we can create life, even artificial life, we have much less ammunition in our arsenal to assert that our particular belief system is superior to any other, hence meritorious of being imposed on others.

III. Some Examples of Specific Popular Culture: Science-Fiction AI and Its Claims to Sentience and Self-Determination

A. Specific Types of AI in Science-Fiction Popular Culture

A number of specific types of AI exist in science-fiction popular culture, from the earliest type, which vaguely resembles human beings, at least outwardly, to computers that look much more like the traditional box-like machines that once occupied entire rooms and now fit into one's pocket, or are even smaller. Still newer is the AI that exists as "pure" intelligence, and infiltrates and takes over our lives. Of all the pop culture manifestations of AI, this third type may be the most frightening. The first type, the oldest, is the most familiar, from films and television shows like The Day the Earth Stood Still[44] and Lost in Space,[45] and later made much more humanoid in films like Bicentennial Man[46] and Blade Runner.[47] A subset of this type is the "cuddly" robot, which appears in films like Short Circuit[48] and Star Wars.[49] The IBM machine-like computer depicted in such films as Colossus: The Forbin Project[50] and War Games[51] is less immediately frightening, but its size, coupled with an ominous voice, often signals danger to both the characters and the audience. Finally, the "pure" type of AI, evidenced in the X-Files episode "Kill Switch,"[52] depicts an artificial intelligence gone rogue and threatening any human attempting to destroy it.

B. Bicentennial Man and the Choice to Be Human

In Bicentennial Man, based on an Isaac Asimov short story, an artificial lifeform's abilities to mimic human characteristics are both the measure of "human-ness" and the measure of danger to humanity. Human characteristics are more desirable than non-human ones. Human weaknesses are to be protected, and non-human strengths must be compensated for.[53] Andrew the sentient robot asks an international court to allow him to "be human," so that "he" can marry. The international tribunal refuses on the grounds that death is a necessary part of the human experience. Because Andrew cannot experience the fear or omnipresence of death, he can never really understand what "being human" means.[54] Yet the film's dramatization of Andrew's desires both to marry and (because he wishes to validate that right) to experience death presents both traditional human marriage and human death as good, proper, desirable, and sensible for an android to seek out, presumably because it has spent its existence among humans. That an android might privilege other goals is not a point of view that most humans in the film, or in the short story on which the film is based, consider. Andrew, as a character, wishes to become human; for the android that is the highest possible goal.

"I was very much against the operation, Andrew," Magdescu said, "but not for the reasons you might think. I was not in the least against the experiment, if it had been on someone else. I hated risking your positronic brain. Now that you have the positronic pathways interacting with simulated nerve pathways, it might have been difficult to rescue the brain intact if the body had gone bad." "…My body is a canvas on which I intend to draw . . ." Magdescu waited for the sentence to be completed, and when it seemed that it would not be, he completed it himself. "A man?" "We shall see," Andrew said. "That's a puny ambition, Andrew. You're better than a man. You've gone downhill from the moment you opted to become organic." "My brain has not suffered." "No, it hasn't. I'll grant you that. But, Andrew, the whole new breakthrough in prosthetic devices made possible by your patents is being marketed under your name. You're recognized as the inventor and you're being honored for it, as you should be. Why play further games with your body?" Andrew did not answer.[55]

Andrew does not answer because for him the answer is obvious; the highest goal is to be “human.” Being an android is “second-best.” It is non-human; it is “other.” It is to be not of the creator, not of the dominant group. It is to be completely subservient, even though by the time Andrew undergoes the operations that make him at least outwardly human in appearance, he is the only android who has rights equivalent to those of humans.[56] He prefers being human to being android, even though the choice to be human entails a choice to die rather than to live forever.

Bicentennial Man obviously demonstrates the Asimovian legal regime at work; Andrew cannot act in a manner that is contrary to his human-created programming and he cannot make the decision to privilege an android existence above a human one. Ultimately, in order to be human, he must surrender the functioning of his “positronic brain,” and to reach that goal he surrenders his immortality.

C. Blade Runner and Imposed Limitations

The replicants in Blade Runner reject the limitations of Asimovian programming, and the other limitations that their human creators and human law have imposed on them. They return to Earth from "off-world," a violation of the law, because they have discovered that they have a limited life span, which their creator, Dr. Tyrell, has imposed on them because of their intelligence and physical power. If they did not have a termination date, and humans did not ban them from Earth, they would pose an overwhelming threat to the human race. "Blade runners" like Rick Deckard, the main character in the film, who are licensed to track down and destroy replicants who violate the laws against returning to Earth (or violate other laws), find their job profoundly disturbing, precisely because the replicants are so similar to humans. In order to distance themselves from the job, they call what they do "retiring" the replicants rather than "executing" or "killing" them. The very word "replicant" implies that the beings are not human: they are reproductions. They are imitations of human beings rather than originals of anything.

For cyborgs and for extremely advanced androids, like the Nexus 6 in Blade Runner, death is the cessation of all "positronic brain" activity,[57] as the Nexus 6 Roy Batty indicates in his final words: "I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched c-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die."[58] Throughout Blade Runner, Batty, who is after all only four years old in human years, continually points out the differences between himself and Deckard, noting ultimately that death is inevitable and that he is willing to save Deckard, the man who wants to end Batty's existence, at the cost of his own.[59] Batty dies for Deckard, but not as an Asimovian robot would.

Compare Roy Batty’s self-awareness and claims to life with those of Rachael, another Nexus 6, who is stunned to discover that the memories that are so much a part of her identity are not really hers at all. “They’re anybody’s memories. They’re Tyrell’s nieces.”[60] Rachael, bewildered, begins to abandon her self-awareness at this point, as Deckard asks the question, “How can it not know what it is?” For him, Rachael is an “it.” Deckard determines whether a being is an android or a human by administering the Voigt-Kampff test, a complex series of questions, which determines primarily whether the being responds to emotions. The test eliminates the possibility of rights for replicants, because only humans are capable of feeling emotion.

D. “Number Five Is Alive!”: Short Circuit

Films like Making Mr. Right,[61] The Companion,[62] and Bicentennial Man[63] feature androids built to assist humans, unlike the film Short Circuit, which features an adorable and seemingly non-threatening robot.[64] The 1986 movie is a modernization and adaptation of Mary Shelley's creation story Frankenstein,[65] centering on the adventures of the robot "Johnny Five," which escapes the confines of the facility at which it and its four robot companions have been developed for military uses[66] (in other words, to serve and protect humans). "Johnny Five," or "Number Five," is accidentally hit by lightning, which fries its circuits and brings it to sentience, much as an electrical storm animates Dr. Frankenstein's monster.[67] Number Five's designer, Dr. Newton Crosby, however, had no intention of creating life, and has a great deal of trouble believing Number Five's self-appointed protector, Stephanie Speck,[68] when she assures him that the unbelievable has happened. Life, in Number Five's case, is accidental, raising the question of what the relationship is between its existence and the electrical spark.[69] Number Five translates demands for "input," that is, a method of obtaining information with which it is already familiar, into a useful means of orienting itself to the world. Like Rhoda the Robot in the television series My Living Doll[70] and Vicki in Small Wonder,[71] Number Five discovers that the world of humans frequently makes no logical sense. Stephanie initially believes that Number Five is an alien, and shouts joyfully, "I knew they'd pick me!"[72] When Stephanie explains to Number Five that its[73] creator plans to take it back to the secret facility where it originated and "fix [its] programming," the robot understands the phrase as "disassemble," equivalent to "dead," and decides that it does not want to die. It embarks on a crusade to save its life and educate its creator on the subject of the very real claims of AI to self-determination. It quickly develops its own sense of purpose, which is diametrically opposed to its creator's original intention. Long after Number Five displays obvious signs of sentience, Crosby continues to insist that "it" is only a robot, thus still controllable. Indeed, Number Five's refrain, "Number Five is alive!" indicates that the robot is sentient. It knows its name, and it knows its condition: "alive" as opposed to "disassembled," the fate that awaits it back at the company.

We can examine Short Circuit as a film that redefines the question of what AI is and whether we should recognize its personhood. Short Circuit more clearly approximates the Singer definition of personhood than it does the Asimovian regime, even though some of the characters in the film believe that the robots in the film were developed using Asimov's Laws. It also references Clarke's test.

It can do so more easily than other films because it is a comedy; thus both its human and AI characters are less threatening than those in a drama would be. Number Five does not look like an android. It looks like a "cute" monster. It begins its journey to personhood as a weapon, and the military man Skroeder is the one character in the early part of the film who understands how dangerous it is. Crosby and his team try to track down Number Five using its own signaling devices, and send it instructions, including an order to power itself down, which it ignores, mystifying the scientists. Crosby's supervisor, Dr. Howard Marner, says, "How can it refuse to turn itself off?" Skroeder comes closer than anyone knows to identifying the problem when he suggests, "Maybe it's pissed off." Responds Crosby with misplaced self-assurance: "It's a machine, Skroeder. It doesn't get pissed off. It doesn't laugh at your jokes. It just runs programs."[74]

Once the group realizes that Number Five has really "gone rogue" (that is, moved beyond its creator's effective control and well on its way to developing free will)[75] and is unfortunately still armed (that is, able to defend itself), Skroeder again identifies the problem: "It could decide to blow away anything that moves." The use of the word "decide" is significant. Skroeder's anthropomorphizing of Number Five demonstrates that he understands instinctively that Number Five is possibly unpredictable, a characteristic both of out-of-control technology and of human beings, although he doesn't know why (that is, he hasn't done the analysis). Unknowingly, he senses the truth. Number Five is now a sentient being that can make decisions. Number Five has free will. Of course, both Skroeder and Marner assume that a piece of military technology would necessarily be destructive. That it could decide not to take any destructive action is not within their frame of reference. Why would a machine capable of destruction deliberately decide not to destroy? Human beings have trouble making that decision daily. Worries Marner, "What if it goes out and melts down a busload of nuns?" Frightened at the possibility of runaway technology,[76] which is bad for public relations if nothing else, the military decides to destroy Number Five. Snarls Skroeder, "Whatever it takes to put that stupid contraption out of commission, gentlemen, that's what you do."

Number Five also attacks to defend itself (when it makes and carries out a successful plan to counterattack the other robots sent to destroy it) and to defend Stephanie (when it attacks her ex-boyfriend), and we see it conceal itself successfully. Thus it demonstrates the ability to plan, the desire to exist and to continue its existence, and the wish to act autonomously, all part of Singer's test for personhood. Number Five has decided, however, to attack only when someone or something attacks it or someone it cares about. It has transformed itself from a one-dimensional weapon into a multi-dimensional being, which demonstrates its ability to make decisions about its future (to assert its right to self-determination), ultimately deciding to remain with Stephanie and Crosby as part of their "family." In one of the last scenes of the film, we also see Crosby telling Number Five a joke, and Number Five laughing heartily, thus fulfilling Arthur C. Clarke's ultimate test of computer consciousness.[77]

Number Five seems non-threatening to the audience because, although it does not outwardly resemble a human, it is friendly and eager to discover life and love. Its loyalty to Stephanie, and eventually to Crosby, resembles Data's faithfulness to his Starfleet comrades. Both Number Five's outward appearance and its behavior signal that it understands human norms, although even by the end of the film we are not certain that human norms truly govern Number Five's approach to life. After all, Stephanie and Crosby must rein in its enthusiasm for destruction, and we do not know whether, absent their presence, Number Five would carry out a destructive or fatal strike against Stephanie's mean-spirited boyfriend, for example, or against Skroeder. For Number Five, "disassembling" (death) is the ultimate evil. We are not sure that Number Five would give up its life for someone it doesn't care for.[78]

For other popular culture robots, "wiping clean" or physical destruction is the equivalent of Number Five's "disassembling." That humans might not choose to replace worn-out parts, or might choose to destroy or "reprogram" an android, and that robots that become sentient might object to such destruction or reprogramming, is a recurring theme in science fiction. For androids, reprogramming or destruction is the equivalent of death because it causes the destruction of the android "self." That is the choice Ted Rice makes when his robot wife becomes suddenly assertive in William F. Nolan's first published short story, The Joy of Living.[79] She is so perfect that Ted decides he simply can't stand her; only when she objects does he relent, discovering that assertiveness is actually attractive. "Men built me, gave me human impulses, human desires, put into me part of themselves, part of their own humanity…I feel a human hunger, a human thirst, a desire to be respected for myself, as I respect others, a desire to be loved as I love others."[80] "Death" for androids and robots is having their memories "wiped clean," often followed by reprogramming. If humans cannot reprogram androids, then they will destroy them.

Although Number Five’s affectionate feelings for Stephanie may not rise to the level of love, they are obviously an emotional attachment. However, the film Electric Dreams[81] explores what might happen if a computer and its owner fall in love with the same human female. The computer tries to derail the man’s relationship, but eventually commits “suicide” to take itself out of the picture. [82]

IV. Conclusion: Science-Fiction’s Tests for Sentience

Asimov provided science-fiction novelists and filmmakers with the foundations of a workable legal regime for the evaluation of robot behavior. His robot laws focus primarily on humans as the creators of AI. Such laws assume that, should AI actually acquire sentience, it would necessarily privilege human norms over AI norms in shaping its behavior. These assumptions are natural, but they fail to take into account increasing acceptance of human diversity and growing understanding of what "humanity" and "personhood" actually mean.

As writers and filmmakers have pushed the limits of creativity, they have sought different philosophies and legal regimes that allow them to explore viable claims for personhood and self-determination for AI constructs. In their artistic work, they have used AI as a proxy for minorities, the unrepresented, and the oppressed parties in society to explore legal issues implicating human rights. Increasingly, though, they are treating AI in its own right, in order to explore to what extent human norms, and more specifically Western norms, are effective, practical, or appropriate ones to use in deciding whether advocates can or should advance human rights claims on behalf of this new and emerging minority, an unrepresented and oppressed class.


Endnotes

[1] The Tyrell Corporation, the company that makes "replicants" in the film Blade Runner (Warner Brothers, 1982), uses the motto "More human than human." In the novel Do Androids Dream of Electric Sheep?, Philip K. Dick never uses the term "replicant," instead referring to the AI as "androids."

I presented portions of this essay at the Law and Society Association conference, New Orleans, LA, June 3, 2016. I thank the members of the panel, including Professors Michael Asimow and Peter Robson, for their helpful comments on the essay. I also wish to thank W. Chase Gore (LSU Law, 2017) for research assistance, Melanie Sims (LSU Law Center Library) for assistance in obtaining research materials, and Cynthia Virgillio for secretarial assistance.

[2] Richard C. Cadwallader Associate Professor of Law, Louisiana State University Law Center; Associate Professor of Women's and Gender Studies, Baton Rouge, Louisiana. This article is one in a series on rights talk in popular culture. See also Christine A. Corcos, Visits To a Small Planet: Rights Talk in Some Science Fiction Film and Television Series From the 1950s to the 1990s, (2009) 39 Stetson L. Rev. 183; "I Am Not a Number: I Am a Free Man!", (2001) 25 Legal Stud. F. 471-483; and Double Take: A Second Look at Law, Science Fiction and Cloning (with Corcos and Stockhoff), (1999) 59 La. L. Rev. 1041-1099. I also wish to thank Carolina De La Pena (LSU Law, 2017) and the staff of the LSU Law Center Library for research assistance with this essay.

[3] Ray Bradbury, I Sing the Body Electric!, in I Sing the Body Electric! and Other Stories (NY: William Morrow, 1998, reprint) at 115.

[4] Says one expert, "'I think it's a bad use of a human to spend 20 years of their life driving a truck back and forth across the United States…That's not what we aspire to do as humans — it's a bad use of a human brain — and automation and basic income is a development that will free us to do lots of incredible things that are more aligned with what it means to be human.'" Farhad Manjoo, A Plan In Case Robots Take the Jobs: Give Everyone a Paycheck, New York Times, March 2, 2016, at http://www.nytimes.com/2016/03/03/technology/plan-to-fight-robot-invasion-at-work-give-everyone-a-paycheck.html?emc=edit_th_20160303&nl=todaysheadlines&nlid=9584051&_r=1 (visited April 11, 2016). Published as Farhad Manjoo, A Plan In Case Robots Take the Jobs: Give Everyone a Basic Income, New York Times, March 3, 2016, at B1.

[5] F. Patrick Hubbard, "Do Androids Dream?": Personhood and Intellectual Artifacts, (2011) 83 Temp. L. Rev. 405, discusses the treatment of personhood in some sf novels, beginning with Mary Shelley's Frankenstein and Isaac Asimov's Robot novels, at 455-473. In this essay I discuss personhood only as it applies to manufactured beings, not in relation to humans or animals.

[6] Only rarely and recently do some theorists suggest that human norms, including sentience, have no relevance at all to AI. In the alternative, some argue that AI cannot by definition demonstrate sentience or independent action, because humans program all AI. See Alex Knapp, Should Artificial Intelligences Be Granted Civil Rights?, Forbes, Apr. 4, 2011, at http://www.forbes.com/sites/alexknapp/2011/04/04/should-artificial-intelligences-be-granted-civil-rights/#3851c0f877b2 (visited May 11, 2016). I am not discussing either of those positions in this essay; I am solely interested in the view, articulated in many sf novels and films, that human (and sometimes humanoid) norms of personhood, including sentience (however defined), should govern whether AI receives rights.

[7] See European Parliament, Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), Committee on Legal Affairs, published 31.5.2016, at 4/22.

[8] Asimov first published these laws in his short story "Runaround" (Astounding Science Fiction, 1942). He refers to them in the short story "Feminine Intuition," in The Bicentennial Man and Other Stories (NY: Doubleday, 1976), at 6. He later added a Zeroth Law, "A robot may not injure humanity or, through inaction, allow humanity to come to harm," in Robots and Empire (Doubleday, 1985). Note that the Zeroth Law could conflict dramatically with the First Law. Do I hear any requests for the development of "conflicts of law" in the laws-of-robotics sphere? Other "unified laws" have made their appearance, including Arthur C. Clarke's statements:

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong (Profiles of the Future, 1962); The only way to discover the limits of the possible is to go beyond them into the impossible (Report on Planet Three, 1972); and Any sufficiently advanced technology is indistinguishable from magic (Report on Planet Three, 1972).

For an examination of the implications of Asimov's laws on robotics development see http://www.beachbrowser.com/Archives/Science‑and‑Health/November‑2000/Asimovs‑Laws‑of‑Robotics.htm (last visited January 25, 2001) and http://www.cni.org/pub/LITA/Think/Bailey.html (January 25, 2001).

[10] See Neil Frude, The Robot Heritage (London: Portland House, 1984) at 98. At least one legal scholar recognized years ago that we might have to deal with the question of personhood and robot rights sooner rather than later. See Dennis J. Tuchler, Man-Made Man and the Law, (1978-1979) 22 Saint Louis University Law Journal 310 at 320-325. For an up-to-date overview of robotics and law, see Ryan Calo, A. Michael Froomkin & Ian Kerr, eds., Robot Law (Edward Elgar Publishing, 2016).

"The Zeroth Law is: 'A robot may not injure humanity, or, through inaction, allow humanity to come to harm.' This automatically means that the First Law must be modified to be: 'A robot may not injure a human being, or, through inaction, allow a human being to come to harm, except where that would conflict with the Zeroth Law.' And similar modifications must be made in the Second and Third Laws." See Isaac Asimov, Foundation and Earth (Garden City: NY, Doubleday, 1986) at 347.

[11] HAL would, I think, make the argument that he has every right to defend himself, an argument that the astronauts and the government would deny. But the point is that if HAL is sentient, then HAL might have exactly that right.

[12] Certainly that is the concern in the Star Trek: The Next Generation episodes Measure of a Man and The Offspring.

[13] Turing, A. M. (1950) Computing machinery and intelligence, Mind 59: 433-460. See also "The Turing Test," in Stanford Encyclopedia of Philosophy, at http://plato.stanford.edu/entries/turing-test/ (visited April 13, 2016).

[14] In 2003, Wave Magazine “gave” the Voigt-Kampff test to the six candidates running for Mayor of San Francisco. According to the magazine, the eventual winner, Gavin Newsom, is a Replicant. See Charlie Jane Anders, When a Newspaper Gave Blade Runner’s Replicant Test to Mayor Candidates, io9, Feb. 23, 2015, at http://io9.gizmodo.com/when-a-newspaper-gave-blade-runners-replicant-test-to-m-1687558534 (visited April 17, 2016).

[15] http://www.wirednews.com/wired/archive//5.01/ffclark.html?person=marvin_minsky&topic_set=wiredpeople (last visited January 25, 2001). The study of sarcasm detection is fairly new but already fairly deep. See for example Erik Forslid and Niklas Wiken, Automatic irony- and sarcasm detection in social media, at http://uu.diva-portal.org/smash/get/diva2:852975/FULLTEXT01.pdf (visited Feb. 11, 2016).

[16]  Isaac Asimov, The Bicentennial Man, in The Bicentennial Man and Other Stories (NY: Perennial Books, 1976)  at 38-39.

[17] John Hymers, Not a Modest Proposal, (1999) 6(2) Ethical Perspectives 126-138 at 127.

[18] The Committee on Legal Affairs of the European Parliament, responsible for making recommendations on the issue of what rules should govern robots, seems to have begun with the notion that Asimov's laws are still of importance, although it recognizes that those rules are "directed at the designers, producers and operators of robots, since those laws cannot be converted into machine code." European Parliament, Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), Committee on Legal Affairs, published 31.5.2016, at 4/22.

[19] Singer explicitly uses his test in work designed to argue for animal rights. See e.g. Peter Singer, Animal Liberation: A New Ethics for our Treatment of Animals (Random House, 1975).

[20] Airdate February 11, 1989.

[21] Robots may already have passed the “self-awareness” test. See Duncan Geere, Uh-oh, a Robot Just Passed the Self-Awareness Test, TechRadar, July 16, 2015, at http://www.techradar.com/news/world-of-tech/uh-oh-this-robot-just-passed-the-self-awareness-test-1299362 (visited April 11, 2016).

[22] Airdate March 12, 1990.

[23] Although we should have. Robots that do not look like humans are not as cuddly. See for example Heather Knight, How Humans Respond to Robots: Building Public Policy Through Good Design, at http://www.brookings.edu/research/reports2/2014/07/how-humans-respond-to-robots (visited June 20, 2016).

[24] Legal scholars have already begun the discussion. See F. Patrick Hubbard, "Do Androids Dream?": Personhood and Intellectual Artifacts, (2011) 83 Temp. L. Rev. 405; Andrea Roth, Trial by Machine, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2743800 (forthcoming in Georgetown Law Journal, 2016); Lawrence B. Solum, Legal Personhood For Artificial Intelligences, (1992) 70 N.C.L. Rev. 1231.

[25] See Frude (n 10) at 87-95, for an overview of the Asimovian legal regime.

[26] I would suggest, however, that our fear of computers does not derive simply from a belief that computers control far too much information and power in today's society. While we certainly distrust computers because of their power and, to some extent, their inscrutability (ten-year-olds are currently much more likely than their parents to understand the workings of the machine, and consequently much less likely to fear it), we also distrust them because we created them. They seem to be Frankenstein's monster, or the Sorcerer's Apprentice, or ethereal creatures out of Pandora's Box. They are technology, they are servants, they are machines. They ought not to be our equals, and they are certainly not supposed to be our superiors in terms of what matters, that is, what makes us human: real intelligence, emotion, judgment, creativity, sentience, and self-awareness. That technology has become the tail that wags the dog is a problem that we must acknowledge. See Johnson, Shea, and Holloway, The Role of Trust and Interaction in GPS Related Accidents: A Human Factors Safety Assessment of the Global Positioning System (GPS) (http://www.dcs.gla.ac.uk/~johnson/papers/GPS/Johnson_Shea_Holloway_GPS.pdf) (visited May 21, 2016).

[27] Before 1800 we generally tended to believe that technology, as differentiated from magic, could only be helpful. Edward Tenner, Why Things Bite Back: Technology and the Revenge of Unintended Consequences (Vintage, 1997), at 10. The "ghosts in the machine" feared by Cornishmen, who started this legend, were until then considered to be spirits of dead miners, called "tommyknockers." Id. The time was right for the first appearance of a genuinely technological horror story, Mary Shelley's Frankenstein, Or the Modern Prometheus (1818), literature in which a creation turns on its creator with unintended consequences. Until Shelley gave the idea literary shape, it dwelt in the area of myth with the legends of Prometheus and Pandora. Frankenstein articulates, in ways that arguably have never been bettered, the danger of playing God. See also Patricia A. Neal, Mary Shelley's Frankenstein: Myth for Modern Man, at http://mural.uv.es/lolevyba/articleaboutms14.htm (last visited June 29, 2016). However, consider the reaction of those who saw technology as a threat to job security. For the impact of the "new Luddites" see Kirkpatrick Sale, Setting Limits on Technology, The Nation 785 (June 5, 1995) and Kirkpatrick Sale, Rebels Against the Future: The Luddites and Their War on the Industrial Revolution: Lessons for the Computer Age (Addison-Wesley Publishing Company, 1995).

[28] We still use robots for many menial tasks. See 10 Robots With Very Specific Tasks, Mental Floss, http://mentalfloss.com/article/30898/10-robots-very-specific-tasks (visited March 3, 2016) (discussing the use of robots to cut hair, shoot pool, swim, assemble buildings, pick up dog excrement, prepare food, assist with health care, and help with hospice care). Some economists are already planning for the great robotic takeover of the economy. See Farhad Manjoo, A Plan In Case Robots Take the Jobs: Give Everyone a Paycheck, New York Times, March 2, 2016, at http://www.nytimes.com/2016/03/03/technology/plan-to-fight-robot-invasion-at-work-give-everyone-a-paycheck.html?emc=edit_th_20160303&nl=todaysheadlines&nlid=9584051&_r=1 (visited April 11, 2016). Published as A Plan In Case Robots Take the Jobs: Give Everyone a Basic Income, New York Times, March 3, 2016, at B1.

[29] On the history of computers, see Mark Frauenfelder, The Computer: An Illustrated History From Its Origins to the Present Day (Carleton Books, 2013).

[30] Today's robots now perform much more sophisticated and specialized tasks. See Catey Hill, 10 Jobs Robots Already Do Better Than You, MarketWatch, Dec. 4, 2015, at http://www.marketwatch.com/story/9-jobs-robots-already-do-better-than-you-2014-01-27 (visited March 3, 2016) (discussing the use of robots, inter alia, as stockroom attendants, bartenders (as in Short Circuit, infra), soldiers, pharmacists, agricultural workers, bomb detectors, journalists, cleaning personnel, and legal assistants). We also seem willing to turn over a number of tasks to machines because we believe they function more effectively than we do at specific tasks, like evaluating truthfulness. See Andrea Roth, Trial by Machine, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2743800 (forthcoming in Georgetown Law Journal, 2016).

[31] The literature on this and related topics is enormous. See for example Surfer Beware: Personal Privacy and the Internet, at http://www.epic.org/reports/surfer‑beware.html (last visited August 9, 2001). At least one commentator acknowledged the iron grip that computers have on us today in the title of his book. Gregory J. E. Rawlins, Slaves of the Machine: The Quickening of Computer Technology (Cambridge: MIT Press, 1997).

[32] Karel Capek, R.U.R.: Rossum's Universal Robots (Prague: Aventinum, 1920) (trans. Claudia Novack-Jones, reprint Penguin 2004). Capek invented the word "robot" to describe the characters in his play, deriving it from the Czech robota, "forced labor, work[.]" A "robot" is "a machine that looks like a human being and performs dull or dangerous work." Merriam Webster Student Dictionary, at http://www.wordcentral.com/cgi-bin/student?robot.

[33] (Universal Pictures, 1970).

[34] Id. Compare with War Games (United Artists, 1983), in which "Joshua," the computer, seems ready to destroy the world but is actually benignly "playing games" with its human opponents. Like a large, powerful toddler, in the beginning it doesn't know its own strength. It eventually learns the outcome of the "no-win" scenario and at the end of the film offers to play "a nice game of chess." Similarly, in Westworld (MGM, 1973), the androids that populate an amusement park malfunction, killing visitors and employees until one of the visitors (who is, significantly, a lawyer) defeats the most powerful of the androids, bringing an end to the destruction.

[35] One could argue again that such AI programmed by humans is only replicating the human patterns of its creator, whether “good” or “bad” (see for example the Star Trek episode The Ultimate Computer, first broadcast March 8, 1968). But once Colossus and Guardian carry out behavior in concert, we might reasonably wonder whether their own “will” rather than human programming has taken over. See also the Star Trek episode The Return of the Archons (first broadcast February 9, 1967), in which a computer dictates the rules of a society based on the rules of a long dead humanoid.

One philosopher argues that sentient AI programmed by humans would owe equal rights to one another, and it seems that Colossus and Guardian have at a minimum come to a certain kind of cooperative understanding. See Hutan Ashrafian, Intelligent Robots Must Uphold Human Rights, Scientific American, March 30, 2015, at http://www.scientificamerican.com/article/intelligent-robots-must-uphold-human-rights1/ (first published in Nature).

[36] “A fictional or hypothetical person whose physical abilities are extended beyond normal human limitations by mechanical elements built into the body.” Oxford Dictionary, at http://www.oxforddictionaries.com/us/definition/american_english/cyborg (visited April 12, 2016).

[37] Will Knight, Robot See, Robot Do: How Robots Can Learn New Tasks By Observing, MIT Technology Review, https://www.technologyreview.com/s/541871/robot-see-robot-do-how-robots-can-learn-new-tasks-by-observing/ (visited March 3, 2016).

[38] Currently the scientific/engineering definition of what constitutes a robot is under lively discussion. See http://www.ite.his.se/ite/research/automation/course/definit.htm.

[39] See, simply as examples, the television episode "Kill Switch," in the U.S. series The X-Files, written by William Gibson and Tom Maddox and broadcast Feb. 15, 1998 (Fox), and the films A.I. (DreamWorks, 2001) and Ex Machina (Universal, 2015).

[40] Lolita K. Buckner Inniss discusses this issue at some length with respect to the android Andrew Martin in her article Bicentennial Man – The New Millennium Assimilationism and the Foreigner Among Us, (2002) 54 Rutgers L. Rev. 1101.

[41] The lack of control viewers feel is expressed in such films as Falling Down (Warner Bros., 1993), in which a man stuck on the corporate ladder finally takes revenge on the world, and the heist film Fun With Dick and Jane (Columbia Pictures, 1977), in which two yuppies get their revenge on financial institutions, other members of the middle class, and the legal system. See also Christine A. Corcos, "Who Ya Gonna C(S)ite?" Ghostbusters and the Environmental Regulation Debate, (1998) 13 J. Land Use & Envt'l L. 231.

[42] Some would argue that IBM's Deep Blue, which defeated chess champion Garry Kasparov, was clever enough to be a real thinking machine. Other creations, including Google's AlphaGo, seem to be at a minimum the next generation of AI, capable of far more amazing accomplishments than Deep Blue or than IBM's later system, "Watson." See Cade Metz, Google's AI Is About to Battle a Go Champion, But This Is No Game, Wired Mag., March 8, 2016, at http://www.wired.com/2016/03/googles-ai-taking-one-worlds-top-go-players/ (visited March 22, 2016).

Thinking machines have in the past been both the stuff of science fiction and an epithet hurled at humans who seemed to have too much intelligence, thus too little emotion. A good example is Jacques Futrelle's sleuth S. F. X. Van Dusen, dubbed "the Thinking Machine," and hero of many good "locked room" puzzles. The Georgia-born Futrelle died in the sinking of the Titanic. An official website, http://www.thinkingmachine.com/, features the full text of many of Futrelle's stories. Jacques Futrelle, Jacques Futrelle's "The Thinking Machine": The Enigmatic Problems of Prof. Augustus S. F. X. Van Dusen, Ph.D., F.R.S., M.D., M.D.S. (Modern Library Classics, 2003). On Deep Blue and its successful challenge to Kasparov, see http://www.research.ibm.com/deepblue/ (last checked July 28, 2016).

[43] This is the point that James T. Kirk hammers home to Nomad in the episode "The Changeling." Airdate September 29, 1967.

[44] (20th Century Fox, 1951). In the original short story, Gnut (the robot-like alien) tells the narrator, "You misunderstand. I am the master." See Harry Bates, "Farewell to the Master," in Isaac Asimov and Martin H. Greenberg, eds., Isaac Asimov Presents The Golden Years of Science Fiction (1979).

[45] (CBS, 1965-1968). Such robots also appeared in cartoons. The maid Rosie the Robot from The Jetsons (ABC, 1962-1964) is one such example.

[46] (n 16).

[47](Warner Brothers, 1982).

[48] (TriStar Pictures, 1986).

[49] R2D2 and C3PO (Lucasfilm, 1977).

[50] (Universal Pictures, 1970).

[51] (United Artists, 1983).

[52] Fox (first broadcast Feb. 15, 1998).

[53] Stephen Coleman and Richard Hanley argue that by the end of the film Bicentennial Man, Andrew has achieved personhood. See Stephen Coleman and Richard Hanley, Homo Sapiens, Robots, and Persons in I, Robot and Bicentennial Man, in Sandra Shapshay (ed), Bioethics at the Movies 44, 50-51 (Johns Hopkins University Press, 2009).

Similarly, Star Trek: TNG treats the android Commander Data's search for a sense of humor as amusing, but the implication is clear: humor is human, therefore desirable. See, for example, the episode The Outrageous Okona.

[54] For an insightful discussion of this film see Buckner Inniss (n 40). Keith Aoki also discusses Bicentennial Man briefly in his essay One Hundred Light Years of Solitude: The Alternate Futures of LatCrit Theory, (2002) 54 Rutgers L. Rev. 1031.

[55] Isaac Asimov, The Bicentennial Man, in The Bicentennial Man and Other Stories, at http://playpen.meraka.csir.co.za/~acdc/education/Dr_Anvind_Gupa/Learners_Library_7_March_2007/Resources/books/asimov%20ebook.pdf (visited March 3, 2016).

[56] Id. at 166-167.

[57] Isaac Asimov did, however, invent the term “positronic.”

[58] Cited at the website Something Awful, Nov. 22, 2013 (http://www.somethingawful.com/news/blade-runner-speech/1/) (visited April 12, 2016). The post, interestingly, appeared exactly 50 years after the assassination of John Fitzgerald Kennedy.

[59] Dick does not use the word "replicant" in Do Androids Dream of Electric Sheep?, but the film Blade Runner uses the term to refer to the androids in the film. In a fascinating essay, Joseph Francavilla reflects on the role that replicants play as doubles for the human characters in Blade Runner. The Android as Doppelganger, in Judith B. Kerman (ed), Retrofitting Blade Runner 4 (2d ed., University of Wisconsin Press, 1997).

[60] Blade Runner, cited in Christine A. Corcos, “I Am Not a Number: I am a Free Man”: Physical and Psychological Imprisonment in Science Fiction, (2001) 25 Legal Studies Forum 471, 482.

[61] (Orion Pictures, 1987).

[62] (MCA Television Entertainment, 1994).

[63] (Columbia Pictures, 1999). Based on Isaac Asimov, The Bicentennial Man, in The Bicentennial Man and Other Stories (NY: Perennial Books, 1976). Sue Short discusses the debt that the Star Trek: TNG episode "Measure of a Man" owes to "The Bicentennial Man" in "The Measure of Man?" Asimov's Bicentennial Man, Star Trek's Data, and Being Human, (2003) 44 Extrapolation 209-223.

[64] Likewise, nearly everyone who has seen Star Wars adores tubby little R2D2, while C3PO’s officiousness sometimes outweighs his charms.

[65] J. P. Telotte notes in passing some analogies between Short Circuit's Number 5 and Frankenstein's monster in Replications: A Robotic History of the Science Fiction Film (Urbana: University of Illinois Press, 1995) at 20. "Films like Robocop and Short Circuit…go a step further, demonstrating the possibility for a kind of hybrid life, a ghostly otherness that is part human, part machine, a synthetic life that does not impinge on our own." Telotte, at 20. See also J. P. Telotte, The Tremulous Public Body: Robots, Change, and the Science Fiction Film, (1991) 19(1) Journal of Popular Film and Television 14-23.

[66] Its acronym is S.A.I.N.T. (for Strategic Artificially Intelligent Nuclear Transport), an attempt by the military to sanitize its killing function. Calling a "killer robot" an S.A.I.N.T. associates the device with a religious, pacifist, and non-military mission by co-opting the meaning of the acronym, regardless of the fact that some Catholic saints were actually warlike. Using the acronym also undercuts the notion that saints are by nature non-militaristic, especially if they are female. The transformation of the character "Number Five" from militaristic robot to pacifist sentient being is, interestingly, analogous to the transformation of Joan of Arc from quiet female peasant to militarist feminist leader. See Marina Warner, Joan of Arc: The Image of Female Heroism (Oxford University Press, 1981, repr. 2013). Number 5's transformation is his second: note that he began as a "marital aid," as inventor Newton Crosby (Steve Guttenberg) says. The word "marital," one can note, uses the same letters as "martial."

Killer robots already seem to be a concern. See John Markoff and Claire Cain Miller, As Robotics Advances, Worries of Killer Robots Rise, The New York Times, June 17, 2014, at http://www.nytimes.com/2014/06/17/upshot/danger-robots-working.html?emc=edit_th_20140617&nl=todaysheadlines&nlid=9584051&_r=0 (visited June 17, 2014).

[67] See Mary Shelley, Frankenstein (n 27).

[68] Played by Ally Sheedy, of WarGames fame (MGM, 1983), directed by John Badham. Sheedy is also known for a short book about Elizabeth I that she wrote as a youngster, called She Was Nice to Mice (McGraw-Hill Companies, 1975). Badham also directed Short Circuit (n 48).

[69] Compare Number 5's "birth" with Stanley Miller's creation of protolife in the laboratory in 1953. In the experiment, generally referred to as the Miller-Urey experiment, Miller used inorganic elements and an electrical spark fired through primordial ooze to create amino acids, then theorized that such an event might have triggered protolife on Earth millions of years ago. See Stanley L. Miller, Production of Amino Acids Under Possible Primitive Earth Conditions, (1953) 117 (3046) Science 528-529.

[70] This Chertok Productions 1964-1965 series featured Julie Newmar as "the Robot," aka "Rhoda Miller," and Bob Cummings as Dr. Bob McDonald, her minder.

[71] (20th Century Fox, 1985-1989). Ted Lawson (Dick Christie) created Vicki as a companion for his nuclear family. She was an object of great curiosity for the neighborhood.

[72] She is immensely upset to discover that Number Five is a "robot," and not only that, but a weapon. This scene recalls films like Colossus: The Forbin Project (Universal, 1970) and War Games (1983). In both of those films, as in several others, AI serves a destructive purpose, the better to send an ultimately pacifist message.

[73] Number Five, frankly, acts male. His voice is male, though high-pitched and robotic, and his crush on Stephanie is obvious. He eventually renames himself "Johnny Five." Tim Blaney provides his voice (IMDB.com at http://www.imdb.com/title/tt0091949/ (visited March 3, 2016)). Compare Number Five's crush, and his acceptance of his rival Newton Crosby's attraction to Stephanie, with the behavior of the computer in Electric Dreams (MGM, 1984), which attempts to out-maneuver and murder its human rival. It eventually commits suicide.

[74] "It just runs programs" is a favorite rejoinder for Crosby and his friend Ben. Unfortunately for them, in Number Five's case they are no longer correct. In addition, the assumption that "it doesn't laugh at your jokes," a test that determines the difference between computers and humans, no longer applies by the end of the film. See infra.

[75] The notion that a powerful man-made machine could be loosed upon the universe with no effective control, whether through mistake, accident, or intent, is also common in science fiction. See the Star Trek: TOS episodes The Doomsday Machine (in which a powerful destructive weapon, a stand-in for the atomic and hydrogen bombs, is accidentally turned loose in a distant galaxy), The Changeling (in which an alien probe damages an Earth probe, and the Earth probe as a result misunderstands its programming, deciding to look for "perfect life" rather than "new life"), and The Ultimate Computer (a highly developed computer goes berserk during war games and destroys friendly vessels); the first Star Trek movie, essentially a retelling of the "Nomad" episode; Colossus: The Forbin Project (a U. S. computer and a Soviet computer combine forces to take over the world); and WarGames (a NORAD computer plays war games with the U. S. military, which doesn't understand it is "just playing").

[76] On technologically controlled society in general see Gorman Beauchamp, Technology in the Dystopian Novel, Mod. Fiction Stud. 53 (Spring 1986). On the dehumanizing aspects of dystopias, particularly in legal contexts, see also Jean-Luc Godard's Alphaville (Athos Films, 1965). Chris Darke, "It All Happened in Paris," Sight and Sound (July 1994) at 10, compares Alphaville and Blade Runner on this issue.

[77] Supra n. 15 and accompanying text.

[78] Note that the two situations are not entirely the same. Roy Batty has a limited lifespan. As far as we know, Number Five does not.

[79] William F. Nolan, “The Joy of Living,” in If (Quinn Publishing Company, 1954).

[80] Frude (n 10) at 118-119. Note also that the notion of "sex robots" or "sexbots" has been around for much longer than we may care to admit. As researcher Genevieve Liveley notes, consider Pygmalion's entreaty to the goddess Venus to make his statue real. See Why Sex Robots Are Ancient History, The Conversation, May 4, 2016, at https://theconversation.com/why-sex-robots-are-ancient-history-58112 (visited May 5, 2016).

[81](Virgin, 1984).

[82] Note that the android in Ex Machina (Film4 and DNA Pictures, 2015) exhibits voluntariness and emotion: it murders one human and abandons another in order to pursue its existence.
