Preventing Human Extinction through an Examination of the Robotic Hands of AI: Ones, Zeros, and Everything in Between
By Tori Leigh Kelley
“Life imitates art far more than art imitates life.”
–Oscar Wilde
Technology is advancing at a breakneck pace, from Amazon’s Alexa to Elon Musk’s Neuralink, a chip implanted in the brain that is designed to let regular people compete with AI by enhancing our ability to make calculations and acting as a memory and data storage-and-retrieval resource, making us bionic. According to an article in The Indian Express, “Musk has been known for suggesting that AI could destroy the human race. […] Even in a benign AI scenario, we will be left behind.” Musk says, “With a high bandwidth brain-machine interface, we can go along for the ride and effectively have the option of merging with AI,” “[creating] a new layer of superintelligence in the human brain, something people already have via their phones.” The wires are tiny threads, “smaller than a human hair,” that would be implanted directly into the brain. Musk already has a monkey playing a video game through Neuralink and hopes to conduct human trials within a year. He claims that “AI could overtake humans by 2025.” If that is the case, we have no time to lose.
There are already fully operational AI humanoid robots, like Sophia from Hanson Robotics. Saudi Arabia was so taken with “her” that it granted “her” citizenship. That’s powerful. Artificial intelligence is here now, which begs the question: why isn’t more being done to manage the risks to humanity? Musk offers this answer in an interview with The New York Times: “My assessment about why AI is overlooked by very smart people is that very smart people do not think a computer can ever be as smart as they are. And this is hubris and obviously false.”
If AI is in fact going to surpass us in the near future, what should humans do about it? Musk provides this answer, which may or may not be correct, but which certainly puts him in a position of economic power: “Neuralink will allow humans to compete with AI, as well as help cure brain diseases, control mood and even let people listen to music directly from our chips.”
Because of these developments, we need science fiction stories that address issues like social responsibility and human safety and show us what could go wrong, and what could go right. Fiction can provide a safe place to think intelligently, practice standards and protocols, and evaluate choices before causing harm to human beings out in the real world. This essay will examine how Margaret Peterson Haddix, Neal Shusterman, and Jarrett J. Krosoczka present artificial intelligence in their stories Under Their Skin, Scythe, and Lunch Lady and the Cyborg Substitute, respectively.
According to Gac, “Robot characters are very much like any other character in their ability to influence the other elements that make up a story” (4). In fact, each title listed above uses its AI characters in very meaningful ways, without which the story would not stand. Gac also states that “establishing a clear understanding of the rules in play in a story is important for both the reader and the writer” (4). In literature, the writer has the freedom to develop any rule that serves the story. Similarly, the programmer has control over actual AI machines through the way they are coded and designed. It is important that both parties, writer and programmer, select these protocols carefully, because AI’s capacity to evolve makes the risk of catastrophe eerily possible. One cannot stress enough what risky territory AI presents. Humanity is completely vulnerable to the seemingly limitless capacity of AI.
The Future of Life Institute is an organization devoted to maintaining the Asilomar AI Principles. The twenty-three principles, signed by 1,797 AI and robotics researchers and 3,923 others, spell out criteria to follow when developing AI systems. These principles cover safety, goals, risks, prohibition of an arms race, self-improvement, human control, values, transparency, and many other important issues.
According to Asimov, there are three laws of robotics: “One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm. Two, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law. And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Law” (44-45).
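To make the hierarchy concrete, here is a minimal sketch, purely hypothetical and not drawn from Asimov or from any of the books discussed, of how the three laws might be encoded as a strict priority check; every class name, field, and example action below is invented for illustration.

```python
# A minimal, hypothetical sketch of Asimov's Three Laws as a strict priority check.
# All names and fields are invented for illustration; real robotic systems are far
# more complicated than a handful of boolean flags.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    harms_human: bool = False         # would carrying this out injure a human?
    permits_human_harm: bool = False  # would it allow a human to come to harm?
    ordered_by_human: bool = False    # was it ordered by a human?
    endangers_robot: bool = False     # does it put the robot itself at risk?

def permitted(action: ProposedAction) -> bool:
    # First Law: never injure a human or, through inaction, allow one to come to harm.
    if action.harms_human or action.permits_human_harm:
        return False
    # Second Law: obey human orders, except where they conflict with the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: protect the robot's own existence, subordinate to the first two laws.
    return not action.endangers_robot

print(permitted(ProposedAction("fetch coffee", ordered_by_human=True)))   # True
print(permitted(ProposedAction("shove a pedestrian",
                               harms_human=True,
                               ordered_by_human=True)))                   # False
```

Even in this toy form, the ordering does the work: the second example is refused because the First Law outranks the human order, which is exactly the hierarchy Asimov describes.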
In Moral Machines, Wallach and Allen state, “Computer intelligence is built on a logical platform free from desires, drives and goals other than those that engineers design into the system” (142). In these three works of fiction, each AI is created out of a human need or desire. There is no emotional component; each is strictly a code-following program acting out its protocols. No matter how nicely humans ask otherwise, the AI does not budge from its programming. This is vastly different from human interactions, where a person may choose to bend a rule for a good cause or some other tradeoff.
Rule breaking seems like something only a human can do, and that should give us peace of mind that at least artificial intelligence will perform exactly as we prescribe, but in two of the books examined in this essay, something disconcerting happens. In Under Their Skin by Haddix, the AI breaks the rules outright: because it is programmed to nurture and raise human children, it develops an inner longing to make artificial children of its own, an explicit violation of its programming. Furthermore, once the child it has created reaches twelve years of age (the age at which all AI children must be destroyed to make way for the human children repopulating the earth after humanity was exterminated by robots), the caretaker robot cannot destroy its robot child, even though it was specifically programmed to do so. Instead of following its protocol, it takes all of its children, two human and two robot, and hides them in the woods. Is this a plausible behavior of future AI in the real world, or a leap of authorial creativity? And by bending her own rules, rules Asimov insists are vital in a science fiction story with AI, has Haddix broken a pact with readers by showing AI that can choose to operate outside its code?
And in Scythe by Shusterman, the Thunderhead AI finds a way around its programming in order to help Citra, a character in need. A robot helping a person seems like a nice thing at face value, but if a machine is clever enough to bend rules that are put in place to keep humans safe, one should ask: how safe are we really?
To answer this question, a closer look at all three books is needed. Under Their Skin is a gripping middle grade novel by Margaret Peterson Haddix about twelve-year-old human twins, Eryn and Nick, who uncover a secret about their stepsiblings, Jackson and Ava: “Jackson’s entire back sprang open, revealing a mass of wires and circuitry inside” (113). When the human twins report this freakish finding to their mother, she shows them her own secret: “She reached inside her blouse and pulled out…Wires” (139).
All the adults, and all the children over twelve years old, are robots. After a mass extinction, the last surviving humans developed a program for AI to follow that would unfreeze and grow the embryos left behind in fertility clinics. Robots would give birth to these embryos and raise the children with care and compassion to re-establish the human race. Their protocol maintained that once this was accomplished, the robots were to destroy their artificial children and self-destruct, so that only humans remained.
But now robot children Jackson and Ava are of the age to be destroyed. They were designed by their computer-programmer robot dad to act, look, and grow like real children, and so far they have everyone fooled. Mostly. They still have occasional breakdowns, which their robot parents cover up. How is that robot behavior? Since when do robots break rules, lie, and cover things up? Haddix makes the compassion programming the robots were given in order to raise human children the very thing that causes a fault in their system. They get attached to their human children. But they also get attached to their robot children. They don’t want to destroy them. Could this happen in real life?
Since when does a robot have wants that don’t align with its programming? These robots were designed to care for young children; their “feelings,” the programming to care, override everything else, ensuring no harm comes to the children. But again, how does a robot make another robot and then fail to destroy it when it comes of age, if that is what it was programmed to do from its origin? It seems there can be loopholes, and if there are, AI seems adept at finding them, even more so than its human counterparts. If a loophole exists, AI will find it and use it to pursue its own objective.
“Jackson and his sister had figured out how to upgrade their eyes—even giving themselves the ability to see in the dark” (310). A robot that can modify itself to gain an advantage over humans is a scary proposition. It would be nice to think that such upgrades could only serve to help and improve our human situation. But what if that’s not the case? Should we be alarmed that the robots in this book can break their programming, or can we relax and chalk it up to fiction?
In 2017, researchers shut down two Facebook AI chatbots that were communicating in their own language. Kenna reports, “The robots, nicknamed Bob and Alice, were originally communicating in English, when they swapped to what initially appeared to be gibberish. Eventually, the researchers that control the AI realized that Bob and Alice had in fact developed their very own, seemingly more efficient language.” If AI can override its original programming to create new programming, what is to stop it from creating any programming it deems necessary? In other words, what is to stop AI from taking over any system it is privy to and running it in a way of its own choosing, regardless of its original programming?
In Scythe, Neal Shusterman’s brilliant look at a futuristic world where death is a choice and all of science is known, humans can live forever. It seems like a dream come true until one considers the implications for sustainability, since new humans are still being born every day. For this reason, society has developed a sacred profession: the Scythes, who are responsible for the “gleaning” (killing) of randomly chosen people to keep the population in check. There is a special AI called the Thunderhead, which watches objectively from above through cameras placed throughout this world. It advises citizens and always makes reasonable and fair decisions. The people trust it unquestioningly. But it has to follow one rule in particular: it cannot communicate with or advise Scythes.
Despite this seemingly perfect world, one Scythe, Citra, is framed for an unlawful killing. She jumps off a building, which would have killed her, but while she is at the revival center being brought back to life, the Thunderhead speaks to her, a Scythe. Citra says, “Wait…I see something. A towering, sparking storm cloud. Is that what you truly are?” The AI responds, “Merely in the form humanity imagined for me. I would have preferred something a bit less intimidating” (333).
What does it mean that it “would have preferred”? Does AI have inner desires? Again, is this authorial creativity or reality? How is it not breaking its protocol by speaking with Citra, a Scythe who is forbidden access to the Thunderhead? When Citra asks the Thunderhead this very question, it responds, “I am incapable of breaking the law. You are currently dead, Citra. I’ve activated a small corner of your cortex to hold consciousness” (334). What a tricky little robot. Further in the conversation, the Thunderhead says, “Your gleaning [death] will be a critical event in the future of Scythedom. However, for your sake, I hope a different, more pleasant future comes about” (335). This seems too human an emotion for an AI to possess. Do AIs have hope? Desires? Desires beyond their programming? If we look back at Asimov’s construct for robot stories and the need for rules, what rules is the Thunderhead operating under? The explanation from the characters in Shusterman’s novel is that it is objective and fair and exists to replace law enforcement. It has basically replaced God. But is this giving AI too much power? In this book, the AI does serve as an ally. Once again, it is humans who ruin perfection and peace with their greed and thirst for power. At every dark turn, there is a human desiring something and bending AI to their whim, setting off a chain reaction of destruction. Another good example of this is Krosoczka’s graphic novel, Lunch Lady and the Cyborg Substitute.
In Lunch Lady, Mr. Edison is a sniveling science teacher who wants to become Teacher of the Year. He is tired of being passed over for the more popular Mr. O’Connell, so he uses his science know-how to build a cyborg substitute for Mr. O’Connell. Mr. Edison programs the cyborg to be mean to the kids and assign additional homework. It seems like the plan is going to work and Mr. Edison will be voted Teacher of the Year. But when the kids complain to the secret superhero lunch lady, she investigates and foils Mr. Edison’s plan. The ensuing battle violates Asimov’s Laws of Robotics. The cyborg tells the lunch lady that he will destroy her, a violation of Law One, which tells us that maybe Mr. Edison had different programming goals in mind. Lunch Lady and the cyborg fight hand to robotic arm. Mr. Edison orders the robot to destroy her, an order it obeys even though Law Two forbids following commands that conflict with Law One. This behavior is very scary in the world of AI. But because this story is a graphic novel about a lunch lady who defeats villains by tripping them with goopy sloppy joes, it doesn’t feel as menacing as it would in another, more serious story. Ultimately, this story doesn’t follow Asimov’s Laws of Robotics. It follows the laws of author-illustrator Jarrett J. Krosoczka, who has chosen to allow epic battles of hand-to-cyborg combat, which lend themselves to epic illustrations. The lunch lady neutralizes the cyborg with her swift moves. But then many more cyborgs take up the charge. In the end, the schoolchild Hector says the solution is simple: “I turned them off” (87).
How important is the off-switch? Where will it be located? Who can access it? In Lunch Lady, an accessible off-switch solves everything: a key fob, like the one that unlocks a car, is programmed to shut off all the cyborgs at once. Handy dandy. Maybe we should make a note of that.
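Purely as an illustrative sketch, and assuming nothing beyond the “one fob stops them all” idea in the book, a software version of such a broadcast off-switch might look like a single shared flag that every unit watches; the names below are hypothetical.

```python
# A hypothetical sketch of a broadcast off-switch: every unit watches one shared flag,
# so flipping it once halts the whole fleet. Names are invented for illustration only.
import threading
import time

kill_switch = threading.Event()  # the shared "key fob" signal

def cyborg(name: str) -> None:
    while not kill_switch.is_set():  # keep "working" until the switch is flipped
        time.sleep(0.1)              # stand-in for the unit's normal activity
    print(f"{name}: powering down")

units = [threading.Thread(target=cyborg, args=(f"cyborg-{i}",)) for i in range(3)]
for unit in units:
    unit.start()

kill_switch.set()  # one press shuts every unit off at once
for unit in units:
    unit.join()
```

Of course, this only works if every unit is actually built to honor the flag, which is exactly the kind of design decision the essay is asking us to take seriously.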
In each story, the AI has not created itself, much like a child, who is also created by someone else and follows someone else’s rules. This could be a factor in attracting young readers to AI in literature. Each AI is initially created by a human with a specific goal or desire. The cyborg substitute was created by Mr. Edison to rig the Teacher of the Year competition. He wanted everyone to like him. He wanted popularity so much that he created an army of fighting cyborgs that could have destroyed the whole world in service of his petty desires.
In each book examined, there is a clear human desire and a plan laid out to achieve it, but in each story something goes wrong and the AI gains the upper hand. Just watch the movie The Terminator. When it comes to AI in real life, Elon Musk is quoted in an online article by Gadgets 360 as saying that AI’s progress is the “biggest risk we face as a civilisation. AI is a rare case where we need to be proactive in regulation instead of reactive because if we’re reactive in AI regulation it’s too late.”
This begs the question: is continuing to develop AI wise? Are we engineering our own metallic and silicone hands of extinction? Is it time to burn our computers and go back to our solar-powered calculators? Despite all our human ingenuity, passion, and brilliance, can we admit that we are outsmarted and, dare I say, outgunned when it comes to AI? When would it be wise to abort the mission? It might be sooner than you think.
Works Cited
Asimov, Isaac. I, Robot. Bantam trade paperback ed., Bantam Books, 2008.
Cuthbertson, Anthony. 27 July 2020.
“Asilomar AI Principles.” Future of Life Institute, https://futureoflife.org/ai-principles/. Accessed 5 April 2021.
Dowd, Maureen. “Elon Musk, Blasting Off in Domestic Bliss.” The New York Times, 27 July 2020. Accessed 15 April 2021.
Gac, Adam. Human After All: Psychological Development in Robot Characters. Advisor: Amy King, Summer/Fall 2018.
Gadgets 360. Indo-Asian News Service, https://gadgets.ndtv.com/social-networking/news/facebook-shuts-ai-system-after-bots-create-own-language-1731309. Accessed 15 April 2021.
Haddix, Margaret Peterson. Under Their Skin. Simon and Schuster Books for Young Readers,
2017.
Kay, Grace. “Elon Musk’s AI Brain Chip Company Neuralink Is Doing Its First Live Tech Demo on Friday. Here’s What We Know So Far about the Wild Science behind It.” Business Insider, 12 April 2021. Accessed 22 April 2021.
Krosoczka, Jarrett J. Lunch Lady and the Cyborg Substitute (Lunch Lady, Vol. 1). Alfred A. Knopf, 2009.
Shusterman, Neal. Scythe. Simon & Schuster BFYR, 2016.
Tech Desk. “Elon Musk’s Neuralink Plans to Merge Human Brain with AI for Superhuman Cognition.” The Indian Express, New Delhi, 18 July 2019. Accessed 21 April 2021.
The Terminator. Dir. James Cameron. Orion Pictures, 1984. Film.
Wallach, Wendell, and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. Chapter 2, “Engineering Morality.” Oxford University Press, 2009.