Someday, Robots May Save or Destroy Us All—For Now, They're Still Kinda Dumb

August 21st, 2015

In the fall of 1941, with war raging in Europe and Russia, science-fiction author Isaac Asimov wrote a short story set in a far-distant future. The year is 2015, the location, Mercury. There, a couple of interplanetary miners named Gregory Powell and Mike Donovan are having trouble with a robot nicknamed Speedy, who’s been sent onto Mercury’s furnace-like surface to perform what should be a routine task. Unfortunately, Speedy has become confused by the conflicting protocols programmed into his “positronic” brain, so much so that he’s randomly quoting Gilbert & Sullivan and running around in circles.

Speedy’s conundrum was the result of a flaw in Asimov’s Three Laws of Robotics, which the writer introduced to science-fiction fans in Speedy’s story, “Runaround,” first published in the March 1942 issue of “Astounding Science Fiction” and reprinted in 1950 as one of the nine tales in I, Robot.

In the future, Asimov imagined, all robots would be engineered with the following three mandates written into their source code:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

As it turns out, while attempting to perform the task assigned to him under Law 2, Speedy encounters a danger to his own existence (in the story, Speedy has been built with an enhanced sense of Law 3), prompting his subsequent chicken-with-its-head-cut-off behavior. To find out how Powell and Donovan wrench Speedy back to robot reality, read the book.
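For the programmers in the audience, here’s one way to picture Speedy’s deadlock in modern terms: as two competing drives, the pull of the weakly given order and the push of his beefed-up self-preservation. The short Python sketch below is strictly a toy model, invented for this article rather than drawn from Asimov or Markoff; every function name and number in it is hypothetical.

    # Toy model of Speedy's conflict, purely illustrative: the weakly
    # given order (Law 2) pulls him toward the task site, while his
    # enhanced self-preservation (Law 3) pushes him away from the
    # danger surrounding it. All values are invented.

    def law2_pull(distance):
        """Attraction toward the assigned task; grows as the goal nears."""
        return 1.0 / distance

    def law3_push(distance, strength=3.0):
        """Enhanced repulsion from the hazard; grows even faster up close."""
        return strength / distance ** 2

    # Speedy ends up circling at the radius where the two drives cancel:
    # any closer and the push wins (he retreats), any farther and the
    # pull wins (he advances), so he orbits the danger, getting nowhere.
    for distance in (5.0, 4.0, 3.0, 2.0, 1.0):
        pull, push = law2_pull(distance), law3_push(distance)
        verdict = "advance" if pull > push else "retreat" if push > pull else "circle"
        print(f"at distance {distance}: pull={pull:.2f}, push={push:.2f}, so: {verdict}")

In this toy version, strengthening the order or weakening the hazard sense would tip the balance; in the story, Asimov’s fix is cleverer, and worth reading firsthand.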

Top: A robot built by Team HRP-2 of Tokyo at the 2015 DARPA Robotics Challenge. Above: Isaac Asimov’s Three Laws of Robotics first appeared in a science-fiction magazine before being included in the 1950 book, “I, Robot.”

In hindsight, it was a gutsy move on Asimov’s part to introduce his Laws to readers by exposing their limitations, but that’s how it’s always been with humans and the robots we appear destined to someday live and work with. How close we are to that “someday,” and why it’s taking longer than Asimov and others thought it would, is the subject of Pulitzer Prize-winning “New York Times” journalist John Markoff’s Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots (Ecco, 2015).

“The robotics community is less tortured by existential issues.”

Markoff’s book arrives at an important moment in the humans-versus-robots debate. Coincidentally, 2015, the year “Runaround” is set in, is also turning out to be the year when a lot of humans are having Speedy conundrums of their own, torn by the promise of robots to help humankind and the seemingly more tangible worry that the very robots we are working so feverishly to create will get a little too smart, rise up, and wipe us all out.

“The notion that machines could become self-aware enough to decide that humanity really doesn’t suit their purposes, and that they would be able to destroy us, is the science-fiction fear,” Markoff told me recently. “I think there are lots of other much more real fears to be concerned about. I’m not one who believes that every new technology is inevitable,” he adds. “I think we can back away from technologies, and perhaps put limits on certain things as a society.”

The Atlas robot made by Boston Dynamics, one of several companies purchased by Google in the first half of December 2013.

Today, the robots being manufactured by companies like Boston Dynamics, Schaft, Industrial Perception, Bot & Dolly, Redwood Robotics, and Meka Robotics (all owned by Google) don’t require such limits—yet. Indeed, robots in 2015 are nowhere near as advanced, physically or cognitively, as the ones of Isaac Asimov’s mid-20th-century imagination. In the very near future, though, perhaps within a generation, robots that will make Speedy look no smarter than a Roomba will be everywhere, for better or worse.

A follow-up of sorts to What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry (Penguin, 2005), Machines of Loving Grace will not ease anyone’s ambivalence about the apparently inevitable rise of the machines, even if the outcome is not a robot-led extermination of our species. Indeed, Markoff’s book will probably heighten our unease thanks to the realistic portrait it paints of an industry that so many of us only know through fiction, be it on the page or screen. In Machines of Loving Grace, Markoff puts readers in the passenger seats of driverless cars; he takes them behind the scenes of the Defense Advanced Research Projects Agency’s (DARPA) Robotics Challenge; and he invites them inside an entirely automated manufacturing facility, where 30 electric shavers a minute are assembled by 128 robotic arms, each programmed to perform a specific, precise assembly task.

Google’s driverless car will probably not be ready for years, but the technology that keeps it from crashing is already being used in new conventional vehicles.

Reading these accounts, and then extrapolating what they might suggest for our future, is often chilling, which is why it’s such a good thing we have those Three Laws of Robotics to guide us, since, fictional or not, they’re probably push-pinned to the walls of robotics research labs from Palo Alto to Boston, from Tokyo to Seoul. Right?

“I’m not one who believes that every new technology is inevitable.”

Wrong, says Markoff. In fact, when I asked him about Asimov’s Three Laws of Robotics, Markoff could only recall one conversation in the course of all his years of reporting on robots when the subject even came up. That was with Andy Rubin, the guy who bought all those robotics companies for Google over the course of eight days in December of 2013. “He basically dismissed them,” Markoff recalls of Rubin’s assessment of the impact of Asimov’s Laws on the day-to-day work of contemporary roboticists. “He said you can’t think about this stuff, you have to sort of assume robots are going to be better for humanity, which wasn’t an answer, but it’s clearly a perspective. The robotics community is very passionate about the machines they’re building, but less tortured by existential issues.”

Naturally, this nonchalance has precedent, as Markoff illustrates in a pair of contrasting anecdotes in his book’s final chapter, “Masters, Slaves, or Partners?” The first anecdote concerns a meeting of biotech leaders in 1975 at the Asilomar Conference Grounds near Monterey, California. One of the scientists who called the meeting, a Nobel laureate named Paul Berg, was worried that biochemists like him might “unintentionally bring about the end of humanity by engineering a synthetic plague,” as Markoff puts it. Accordingly, the group imposed a moratorium on itself, limiting certain types of research, and designated a committee at the National Institutes of Health to decide when safeguards were in place to ensure such research could proceed safely. About a decade later, the NIH lifted the biotech community’s self-imposed moratorium.

In this scene from the 1999 film “Bicentennial Man,” based on a 1976 Asimov story, a robot named Andrew (played by Robin Williams) explains the Three Laws of Robotics to his human family.

No such sense of caution plagued a subsequent meeting—also, coincidentally, at Asilomar—in 2009, when AI researchers and roboticists came together to discuss concerns ranging from the likelihood that artificial intelligence will surpass the organic variety in 2023 (per author, inventor, futurist, and current Google Director of Engineering Ray Kurzweil) to the warning that “the new Pandora’s boxes of genetics, nanotechnology, and robotics are almost open, yet we seem hardly to have noticed” (sounded by Sun Microsystems co-founder Bill Joy in a lengthy article in the April 2000 issue of “Wired”). In contrast to the biotech crowd of 1975, the AI folks of 2009 took no action after their Asilomar meeting, complaining instead that the pace of progress in their field had been “disappointing.”

In fairness, compared to biotech, the pace of robotics and AI has been painfully slow. “AI has been a field of overpromise since 1956,” says Markoff, referring to the year of the Dartmouth Summer Research Project on Artificial Intelligence, which was co-organized by Marvin Minsky and John McCarthy—the two men co-founded MIT’s Artificial Intelligence Lab in 1958, and McCarthy would go on to found the Stanford Artificial Intelligence Lab, or SAIL, in 1962.

The late John McCarthy, seen here at the Stanford Artificial Intelligence Lab in 1974, was one of the most influential scientists in the field.

“Probably the earliest published example of overstatement was a UPI story in the ‘New York Times’ in 1958,” Markoff continues. As he describes the news item in Machines of Loving Grace, “The article was an account of a demonstration given by Cornell psychologist Frank Rosenblatt, describing the ‘embryo’ of an electronic computer that the Navy expected would one day ‘walk, talk, see, write, reproduce itself and be conscious of its existence.’” The Navy, Markoff writes, figured its “thinking machine” would be built within a year, at a cost of $100,000.

“AI has been a field of overpromise since 1956.”

By the time McCarthy got to SAIL, this sense of AI optimism, while somewhat more measured, was no less strong. “McCarthy believed that thinking machines or working AI would take a decade,” Markoff told me. Obviously, that did not come to pass. “In the last half decade, AI has made tremendous progress—self-driving cars, speech recognition, vision—but my sense is that there’s still this tendency to overpromise.” The truth, Markoff says, is probably closer to what people saw at this summer’s DARPA Robotics Challenge. “A lot of robots fell down,” he says. “Only a few of them could do anything, and none very quickly. So there’s still a long way to go. It doesn’t seem like we’re on that super-accelerated path that some people have predicted.”

Tell that to physicist Stephen Hawking, entrepreneur Elon Musk, and Apple co-founder Steve Wozniak, who are among the almost 20,000 signatories to a letter published by the Future of Life Institute calling for a ban on “autonomous weapons,” meaning weapons that have no human operators at all, not even remote ones, as drones do.

Doug Engelbart, seen here in a still from a video of “The Mother of All Demos,” 1968, believed computers should be used to augment the human intellect rather than replace it.

“If any major military power pushes ahead with AI weapon development,” the letter warns, “a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity.”

Naturally, some will argue that autonomous weapons could also be used to take out barbaric enterprises such as the Islamic State, but for the signatories of the autonomous-weapons letter, this is not why they got into AI and robotics. “There are many ways in which AI can make battlefields safer for humans, especially civilians,” the letter states, “without creating new tools for killing people.”

Battle droids from “The Phantom Menace.” Organizations such as the Future of Life Institute are pushing to ban the development of non-fictional versions of these types of autonomous weapons.

If most people can agree, though, that creating armies of battle droids, à la “Star Wars,” is a terrible idea, Markoff says that the concern about AI and robotics actually goes a good deal deeper. “The standard concern expressed by Hawking and Musk,” he says, “is that this technology will unleash the demon, becoming, in some way, an existential threat to humanity.”

In fact, that was exactly the word used by Musk in a speech last year at MIT. “With artificial intelligence,” Musk said, “we are summoning the demon. I’m increasingly inclined to think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish.”

The trouble is, one culture’s foolish is another’s brilliant. Take Xiaoice, the text-based chatbot developed by Microsoft for use in China, which Markoff wrote about recently in the “New York Times.” “It’s a very popular program,” he told me, “with 20 million registered Chinese users. People have long conversations on their cell phones with the program. A typical conversation might have 17 interactions, and 25 percent of users have said ‘I love you’ to the program. Originally, I was freaked out about this, but when I talked to a Chinese researcher, she said you don’t get it. In China, we’re always in contact with other people. Her perspective was that this software is a way to get private space in a society where there’s very little privacy.”

In the 2004 film version of “I, Robot,” which is based on an Isaac Asimov story, a rogue robot named Sonny (left) is caught violating the Three Laws of Robotics.

As a text app on a smartphone, Xiaoice is not designed to replace human contact, although some of the people Markoff spoke to for his “Times” article were concerned about exactly that (“We’re forgetting what it means to be intimate,” sniffed one MIT social scientist). Rather, Xiaoice is a tool designed to augment the human experience. We experience intelligence augmentation, or IA, every day in the form of personal computers, smartphones, and the high-tech sensors in everything from Musk’s Teslas to Markoff’s new Volvo. By any measure, here in the real 2015, IA appears to have won.

“Well, you’re right,” Markoff says, “But if you were around in the 1960s and ’70s, that was not the perception. Serious computer science was thought to be in AI, and the guys who were pushing augmentation, like Doug Engelbart at the Stanford Research Institute, were considered curious people who were playing around with word processors.”

To his great dismay, Engelbart, who died in 2013, became best known for inventing the mouse, which he considered little more than a rudimentary tool to make computers accessible to people other than computer scientists. When Engelbart looked outside his lab’s window, he saw a world in serious trouble, whose problems were increasing at a very steep rate. The ability for humans to respond to these problems was also increasing, but at a rate that was much closer to flat. The human species, Engelbart argued, needed technology to augment its intelligence so that the curves were at least roughly equal. Thus, for Engelbart, the reason to pursue the intelligence-augmentation potential of computers was literally to help people change the world—he was never interested in building machines that would do it for us.

For most of us, robotics in 2015 means a device like a Roomba.

Engelbart’s vision of personal computers connected by vast networks, both of which we take entirely for granted today, eventually came to pass, even if the application of his vision ended up being almost entirely focused on commerce rather than altruism. “Doug’s work had a much bigger impact on the world than AI did,” Markoff says. “But here we are, 40 or 50 years later, and AI is having a renaissance.”

One of the reasons for this renaissance also has its roots in the early days of computer science, specifically in the year 1965, when Fairchild Semiconductor’s director of research and development, Gordon Moore, predicted that in the following 10 years, the number of components engineers would be able to squeeze into an integrated circuit would double annually, a prediction that became known as Moore’s Law. The resulting increase in processing power, accompanied by sharp drops in cost, helped IA explode, but it’s only been recently that Moore’s Law has had a similar impact on AI—with its mission to essentially replicate the function of the brain by creating artificial neural networks, AI has been a far tougher nut to crack.
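The arithmetic of that prediction is worth a moment. Here’s a quick back-of-the-envelope sketch in Python; the starting component count below is invented purely for illustration, not a historical figure.

    # Moore's 1965 prediction, roughly: the number of components on an
    # integrated circuit doubles every year for the next ten years.
    # The starting count is hypothetical, chosen only for illustration.
    components = 64  # illustrative 1965 figure, not historical data
    for year in range(1965, 1976):
        print(year, components)
        components *= 2
    # Ten annual doublings multiply the count by 2 ** 10 = 1024, the
    # exponential climb that made processing power cheap and abundant.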

The DRC-HUBO robot, created by South Korea’s Team Kaist, won the 2015 DARPA Robotics Challenge.

Just this year, Gordon Moore made another prediction, and if he’s right again, the task of building true AI will eventually hit a technological wall. “We won’t have the rate of progress that we’ve had over the last few decades,” Moore said recently in an interview with the Institute of Electrical and Electronics Engineers. “I think that’s inevitable with any technology; it eventually saturates out. I guess I see Moore’s Law dying here in the next decade or so, but that’s not surprising.”

“Markoff’s book arrives at an important moment in the humans-versus-robots debate.”

Even so, another decade of growth at the velocity of Moore’s Law is, well, just a whole lot of growth, which is why people like Ray Kurzweil believe we will eventually get to what’s called the singularity (Kurzweil puts that date at 2045), a somewhat sinister term coined in 1958 by mathematician John von Neumann. The singularity, which Markoff described to me as “the unofficial religion of Silicon Valley,” is the moment in human history when robots will be endowed with sufficient artificial intelligence to become self-aware, improving themselves exponentially and leaving humankind in the dust, just like the Scarlett Johansson-voiced operating system does to Joaquin Phoenix at the end of the film “Her.” “That was their way of encapsulating the singularity issue,” Markoff says of the filmmakers. “The machine intelligence so outruns ours that we bore them.”

In the meantime, Markoff believes there are plenty of pressing issues regarding AI and robots that need our attention now. “I’m quite pleased to see that [MIT physics professor] Max Tegmark, who’s part of the Hawking crowd and co-founded the Future of Life Institute this summer, has begun to focus on the issue of autonomous weapons,” he says. “These things are existential threats, so we as a society can prohibit them.”

Simon, one of several cute robots designed by Meka Robotics, also owned by Google.

Indeed, if there’s a bottom-line message in Markoff’s book about robots, this may be it—that humans still have a say in what our robot-filled future will look like. “All these machines are being designed by humans,” Markoff says. “Larry Lessig, who’s running for president, says that code is law, but I believe that code is culture, too. These machines are expressions of human culture, so they are embedded with our values. Human designers can still make a difference.”

Which brings us back to Isaac Asimov and his Three Laws of Robotics. Later in his career, in 1985, in a book called Robots and Empire, Asimov added a so-called Zeroth Law to his original three, which was given the number zero so that it would be clear it was the most important of all. It read:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

With all due respect to the memory of the great science-fiction author, I’d like to suggest a Zero-Zeroth Law for all those AI developers out there:

00. Read your Asimov.

4 comments so far

  1. Robert Claypool Says:

    As I recall, Speedy’s problem was not due to a flaw with the three laws, but that in his implementation, preserving himself could override obeying an order. However, they had to jump through some hoops to communicate to him that the first law was in play.

  2. Rob Says:

    Interesting and informative as always. FWIW … I’m not an AI developer but already Zero-Zeroth compliant!

  3. Buster Mauzey Says:

    It is a mistake to think that any artificial intelligence, particularly one that arose accidentally, would be anything like us and would share our values, which are the product of evolution in a social setting. This doomsday scenario doesn’t necessarily have to mean a genocidal robot attack, like those launched by Skynet in the Terminator movies.

  4. dCrisser Says:

    Any technology used for good will be beneficial; used for bad, it will destroy. It is a matter of what humans aim the technology at, especially the robots.

