Researchers Try To Preemptively Imagine The Worst Things AI Could Do
Kelsey D. Atherton
at 11:19 AM May 25 2016
For a start, it could clash with the furniture
Duncan Hull, via Flickr, CC BY 2.0

Fiction is full of evil robots, from the Cylons of “Battlestar Galactica” to the vengeful replicants of Blade Runner to the iconic, humanity-destroying Terminators. Yet these are all robots built with good intentions, whose horrific violence is an unintended consequence of their design rather than its explicit point.

What if, instead of arising from human folly, an artificial intelligence caused harm because a human explicitly designed it for malicious purposes? A new study, funded in part by Elon Musk, looks at the possibilities of deliberately evil machines.

Titled “Unethical Research: How to Create a Malevolent Artificial Intelligence,” by Roman V. Yampolskiy of the University of Louisville and futurist Federico Pistono, the short paper looks at just what harm someone could do with an actively evil program. Why? For a similar reason that DARPA asked people to invent new weapons in their backyards: better to find the threat now, through peaceful research, than have to adapt to it later when it's used in an aggressive attack.

What did Pistono and Yampolskiy find? Their list of groups that could make a vicious A.I. starts familiar: militaries (developing cyber-weapons and robot soldiers to achieve dominance); governments (attempting to use A.I. to establish hegemony, control people, or take down other governments); and corporations (trying to achieve monopoly, destroying the competition through illegal means). It continues on to include black hat hackers, villains, doomsday cults, and criminals, among others. And the A.I. could come from many places. According to the authors, code written without oversight, or closed-source programming designed to be seen by as few eyes as possible, are both ways to make a harmful artificial intelligence without warning the world first.

Okay, but what does the malicious A.I. actually do that causes problems? Undercut human labor, say Pistono and Yampolskiy:

By exploiting the natural tendency of companies to want to increase their productivity and profits, a [malicious AI] could lend its services to corporations, which would be more competitive than most, if not all human workers, making it desirable for a company to employ the quasi-zero marginal cost A.I., in place of the previously employed expensive and inefficient human labor.

Automation replacing jobs is already a well-anticipated fear. So well anticipated, in fact, that it's the central plot of R.U.R., the Czech play that introduced the word “robot” to the English language in 1921.

What other fears do Pistono and Yampolskiy see in A.I.? If it's not seizing the means of production, it's taking over government through a propaganda coup, taking over legislative bodies through careful funding, or wiping out humanity through either a newly engineered pathogen or existing human nuclear stockpiles. In other words, the worst an actively malevolent A.I. could do to humanity is nothing worse than what humans have already done or threatened to do to themselves hundreds of times before.

That's not exactly a cheery picture of the future, but it should at least provide a cold, metallic sliver of comfort: we have nothing to fear from machines that we shouldn't already fear from ourselves.

[via New Scientist]
