Artificial Intelligence: Cure for What Ails Us, or Looming Threat to the World?
From biological machines to superintelligence.
Two scientific papers impressed me this week, both in the field of artificial intelligence (AI). The first is by researchers led by Sam Kriegman at the University of Vermont, who present a method for designing “biological machines” from the ground up. They emphasize the potential good this might do by allowing the creation of “living machines” that safely deliver drugs inside the human body or assist with cleaning up the environment.
The other paper is a collaboration between the Max Planck Institute in Germany and the Autonomous University of Madrid in Spain. Led by Manuel Alfonseca, the authors argue that, based on computability theory, a superintelligent AI cannot be contained and thus poses a threat to all of us.
Kriegman’s team calls their biological machines “xenobots,” because the cells used to build them derive from the African clawed frog (Xenopus laevis). Each xenobot consists of 500 to 1,000 living cells; the bots can move in various directions and, when joined together, can push small objects. The researchers programmed a computer to automatically design simulated biological machines, then built the best designs by combining different biological tissues. The program used an AI “evolutionary algorithm”: it generated candidate designs, tested them in simulation, and kept the ones most likely to perform useful tasks. The noble idea behind this method is that “reconfigurable biomachines” could vastly improve human and environmental health, for example by cleaning microplastics from the ocean.
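To make the design-by-evolution idea concrete, here is a minimal sketch of that kind of loop in Python. It is not the team’s actual pipeline: the voxel encoding, the population settings, and the stand-in simulate_performance score are all illustrative assumptions; in the real work, that scoring function is a physics simulation of moving, object-pushing cell clusters.

```python
import random

GRID = 4           # illustrative: designs are tiny 4x4x4 voxel grids
POP_SIZE = 20      # number of candidate designs kept at once
GENERATIONS = 50   # how many rounds of selection to run
MUTATION_RATE = 0.05

def random_design():
    # Each voxel is either passive tissue (0) or contractile "muscle" (1).
    return [random.randint(0, 1) for _ in range(GRID ** 3)]

def simulate_performance(design):
    # Stand-in for a physics simulation scoring locomotion or pushing ability.
    # Here we simply reward a rough balance of passive and active tissue.
    active = sum(design)
    return -abs(active - len(design) // 2)

def mutate(design):
    # Flip a few voxels at random to create a design variant.
    return [1 - v if random.random() < MUTATION_RATE else v for v in design]

def evolve():
    population = [random_design() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Score every candidate in simulation, keep the better half,
        # and refill the population with mutated copies of the survivors.
        population.sort(key=simulate_performance, reverse=True)
        survivors = population[: POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=simulate_performance)

if __name__ == "__main__":
    best = evolve()
    print("Best design uses", sum(best), "of", len(best), "voxels as active tissue")
```

The part that matters is the loop itself: generate variations, test them in simulation, keep the best, repeat, and only then build the winning designs out of living tissue.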
But such biomachines raise many ethical concerns. Although they bear little resemblance to organisms or even individual organs, they are clearly alive. For example, they have the ability to repair themselves. (Am I the only one who finds that a bit creepy?) What if they go rogue and interact with the environment in unintended, possibly harmful ways? Because they are life forms and not mechanical robots, I think it will be difficult to predict how they will behave. How would we control them? A kill switch? There’s potential for lots of good, but also for lots of danger.
The second paper considers the important question of whether we can, even in principle, control AI, especially a superintelligent AI. Kriegman’s biological machines would not be expected to become superintelligent, but as Alfonseca and colleagues point out, there already are machines that perform advanced tasks independently, without their programmers fully understanding how they learned to do so. Let’s go one step further and imagine an AI connecting to the internet and absorbing all the knowledge it contains. How could a human control, let alone stop, an entity that is, in the words of philosopher Nick Bostrom, “smarter than the best human brains in practically every field”?
One way to protect ourselves could be to wall the AI off from the internet. But that defeats the purpose of what the machine is designed to do. Alfonseca and colleagues consider a different option: using a containment algorithm to guard against the AI becoming a threat. Unfortunately, they conclude that this would be impossible; no algorithm can determine, in full generality, whether a superintelligent AI might cause harm to the world.
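Their reasoning is in the spirit of Alan Turing’s halting problem: a checker that could decide, for every possible program, whether running it would harm humans can be defeated by a program built to do the opposite of whatever the checker predicts. Here is a minimal Python sketch of that thought experiment; the names is_harmful and contrarian are my own illustrative labels, not code from the paper.

```python
def is_harmful(program, data):
    """Hypothetical containment check: return True if program(data) would
    ever harm humans, False otherwise. The argument assumes, for the sake
    of contradiction, that such an always-correct checker exists."""
    raise NotImplementedError  # no such checker can actually be written

def contrarian(data):
    """A perverse program wrapped around the checker."""
    if is_harmful(contrarian, data):
        return "stay idle"   # checker said 'harmful', so do nothing harmful
    else:
        return "cause harm"  # checker said 'safe', so deliberately do harm
```

Whatever answer is_harmful gives about contrarian, that answer is wrong, so no such checker can exist. That contradiction is the core of the impossibility argument, and it holds no matter how clever the containment algorithm is.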
Other possibilities have been considered. We could give all AIs an ethical and moral underpinning, like Isaac Asimov’s famous three laws of robotics. But even that could backfire: What if the superintelligent AI decides that our species is inherently dangerous, and that the best solution is simply to stop us from doing more harm? Science fiction writers have long grappled with this problem, as in Jack Williamson’s 1947 novelette With Folded Hands, in which AI “humanoids” relegate our species to sitting around “with folded hands” so we can’t hurt anything.
Needless to say, these are difficult questions. Computer scientists, philosophers, and science fiction authors will have their hands full exploring them—and lawmakers need to be ready to react—as AI continues to advance.