Artificial Intelligence: Promise or Peril?

Can AI Be Controlled?

The growing role of AI has many concerned. As Douglas Heaven asked in a New Scientist article: “The danger is that we give up asking questions. Could we get so used to choices being made for us that we stop noticing? The stakes are higher now that intelligent machines are beginning to make inscrutable decisions about mortgage applications, medical diagnoses and even whether you are guilty of a crime” (“Not like us: Artificial minds we can’t understand,” August 2013).

Researchers are increasingly asking these questions, recognizing that modern artificial intelligence, unlike human intelligence, relies on processing and comparing data in volumes no human being could ever absorb. So, if an AI were to misidentify an innocent person as the perpetrator of a crime, would human beings even have the capacity to find the flaw in its artificial “reasoning”? In a world where artificial intelligence is being used to make more and more complex decisions, this is an important question. Indeed, it is becoming a matter of life and death—potentially on a worldwide scale.

For instance, artificial intelligence is increasingly being considered a necessary ingredient of modern warfare. Many naval warships already carry the Phalanx CIWS (Close-In Weapon System), which both identifies and engages threatening targets—such as enemy drones or missiles—automatically, based on artificial intelligence programming. Similarly, unmanned aircraft are being given autonomous decision-making abilities, with ever less involvement from a human controller.

Some believe self-controlled, artificially intelligent robotic weapons systems—programmed to kill enemy combatants automatically without first seeking permission from a human being—will actually save lives, removing human soldiers from the frontlines of battle and operating with greater-than-human accuracy. As one former member of the Israel Defense Forces has argued, “If the goal of international humanitarian law is to reduce noncombatant suffering in wartime, then using sharpshooting robots would be more than appropriate, it would be a moral imperative… Battlefield robots might yet be a great advance for international humanitarian law” (Erik Schechter, Wall Street Journal, July 10, 2014).

And, as The Economist suggested, “Given the propensity for human error in such circumstances, mechanised grunts might make such calls better than flesh-and-blood officers. The day of the people’s—or, rather, the robots’—army, then, may soon be at hand” (“No command, and control,” November 25, 2010).
