SETTING THE ETHICAL RULES OF A.I.


The movies have had plenty of memorable artificially intelligent beings that wanted to kill us.

There was, of course, red-eyed HAL 9000 in “2001: A Space Odyssey,” the 1968 movie. Fourteen years later, along came Roy Batty, the philosophical replicant searching for a way to live in “Blade Runner.” Two years after that, we were introduced to the killer machines spawned by Skynet in “The Terminator” series. And, of course, we should not forget the malignant machines and programs of “The Matrix,” a reminder that if we one day scorch the skies to deprive robots of solar power, we could all be turned into batteries.

Entertainment value aside, a Stanford study on the future of artificial intelligence will ignore all of science fiction’s warnings about our A.I. creations. Instead, it will focus on more tangible things, like robots stealing people’s jobs or driving workers to and from the office.

It makes sense. While self-driving cars are still figuring out how to deal with four-way stops at intersections, artificially intelligent robots turning on their creators still seems a ways out.

The Stanford initiative coincides with an effort by five of the world’s largest technology companies to create an ethical framework around the creation of artificial intelligence. The specifics are still up in the air. But in general terms, the people putting the consortium together want to ensure that A.I. will be used to benefit humans, not harm them.

A.I. is already finding its way into the real world, from the voice recognition of Amazon’s Echo device to the self-driving car projects of companies like Google, to the military, where weapons are beginning to think on their own.

But those weapons — at least for now — seem to come with an important caveat: Killing decisions must be left to humans.

