'Killer robots': Top scientists boycott South Korean lab over AI fears

A Sydney-based university professor has led a boycott of a South Korean university over fears it had opened an "AI weapons lab".

The world's first operational police robot stands at attention as they prepare a military cannon to fire in Downtown Dubai on May 31, 2017. Source: Getty

Some of the world’s leading artificial intelligence experts have boycotted South Korea’s top university after it opened what they called an "AI weapons lab".

More than 50 academics, including two from Australia, signed a letter calling for a boycott of South Korea's public research university KAIST after it opened the lab with defence manufacturer Hanwha Systems.

"It has been reported that the goals of this centre are to 'develop artificial intelligence (AI) technologies to be applied to military weapons, joining the global competition to develop autonomous arms'," an open letter to KAIST President Professor Sung-Chul Shin read.

The researchers, based in 30 countries, said they would refrain from visiting KAIST, hosting visitors from the university, or cooperating with its research programs until it pledged to refrain from developing AI weapons without "meaningful human control".

KAIST opened the Research Centre for the Convergence of National Defence and Artificial Intelligence in February with Hanwha Systems, one of two South Korean makers of cluster munitions.

The company responded to the boycott, saying it had "no intention to engage in development of lethal autonomous weapons systems and killer robots."

The university said the new research centre would focus on using AI for command and control systems, navigation for large unmanned undersea vehicles, smart aircraft training, and tracking and recognition of objects.

A robot distributes promotional literature calling for a ban on fully autonomous weapons in Parliament Square on April 23, 2013 in London, England. Source: Getty

Professor Sung-Chul Shin said the university was "significantly aware" of ethical concerns regarding artificial intelligence, adding: "I reaffirm once again that KAIST will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control."

The boycott was organised by a University of New South Wales professor, Toby Walsh.

The letter, also signed by top experts on deep-learning and robotics, was released ahead of next Monday’s meeting in Geneva by 123 UN member countries on the challenges posed by lethal autonomous weapons, which critics describe as "killer robots".

"If developed, autonomous weapons will be the third revolution in warfare. They will permit war to be fought faster and at a scale greater than ever before. They have the potential to be weapons of terror. Despots and terrorists could use them against innocent populations, removing any ethical restraints," the open letter read.

"This Pandora's box will be hard to close if it is opened. As with other technologies banned in the past like blinding lasers, we can simply decide not to develop them. We urge KAIST to follow this path, and work instead on uses of AI to improve and not harm human lives."

Professor Walsh said he felt the boycott had succeeded as KAIST had quickly responded and made two significant concessions.

"KAIST has made two significant concessions: not to develop autonomous weapons and to ensure meaningful human control," he said, adding that the university’s response would add weight to United Nations discussions taking place next week on the overall issue.

A samurai robot, produced by Japan's robot venture Crafthouse, lowers a spear. Source: Getty

Professor Walsh said he needed to speak with all those who signed the boycott letter before he called it off.

He said it remained unclear how one could establish meaningful human control of an unmanned submarine - one of the launch projects - when it was under the sea and unable to communicate.

AI is the field in computer science that aims to create machines able to perceive the environment and make decisions.

Professor Walsh told Reuters there were many potential good uses of robotics and artificial intelligence in the military, including removing humans from dangerous tasks such as clearing minefields.

"But we should not hand over the decision of who lives or dies to a machine. This crosses a clear moral line," he said.

"We should not let robots decide who lives and who dies."

- Additional reporting by Natasha Christian


Published 5 April 2018 6:58pm
Updated 5 April 2018 7:01pm
Source: Reuters

