Don’t Be Evil: Google Employees, Academics Call For Boycott Of 'Killer Robots'

An internal letter signed by over 3,000 Google employees asked Google CEO Sundar Pichai to end all projects that may be building warfare technology. At the same time, a group of over 50 academics and top AI experts called for a boycott of the Korea Advanced Institute of Science and Technology (KAIST), which plans to develop autonomous weapons, also called “killer robots,” by the end of the year.

The two boycotts come a few days before a United Nations (UN) meeting in Geneva on how to deal with these so-called killer robots.

Google’s Secret Project With The Pentagon

Reports from last month revealed that Google had secretly partnered with the Pentagon (where Eric Schmidt, Google’s former CEO and Alphabet Chairman, has been working for the past two years) to help it develop artificial intelligence for drone operations.

Google worked together with the Pentagon on Project Maven, which was also known as the Algorithmic Warfare Cross-Functional Team (AWCFT). The Pentagon established this project in April 2017 to advance the military’s machine learning capabilities, and more specifically to more easily “identify objects” (which could also include humans) from drone footage.

Google’s involvement in Project Maven was revealed only last month, as details about it became available on an internal mailing list that was then leaked to the press.

Employees Remind Google: “Don’t Be Evil”

Some Google employees seem to have been outraged that the company would offer its resources to further military and surveillance objectives. Others started questioning the ethics around how the company uses its AI technology.

Alphabet, the parent company of Google, had previously set up an ethics board for AI, but the company rejected calls to disclose the board’s membership to the public. Some argued that, with no public accountability, the board would let the company do whatever it wants, or whatever is incentivized by profits, rather than the right thing.

Over 3,000 employees have now told Pichai the following:

We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.

Diane Greene, a member of Google’s board of directors, recently told the outraged employees that Google’s technology “will not operate or fly drones” and “will not be used to launch weapons.” In the letter, however, the employees remain skeptical of that assurance: even if Google’s leaders can maintain that commitment for a narrow set of applications, once the technology is delivered to the military, it can be repurposed however the government sees fit.

The employees also warned that Google’s decision to become a military contractor could irreparably harm users’ and programmers’ trust in the company:

This plan will irreparably damage Google’s brand and its ability to compete for talent. Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public’s trust. By entering into this contract, Google will join the ranks of companies like Palantir, Raytheon, and General Dynamics. The argument that other firms, like Microsoft and Amazon, are also participating doesn’t make this any less risky for Google. Google’s unique history, its motto Don’t Be Evil, and its direct reach into the lives of billions of users set it apart. We cannot outsource the moral responsibility of our technologies to third parties. Google’s stated values make this clear: Every one of our users is trusting us. Never jeopardize that. Ever. This contract puts Google’s reputation at risk and stands in direct opposition to our core values. Building this technology to assist the US Government in military surveillance – and potentially lethal outcomes – is not acceptable.

Academics To Boycott KAIST’s “Killer Robots”

More than 50 top AI experts and academics from the University of Cambridge, Cornell University, the University of California, Berkeley, and 52 other institutions in 30 countries will boycott all contact with KAIST after the institute opened an autonomous weapons research center with the Korean arms company Hanwha Systems. The boycott was announced ahead of next Monday’s 123-member UN meeting in Geneva, where members will discuss the increasingly worrisome prospect of killer robots.

In a statement to Britain’s Times Higher Education magazine website, KAIST said:

The centre aims to develop algorithms on efficient logistical systems, unmanned navigation [and an] aviation training system. KAIST will be responsible for educating the researchers and providing consultation.

Professor Toby Walsh, of the University of New South Wales in Sydney, organized the boycott and told the magazine that KAIST’s partner, Hanwha Systems, has been blacklisted for producing “cluster munitions,” which have been banned under a UN convention. However, South Korea is not a signatory to that convention.

He added:

There’s no program that can’t be hacked. Whatever safeguards we put in will be removed by bad actors, North Korea being one of them.

Last year, more than 100 specialists, including Walsh and Elon Musk, the co-founder of Tesla and the OpenAI nonprofit, demanded that autonomous weapons be outright banned in an open letter:

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.

Another international coalition seeking to ban autonomous weapons, the “Campaign To Stop Killer Robots,” also showed a video at the UN Convention on Certain Conventional Weapons demonstrating how autonomous weapons could be used by malicious parties.

Lucian Armasu
Lucian Armasu is a Contributing Writer for Tom's Hardware US. He covers software news and the issues surrounding privacy and security.
  • 10tacle
    So they are upset about robotics and warfare. How quaint. I find this rather ironic coming from a group of employees who openly support censorship and algorithm biases on the Google search engine regarding politics and spying on users. REAL ironic indeed considering that's how Hitler rose to power.
  • stefano.n.giaimo
    Thank goodness we're onto Killer Robots, and have already solved the issue of forced political/ideological indoctrination going on there at Google... Oh wait, it was never brought to light.

    (F) James Damore
  • USAFRet
    Google (Alphabet)...the company that bought Boston Dynamics (Big Dog, Cheetah, Little Dog, Atlas, etc, etc) back in 2013.
    And only sold them last year.

    Go on...tell me that Big Dog code does not still exist in Google somewhere.
    Go on...blow some more smoke up my butt.
  • popatim
    I believe it's past the point where this will do any good. They will have no problem filling the empty seats left by any who want to leave and potentially be blacklisted. Once AI can code well, even those will be relegated to the farming camps, or such.
  • JonDol
    Almost like in the Robocop movie. Those academics seem just a bit late. AFAIK, Samsung has already deployed those kinds of robots to watch the border with North Korea...
  • jimmysmitty
    10tacle said:
    So they are upset about robotics and warfare. How quaint. I find this rather ironic coming from a group of employees who openly support censorship and algorithm biases on the Google search engine regarding politics and spying on users. REAL ironic indeed considering that's how Hitler rose to power.

    I am all for militaries pushing into robotics for warfare instead of having to sacrifice human lives.
  • nobspls
    The unfortunate truth here is that even if the "ethical" nations ban killer robots/AI etc., it is not going to stop Russia or China. If your killer robots do not cut it against theirs, you are screwed, unless you want to invoke the nuke all the time, which is obviously not a viable strategy.

    The day the first machine gun/Gatling gun went live, the automated killing of humans came into existence. Industrialized, mechanized warfare is plenty horrible, as has been witnessed many times. AI-powered weapons are just the next step in this progression. The problem is that it is already a century too late to stop.
  • Evolution2001
    10tacle said:
    So they are upset about robotics and warfare. How quaint. I find this rather ironic coming from a group of employees who openly support censorship and algorithm biases on the Google search engine regarding politics and spying on users. REAL ironic indeed considering that's how Hitler rose to power.

    Wow... Godwin's Law applied on the very first post!
    A sarcastic tip of my hat to you!
  • 10tacle
    Evolution2001 said:
    Wow... Godwin's Law applied on the very first post! A sarcastic tip of my hat to you!

    But the comparison doesn't make it less true no matter how trite and overused. Okay let's change things up. How about Google fan Lenin useful idiots? (Same with Facebook).
  • nimbao6
    I believe shaped EMPs outside your home and shielded electronics inside will protect you from weaponized drones... The shielding will be too heavy to put on small drones. As for walking/rolling killer AIs... well, a good police force will stop those.