
Tech and Science leaders confront strong AI with bold PR

By Dan Raile, written on July 30, 2015

From The Odd Futurism Desk

Lord knows that big ethical stances and declarations of principle are in short supply in Silicon Valley, 2015.

Earlier this week in Buenos Aires such a position was taken, splashily, in the form of an open letter – Autonomous Weapons: an Open Letter from AI & Robotics Researchers – urging containment of autonomous weaponry, authored and signed by members of the artificial intelligence research and commercial communities.

The letter calls on “major military powers” to install a preemptive “ban on offensive autonomous weapons beyond meaningful human control.” The sixteen-hundred researchers and seventy-five hundred other signatories to the letter, hosted on the website of the Elon Musk-funded Future of Life Institute, include many popular scientific and industry figures – Musk, Stephen Hawking, Steve Wozniak, Noam Chomsky – as well as “The Big Three” of AI research: Yann LeCun, Yoshua Bengio and Geoff Hinton. LeCun and Hinton only fairly recently joined Facebook and Google, respectively, while Bengio has remained in academia at the Université de Montréal.

The letter presents the argument against autonomous weapons in broad strokes, clearly targeting a mainstream audience. Sure enough, publications from Fast Company to the Daily Mail responded with headlines like “ELON MUSK, STEPHEN HAWKING WARN OF POTENTIALLY DEVASTATING ‘AI ARMS RACE’” and “Don’t let AI take our jobs (or kill us): Stephen Hawking and Elon Musk sign open letter warning of a robot uprising.”

But a close read of the letter shows that the authors manage to both suggest an overstated case for the imminence of strong AI in the quasi-religious tone of Singularitarianism and simultaneously undersell the reality of autonomous weapons currently deployed in the world. It’s at best symbolic, and at worst a bold and glossy bit of publicity-mongering to draw sympathy for the commercial AI applications championed, led and funded by its authors and signatories.

And, while the headlines focused on Musk and Hawking, what’s just as interesting is who didn’t sign, including:

  • D. Scott Phoenix and Dileep George, founders of Vicarious, a mysterious AI startup that has raised $72 million from the likes of Musk, Mark Zuckerberg, Dustin Moskovitz, Aaron Levie and investors ranging from Vinod Khosla, Sam Altman and Peter Thiel to Ashton Kutcher. Vicarious says it intends to create the next generation of AI algorithms “[w]ith a little help from neuroscience and biology.”
  • The CEOs of large companies with AI departments and products, such as Zuckerberg, Larry Page, and Bill Gates (who has quite publicly raised alarms about advances in AI; a few prominent Microsoft AI researchers, including Eric Horvitz, did sign).
  • VCs invested in AI, including Altman, Thiel, Khosla, Steve Jurvetson, anyone from the Omidyar Network and Ron Conway, who loves symbolic, publicity-mongering letters.
  • Andrew Ng, former Google Brain project leader, now at Baidu
  • Ray Kurzweil
  • Anyone at all from DARPA

Human history is packed with fears about potent, intelligent technology, stretching back through prehistory and the legend of the Golem. Even hysteria around modern AI is not particularly modern. Consider this paragraph from the Musk letter:

Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

And now this from The New York Times on July 13, 1958:

The Navy last week demonstrated the embryo of an electronic computer named the Perceptron which, when completed in about a year, is expected to be the first non-living mechanism able to “perceive, recognize and identify its surroundings without human training or control.” Navy officers demonstrating a preliminary form of the device in Washington said they hesitated to call it a machine because it is so much like a “human being without life.”
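For context, the machine behind that breathless 1958 coverage, Frank Rosenblatt’s Perceptron, boils down to a few lines of arithmetic. Here is a minimal sketch of the perceptron learning rule in Python (a simplified, assumed rendering; the function and variable names are ours, not the Navy’s):

    # Minimal sketch of the perceptron learning rule (simplified,
    # assumed form of Rosenblatt's algorithm; names are illustrative).
    def train_perceptron(samples, labels, epochs=10, lr=0.1):
        """samples: list of feature vectors; labels: +1 or -1 for each."""
        w = [0.0] * len(samples[0])  # one weight per input feature
        b = 0.0                      # bias term
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                activation = sum(wi * xi for wi, xi in zip(w, x)) + b
                prediction = 1 if activation >= 0 else -1
                if prediction != y:  # misclassified: nudge boundary toward y
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                    b += lr * y
        return w, b

    # It can learn a linearly separable rule such as logical OR...
    w, b = train_perceptron([[0, 0], [0, 1], [1, 0], [1, 1]], [-1, 1, 1, 1])
    # ...but, as Minsky and Papert showed in 1969, never XOR – a limitation
    # that helped usher in the first "AI winter."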

The cycle of hype and subsequent disappointment around AI is so regular that the term “AI winter” was coined in 1984 to describe the recurring phenomenon.

So have we finally arrived at the cusp of the Singularity and the realization of strong AI? There are some strong arguments against this. Anyway, that question remains juuuust outside the scope of the letter, which focuses instead on the use of “Lethal Autonomous Weapons Systems.”

The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.

The wording is certainly deliberate and leaves the door slightly ajar as to what should be construed as autonomy, choosing only to point fingers at “offensive” autonomous weapons operating outside of “meaningful” human oversight. The letter takes care to exclude cruise missiles and piloted drones from the ranks of technologies that ought to be forbidden, though these increasingly make certain decisions independently of human operators.  

This recalls a similar qualification from a 2012 Pentagon directive on the difference between autonomous and semiautonomous weaponry. According to that document, appropriate weaponry is “designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force” [emphasis added].

Some argue that a new-generation cruise missile being developed by Lockheed Martin with DARPA funding – the Long-Range Anti-Ship Missile, designed for the contingencies of war in the South China Sea – has already crossed that line. But the exact level of human involvement in that missile’s judgment is classified information.

Elsewhere in the world – in Israel, Norway and Great Britain – weapons that autonomously select and engage their targets are already deployed (and sold).

Of course, one could argue that even landmines are (extremely dumb) autonomous weapons – but since they are likely construed as “defensive” wouldn’t fall under the FLI ban anyway. (Fortunately, 162 states have signed the Ottawa Treaty banning them – just not Russia, China or the US.)

Suffice it to say, an arms race is already well underway in AI. It may be at least as old as the Cold War Perceptron. In an email to Pando, Future of Life Institute scientific advisor Stuart Russell conceded that the letter might be anachronistic:

We should have started the policy discussions ten years ago. Maybe it's too late now. The urgency is because technology has moved quickly and made feasible a lot of ideas for weapons that were just speculative a few years ago.

Quick movement in the field is by no means limited to the military. In recent years the private sector has once again taken an active interest in machine learning, with lots of investment, acquisitions and big hires. Perhaps ironically, the signatories and authors of the letter are the people best placed to know the full extent of current commercial AI capability, but they are bound by corporate loyalty and trade secrets to keep this insight private. So we’re left to take them at their word that the technology is imminent and fearsome, and to be grateful for their advocacy.

Certainly the popularity of science fiction, and the hype around this most recent letter, suggest we want to believe. This can cause us to overestimate the intelligence of the machines we encounter. The pioneering computer scientist Joseph Weizenbaum observed in his 1976 book “Computer Power and Human Reason” that early users of his ELIZA natural language processing program readily believed that they were conversing with an intelligent machine.

This reaction to ELIZA showed me more vividly than anything I had seen hitherto the enormously exaggerated attributions even a well-educated audience is capable of making, even strives to make, to a technology it does not understand.
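It did not take much machinery to produce that reaction. A toy ELIZA-style responder in Python, keyword matching plus template rewriting with no model of meaning behind it, might look like this (the rules below are illustrative inventions, not Weizenbaum’s original script):

    import re

    # A toy ELIZA-style responder: keyword matching plus template
    # rewriting, with no understanding behind it. (Rules are our own
    # illustrative inventions, not Weizenbaum's original script.)
    RULES = [
        (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
    ]

    def respond(utterance):
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return "Please go on."  # stock deflection when nothing matches

    print(respond("I am worried about intelligent machines"))
    # -> How long have you been worried about intelligent machines?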

The Future of Life Institute doesn’t discourage this sort of misapprehension. It may have decided resistance was futile. New York Times Senior Science Writer John Markoff recounted the conversation at a recent dinner held by the FLI:

“[FLI co-founder] Max Tegmark was jumping on the press for using images of The Terminator in coverage of the Institute. And I sort of called bullshit on that. I mean, they were talking about AI as an existential threat to humanity, so how is the Terminator not a way to convey that? I thought it was kind of weird, like, ‘step up to it.’ And with this letter, they’ve done that,” Markoff said Tuesday by phone.

In fact, much of the algorithmic machinery underlying the current wave of machine learning has been open-sourced, a situation very different from, say, the development of nuclear arms. This poses its own problems. Take a stroll through the YouTube search results for “Syrian remote control machine gun,” and you’ll probably get a sense of some of the frightening, autoplaying possibilities.

For the most part, contemporary AI is relatively non-threatening, deployed all around us in the software that facilitates our lives, from predictive directions in maps to recommendation engines to advertising. At times, the shortcomings of these new mathematical intelligences have been embarrassing, but mostly we remain unaware of their operations.

As sentient thought-leader [and Pando investor] Marc Andreessen tweeted on Monday, after the open letter was announced:

Current commercial AI is weak despite its sexy high profile, but if we take the letter at its word, that is more feature than bug. Perhaps, then, the letter is a highly public form of expectation jujitsu.

A cynical observer might ask why this initiative is happening now, in just this way: the specter of militant AI is bad for publicity in an age when commercial AI quietly reaches ever deeper into our lives, the better to unlock our value as users. Bold statements of purpose are, of course, very good publicity. The letter takes some account of this banality:

...most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.

Future societal benefits are not enumerated, though beneficial military uses are.

Machiavelli famously said it is safer to be feared than loved, but he also said it was best, though difficult, to be both. The FLI seems to be rising to the challenge.

Since both AI and the Internet – where many on the list of signatories made their initial money – first arose from military-funded government research, this week’s letter is both a bold bite at the feeding hand and an intransigent public display by a triumphant private sector, boosting its own profile at the military’s expense, with academic defectors in tow. Google isn’t giving back its self-driving car technology or research teams.

There may also be a policy and publicity arms race happening around AI – at some point governments are sure to take a closer look at machine-learning’s commercial applications. In that race, the organic intelligences behind this letter have chosen to take an offensive approach. 

“Basically, we agree with the advocates of robot weapons that humans are not very good at obeying the laws of war. We believe also that humans *armed with robot weapons* will also not be very good at obeying the laws of war,” Russell said.

Could machines ever be so good at crystallizing public opinion?