9 May 2014

STEPHEN HAWKING WARNS OF POSSIBLE DIRE THREAT TO MANKIND: ARTIFICIAL INTELLIGENCE MIGHT BE HUMANITY’S WORST MISTAKE

May 5, 2014 

Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek published an article in London’s The Independent on the state of artificial intelligence (AI) and where we might be headed in the future. Hawking is Director of Research at the Department of Applied Mathematics and Theoretical Physics at Cambridge, and a 2012 Fundamental Physics Prize laureate for his work on quantum gravity; Stuart Russell is a computer science professor at the University of California, Berkeley, and co-author of ‘Artificial Intelligence: A Modern Approach’; Max Tegmark is a physics professor at the Massachusetts Institute of Technology (MIT) and the author of ‘Our Mathematical Universe’; and Frank Wilczek is a physics professor at MIT and a 2004 Nobel laureate for his work on the strong nuclear force.

“With the Hollywood blockbuster Transcendence currently playing in cinemas, showcasing clashing visions for the future of humanity,” the authors write, “it’s tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake,” the authors add, “and potentially our worst mistake in history.” “AI research is now progressing rapidly,” they write. “Recent landmarks such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now, and Cortana are merely symptoms of an IT arms race, fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring,” they warn.

“The potential benefits are huge; everything that civilization has to offer is a product of human intelligence. We cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history,” and, if we aren’t careful, “its last,” the authors contend.

“In the near term,” they add, “world militaries are considering autonomous-weapon systems that can choose and eliminate targets,” and substantial research and development investments are being made across the globe in this fertile area. “In the medium term,” as emphasized by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, “AI may transform our economy to bring both great wealth and great dislocation.”

“Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements in human brains. An explosive transition is possible, although it might play out differently from the movie: as Irving Good realized in 1965, machines with super-human intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a “singularity” and Johnny Depp’s movie character calls “transcendence.”

“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

“So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here, we’ll leave the lights on”? Probably not, but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits [of AI] and avoiding the risks.”

I certainly wouldn’t take issue with these gentlemen regarding the significance of AI, or the potentially eye-watering advancements it may offer humanity. And they certainly raise an extremely important question about whether we have adequately “war-gamed” the risks, benefits, and unintended consequences of significant advances in AI. Social media, crowdsourcing, and the ‘wisdom of the crowd,’ via an Internet of Things, are enabling unprecedented advances across a number of domains: computing, robotics (miniature and micro), genetics, nanotechnology, autonomous systems, and big-data analytics are all racing ahead of Moore’s Law.

As The Daily Galaxy noted back on March 25, 2010, “AI is becoming the stuff of sci-fi reality: the Mars rover can now select the rocks that look the most promising for signs of life; a robot can open doors and find electrical outlets to recharge itself; computer viruses run that no one can stop; Predator drones, though still remotely controlled by humans, come close to being machines that can kill autonomously.” That was written four years ago. Now we have autonomous systems that can interact with each other without human intervention, and sensors and systems that can activate based on target activity, albeit in limited fashion, for now.

“The key factor in singularity scenarios,” The Daily Galaxy wrote, “is the positive-feedback loop for self-improvement: once something is even slightly smarter than humanity, it can start to improve itself, or design new intelligence, faster than we can, leading to an intelligence explosion designed by something that isn’t us.” Artificial intelligence will surpass human intelligence after 2020, predicted Vernor Vinge, a world-renowned pioneer in AI who has warned of the risks and opportunities that an electronic super-intelligence would offer mankind. In his 1993 manifesto, “The Coming Technological Singularity,” Vinge argues that exponential growth [and advancement] in technology means a point will be reached [in the not-too-distant future] where the consequences are unknown.
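That feedback-loop argument can be made concrete with a toy numerical sketch (my own illustration, not anything from Vinge, The Daily Galaxy, or the article’s authors; the starting capability, growth rate, and number of generations are arbitrary assumptions): let each generation of a system redesign its successor, with the size of each improvement scaling with how capable the designer already is. A tiny initial edge then compounds into runaway growth.

```python
# Toy model of the positive-feedback loop for self-improvement described above.
# All numbers are illustrative assumptions, not claims about any real system.

def self_improvement(initial_capability=1.05, improvement_rate=0.5, generations=15):
    """Simulate a system that redesigns itself once per generation.

    A capability above 1.0 means 'slightly smarter than its designers'.
    Each improvement is proportional to how far ahead the system already is,
    which is what turns a small initial edge into explosive growth.
    """
    capability = initial_capability
    history = [capability]
    for _ in range(generations):
        capability *= 1 + improvement_rate * (capability - 1.0)
        history.append(capability)
    return history

if __name__ == "__main__":
    for generation, capability in enumerate(self_improvement()):
        print(f"generation {generation:2d}: capability {capability:12.3e}")
```

Run with a starting capability of 1.0 or less, the same loop stays flat or decays; in this toy model the explosion only begins past that threshold, which is the intuition behind “once something is even slightly smarter than humanity.”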

George Dvorsky, writing in io9 on April 1, 2013, discussed the potential ramifications of leap-ahead advancements in AI in an article titled “How Much Longer Before Our First AI Catastrophe?” He wrote, “When the singularity hits, it will be, in the words of mathematician I.J. Good, an ‘intelligence explosion,’ and it will indeed hit us like a bomb. Human control will forever be relegated to the sidelines, in whatever form that might take. A pre-Singularity AI catastrophe, on the other hand, will be containable. But just barely. It’ll likely arise from an expert system or a super-sophisticated algorithm run amok. And the worry is not so much power, which is definitely part of the equation, but the speed at which it will inflict the damage. By the time we have a grasp on what’s going on, something terrible may have happened.”

“It is difficult to know,” he writes, “exactly how, when, or where our first true AI catastrophe will occur, but we’re still decades off. Our infrastructure is still not integrated or robust enough to allow for something really terrible to happen. But by the mid-2040s (if not sooner), our highly digital and increasingly interconnected world will be susceptible to these sorts of problems.”

Rich and interesting, with lots to think about. V/R, RCP
