29 April 2018

Artificial Intelligence: Welcome to the Age of Disruptive Surprise

BRUCE E. PEASE

With the last few years of progress in artificial intelligence, it is hard to look forward without excitement…and apprehension. If you are paying attention at all, you are wondering what lies ahead. Will it be great? Will it be disastrous for many? Surely, because we are inventing it, we have a good sense of what we are spawning? But no, this technology feels different. It almost feels like it is being discovered rather than invented. It will be big; it will impact our lives and our society. It will be disruptive…and we don't know how.


I spent a career in intelligence learning the business of forecasting and warning, and I teach those things today. I learned that warning is easier than forecasting—usually you warn of vulnerabilities and possibilities, but you forecast likelihoods. Likelihoods are much harder to determine. On forecasting, I learned the hard way to be very humble. I learned that the word “probably” is overused…and the words “almost certainly” are rarely deserved when we are talking about anything over the horizon.

The difference between warning and forecasting plagues discussion about artificial intelligence. When such visionaries as Stephen Hawking and Elon Musk warn of what could happen…the world-changing hazards that might come with advanced artificial intelligence—"superintelligence," to use Nick Bostrom's term—it is worth paying serious attention. But when it comes to forecasting what will happen, it is easy to feel helpless in choosing between such credentialed observers as Ray Kurzweil and Rodney Brooks. Kurzweil, Google's in-house futurist and director of engineering, calculates that we will see a computer pass the Turing test (convincingly mimicking human intelligence) by 2029, and that nanobots in the human brain will connect it to the cloud by the late 2030s. Brooks, former director of MIT's Computer Science and Artificial Intelligence Laboratory, says, "Mistaken predictions lead to fears of things that are not going to happen, whether it's the wide-scale destruction of jobs…or the advent of AI that has values different from ours and might try to destroy us."

I agree that there are many reasons to warn about where we may be headed with artificial intelligence. I believe this is the most important leap in technology since man discovered how to harness fire, and we are still struggling with fire's demon side. Nearly every aspect of our lives is being touched by digital technologies, and everything digital will be affected by artificial intelligence. Our leaders' decision-making in economics, law enforcement, and warfare will be accelerated by artificial intelligence, eventually to a point where humans will often stand aside.

Waiting for human orders—waiting for our own plodding ability to perceive, grasp, and react—could mean losing a life-and-death competition. Our society, commerce, governance, and statecraft were built for the analog/industrial world. I see no sign they will catch up to the realities of a digital world, much less an AI-accelerated world, until after disasters wake us.

But the forecaster in me yearns to move beyond such warnings. When considering our likely future in an AI-accelerated world, what can I offer? I’m not even tempted to forecast the progress of the technology. Rather, from a career in supporting national security decision-making, my mind goes to that arena of our AI-accelerated future. When weighing the current pace of advances in autonomous systems, machine cognition, artificial intelligence, and machine-human partnership, here is the very short list of things I can forecast with confidence:

We will be surprised…strategically and repeatedly. It won't be because we lack imagination—science fiction writers are doing important work framing the possibilities and making them relatable. It will be because we will have difficulty sorting the likely from the merely possible.

Not all the surprises will be negative. Indeed, most are likely to be positive. In science and technology, we tend to call positive surprises "breakthroughs." But the bigger the breakthrough, the bigger the disruption. The nearest example might be the challenge of absorbing more than 3 million displaced truckers as driverless trucks become a reality. Imagine the jolt to our current economy with breakthroughs in clean, renewable energy, in curing cancer, or when we finally get flying cars.

Our tendency to invest reactively rather than preemptively will continue to dominate our decision-making, so we will lurch from disruption to disruption. (Our current struggle with massive hacking events is instructive here.) Preempting the most dire possibilities will seem too expensive, too divisive, or too different from today's norms.

The disruptions will make us all wish we could slow the pace of change in this arena, but that will not feel like a viable option. We will always have our eye on competitors who are less sensitive to the human cost of AI-driven disruption.

As the visionaries and practitioners argue about what AI will and won't be able to do, no controversy is more important than whether AI will surpass the level of human cognition. Will it acquire the ability to understand…the ability to reason? This is the AI that "wakes up" and becomes self-aware, and I see no physical law prohibiting that milestone. We will see AI have enormous impact even short of that threshold, but the greatest dangers and opportunities lie on the other side of its awakening.

But forecasting that breakthrough—whether or when it will happen—seems out of reach for now. An artificial intelligence that is conscious is a good subject for warning, but a poor subject for forecasting, at least until we have a better notion of what “consciousness” is. If it happens, it will be a pivot-point in human history.

As I learned in intelligence, when forecasting is impossible but the stakes are high, it is often worthwhile to at least identify key indicators to watch for. As I watch the development of AI, there are three particular breakthroughs that I am watching for…three milestones in cognition that may foreshadow human-level reasoning:

When does a machine first appreciate the very special thing that is a fact? In logic, a single fact can trump a mountain of source material and pages of exquisite reasoning. Currently, no machine I know of can distinguish a fact from an assertion. My thermostat determines a fact—that the temperature in my den is 70°F—but it doesn't recognize it as one. If it had conflicting sources of temperature information, it would be flummoxed. The ability to sort fact from assertion is especially important because most scenarios of a super-intelligent AI breaking out of control have it first connecting to the internet as its source of unbounded knowledge and control. Unless anchored in facts, it would find the sheer volume of Elvis sightings convincing.
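To make the thermostat point concrete, here is a minimal toy sketch in Python. The names and logic are hypothetical, mine rather than anything from this essay: a controller that records temperature assertions but has no concept of a fact, so when sources conflict, the best it can do is weigh them all equally.

```python
# Toy sketch: a thermostat-style controller with no notion of "fact."
# Every reading is just an assertion, and all assertions weigh the same.

def should_heat(readings_f: list[float], target_f: float = 70.0) -> bool:
    """Decide whether to run the heat, given temperature readings.

    The controller cannot judge which source is telling the truth,
    so conflicting readings simply get averaged: a broken sensor
    counts exactly as much as an accurate one.
    """
    if not readings_f:
        raise ValueError("no temperature sources available")
    believed_temp = sum(readings_f) / len(readings_f)
    return believed_temp < target_f

# One accurate sensor reports 70; a faulty one insists on 40.
# The controller "believes" 55 and heats a room that is already warm.
print(should_heat([70.0, 40.0]))  # True
```

Counting assertions instead of weighing truth is exactly the failure mode the Elvis-sightings line warns about: volume substitutes for fact.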

When does a machine make something of "the dog that didn't bark"? In reasoning—in testing hypotheses—the absence of something that should be there can be meaningful to analysts. Right now, machine cognition strains to make something of the information it is fed. It doesn't spend cycles noticing that something should be there but isn't.
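A small illustration of the silent-dog point, again a hypothetical sketch rather than anything from the essay: a naive matcher scores only the evidence it is handed, while an analyst-style scorer also penalizes evidence a hypothesis predicts but that is conspicuously absent.

```python
# Toy sketch: scoring a hypothesis against evidence.
# The naive version only credits what is present; the analyst version
# also notices expected evidence that is missing (the dog that didn't bark).

def naive_score(expected: set[str], observed: set[str]) -> int:
    # Credits matches; silence carries no weight at all.
    return len(expected & observed)

def analyst_score(expected: set[str], observed: set[str]) -> int:
    # Expected-but-absent evidence counts against the hypothesis.
    matches = len(expected & observed)
    missing = len(expected - observed)
    return matches - missing

# Hypothesis: a stranger broke in. We would expect a forced lock
# AND a barking dog. Only the forced lock is observed.
stranger_hypothesis = {"forced lock", "dog barked"}
observed = {"forced lock"}

print(naive_score(stranger_hypothesis, observed))    # 1: looks supported
print(analyst_score(stranger_hypothesis, observed))  # 0: the silent dog matters
```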

When does a machine deal with the question, "Why?" I don't mean here the question of whether machines can explain why they produced a particular answer. (We are likely to be able to teach them to show us how they developed the answer, and that will be a big deal in helping us partner with AI.) But "Why?" is bigger than that more mechanical question of "How?" The human mind is "wired" to ask "Why?" It is essential to understanding cause and effect. The ape might understand that fire burns without understanding why it burns. But "Why?" is the question that allowed the caveman to harness and create fire.
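Here, in the same hypothetical spirit, is a sketch of the gap between "How?" and "Why?": a model that can produce a trace of how it reached an answer while containing nothing that represents cause and effect.

```python
# Toy sketch: a model that can answer "How?" (a trace of its computation)
# but has no representation of "Why?" (cause and effect).

FEATURE_WEIGHTS = [("smoke", 2), ("heat", 1)]

def predict_fire(observed: dict[str, bool]) -> tuple[bool, list[str]]:
    trace = []
    score = 0
    for feature, weight in FEATURE_WEIGHTS:
        if observed.get(feature):
            trace.append(f"{feature} present: +{weight}")
            score += weight
    return score >= 2, trace  # the trace answers "How?"

answer, how = predict_fire({"smoke": True, "heat": False})
print(answer)  # True
print(how)     # ['smoke present: +2']

# "Why does it burn?" is a causal question: what happens if the oxygen
# is removed? Nothing in these weights encodes that, so the model cannot
# even pose the question, let alone answer it.
```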

Warning, forecasting, and tracking indicators will be vital in harnessing this very disruptive technology. They may help us react more productively to the disruptions when they come, perhaps restraining our human tendency to overreact to bad surprises. I also agree with MIT's Max Tegmark, president of the Future of Life Institute, that the best way to avoid the most catastrophic possibilities of super-intelligent computers is to start shaping that future now. We have enough indicators already of AI's awesome potential to start framing choices.

Bruce E. Pease is a former analyst and leader of analysis, with 30 years in the US intelligence community, including leading analysis on strategic weapons and emerging technologies. He served as Director of Intelligence Programs on the National Security Council in the Clinton Administration. He is also an experienced teacher of leadership, analysis, and ethics, and author of the forthcoming book, Leading Analysis. The views expressed here are his own and do not reflect the official views of CIA or any agency.
