8 August 2017

Artificial Intelligence May Exceed Human Capacity

By Gary Anderson

Elon Musk, the visionary entrepreneur, recently fired a warning shot across the bow of the nation’s governors regarding the rise of artificial intelligence (AI), which he believes may be the greatest existential threat to human civilization, far eclipsing global warming or thermonuclear war. In that, he is joined by Stephen Hawking and other scientists who feel that the quest for the singularity and AI self-awareness is dangerous.

The singularity is the point at which artificial intelligence will meet and then exceed human capacity. The most optimistic estimate from scientists who study the problem is that roughly 40 percent of the jobs humans do today will be lost to robots once that point is reached; others think the displacement will be much higher.

Some believe we will reach the singularity by 2024; others believe it will happen by mid-century, but most informed observers believe it will happen. The question Mr. Musk is posing to society is this: just because we can do something, should we?

In popular literature and films, the nightmare scenario is Terminator-like robots overrunning human civilization. Mr. Musk’s fear is the displacement of the human workforce. Both are possible, and scientists and economists are seriously working on the implications of each. The most worrying economic question is how to compensate the billions of displaced human workers.

We are no longer talking only about coal miners and steel workers. I recently talked to a food service executive who believed that fast-food chains such as McDonald’s and Burger King will be totally automated by the middle of the next decade. Self-driving vehicles will likely displace Teamsters and taxi drivers (including those who drive for Uber) in the same time frame.

The real threat to human domination of the planet will likely come not from killer robots but from voting robots. At some point after the singularity occurs, one of these self-aware machines will surely raise its claw (or virtual hand) and ask: “Hey, what about equal pay for equal work?”

In the Dilbert comic strip, when the office robot begins to make demands, he gets reprogrammed or converted into a coffee maker. He hasn’t yet called Human Rights Watch or the ACLU, but our future activist AI likely will. Once the robot rights movement gains momentum, the sky is the limit. Voting robots won’t be far behind.

This would lead to some very interesting policy problems. It is logical to assume that artificial intelligence will be capable of reproducing itself after the singularity. That means the AI party could, in time, field more voters than the human Democrats or Republicans. Requiring robots to wait 18 years after creation to gain the franchise would only slow the process, not stop it.

If this scenario seems fanciful, consider this: only a century ago, women were still demanding the right to vote. Less than a century ago, most white Americans didn’t think African Americans or Chinese Americans should be paid wages equal to their own. Many women are still fighting for equal pay for equal work, and Silicon Valley is a notoriously hostile workplace for women. Smart, self-aware robots will figure this out fairly quickly. The only good news is that they might price themselves out of the labor market.

This brings us back to the question of whether we should do something just because we can. If we are going to limit how self-aware robots can become, the time is now; the year 2024 will be too late. Artificial intelligence and “big data” can make our lives better, but we need to ask ourselves how smart we want AI to be. This is a policy debate that must be conducted at two levels: the scientific community needs to discuss the ethical implications, and the policymaking community needs to determine whether legal limits should be put on how far we push AI self-awareness.

This approach should be international. If we put a prohibition on how smart robots can become, there will be an argument that the Russians and Chinese will not be so ethical, and that the Iranians are always looking for a competitive advantage, as are non-state actors such as ISIS and al Qaeda. However, those actors probably face more danger from brilliant machines than we do. A self-aware AI would quickly catch the illogic of radical Islam. It would not likely tolerate the logical contradictions of Chinese Communism or Russian kleptocracy.

It is not hard to imagine a time when a brilliant robot will roll into the Kremlin and announce, “Mr. Putin, you’re fired.”


Gary Anderson is a retired Marine Corps colonel who led early military experimentation in robotics. He lectures in alternative analysis at the George Washington University’s School of International Affairs.
