8 October 2016

How To Steal, Reverse Engineer, And/Or Reproduce An Artificial Intelligence ‘Black Box’

That’s the title of Andy Greenberg’s September 30, 2016 article on WIRED.com. He begins by noting that “in the burgeoning field of computer science known as machine learning, engineers often refer to the artificial intelligences they create as ‘black box’ systems: Once a machine learning engine has been trained from a collection of example data to perform anything from facial recognition to malware detection, it can take in queries — Whose face is that? Is this app safe? — and spit out answers without anyone, not even its creators, fully understanding the mechanics of the decision-making inside the box.”

“But,” Mr. Greenberg adds, “researchers are increasingly proving that even when the inner workings of those machine learning engines are inscrutable, they aren’t exactly secret. In fact,” he writes, “they’ve found that the guts of those black boxes can be reverse-engineered, and even fully reproduced — stolen, as one group of researchers puts it — with the very same methods used to create them.”

“In a paper released earlier this month, titled ‘Stealing Machine Learning Models via Prediction APIs,’ a team of scientists at Cornell Tech, the Swiss institute EPFL in Lausanne, and the University of North Carolina detail how they were able to reverse engineer machine learning-trained AIs based only on sending them queries and analyzing the responses. By training their own AI with the target AI’s output, they found they could produce software that was able to predict with near 100 percent accuracy the responses of the AIs they’d cloned, sometimes after a few thousand, or even just hundreds, of queries.”
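The paper’s simplest case gives a concrete sense of how a prediction API can leak a model: a classifier that returns confidence scores, such as a binary logistic regression, satisfies log(p/(1-p)) = w·x + b, so each query yields one linear equation in the unknown weights, and a handful of queries is enough to solve for them almost exactly. The sketch below is a minimal Python illustration of that equation-solving idea, not the researchers’ code; the dimensions, queries, and “secret” model are invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 20  # number of input features (made up for this example)

    # The "black box": a logistic regression whose weights the attacker never sees.
    w_secret = rng.normal(size=d)
    b_secret = rng.normal()

    def query(x):
        """Stand-in for a prediction API that returns only a confidence score in (0, 1)."""
        return 1.0 / (1.0 + np.exp(-(x @ w_secret + b_secret)))

    # Each query gives one linear equation, since log(p / (1 - p)) = w.x + b,
    # so d + 1 queries are enough to solve for the d weights plus the bias.
    X = rng.normal(size=(d + 1, d))
    p = np.array([query(x) for x in X])
    logits = np.log(p / (1.0 - p))

    A = np.hstack([X, np.ones((d + 1, 1))])  # unknowns are [w, b]
    solution, *_ = np.linalg.lstsq(A, logits, rcond=None)
    w_stolen, b_stolen = solution[:-1], solution[-1]

    print("largest weight error:", np.abs(w_stolen - w_secret).max())  # tiny, i.e. a near-perfect copy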

“You’re taking this black box, and through this very narrow interface, you can reconstruct its internals, [thus] reverse engineering the box,” said Ari Juels, a Cornell Tech professor who worked on the project. “In some cases, you can actually do a perfect reconstruction.”

Taking The Innards Out Of A Black Box

“The trick,” the scientists told WIRED.com, “could be used against services offered by companies like Amazon, Google, Microsoft, and BigML that allow users to upload data into machine-learning engines and publish or share the resulting model online, in some cases with a pay-by-the-query business model. The researchers’ method, which they call an extraction attack, could duplicate AI engines meant to be proprietary, or in some cases even recreate the sensitive, private data an AI has been trained with,” Mr. Greenberg wrote. “Once you’ve recovered the model for yourself, you don’t have to pay for it, and you can also get serious privacy breaches,” said Florian Tramer, the EPFL researcher who worked on the project before taking a [new] position at Stanford University.

“In other cases, the technique might allow hackers to reverse engineer, and then defeat, machine-learning-based security systems meant to filter spam and malware,” Tramer warned. “After a few hours’ work … you’d end up with an extracted model you could then evade if it were used on a production system.”
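Tramer’s evasion scenario is easy to picture once the filter has been copied: with a local clone in hand, an attacker can tweak a message offline until the clone passes it, then send the result to the real system. The fragment below is a hypothetical sketch of that offline step against a simple extracted linear spam filter; the weights, feature names, and starting message are all invented.

    import numpy as np

    # Hypothetical extracted spam filter: a linear model over binary word features.
    w_stolen = np.array([2.0, 1.5, 1.0, -0.5, -1.0])  # positive weights mark "spammy" words
    b_stolen = -0.5
    FEATURES = ["free", "winner", "click", "meeting", "invoice"]

    def copied_filter_flags(x):
        return x @ w_stolen + b_stolen > 0  # True means the copy would flag the message as spam

    x = np.array([1.0, 1.0, 1.0, 0.0, 0.0])  # a message the copied filter flags
    assert copied_filter_flags(x)

    # Offline evasion: greedily flip whichever feature lowers the spam score the most,
    # never touching the production system, until the local copy lets the message through.
    while copied_filter_flags(x):
        trial_scores = []
        for i in range(len(x)):
            trial = x.copy()
            trial[i] = 1.0 - trial[i]
            trial_scores.append(trial @ w_stolen + b_stolen)
        best = int(np.argmin(trial_scores))
        x[best] = 1.0 - x[best]

    print("evading feature vector:", dict(zip(FEATURES, x.astype(int))))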

“The researchers’ technique works by essentially using machine learning itself to reverse engineer machine learning software,” Mr. Greenberg writes.
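When the target returns only labels, or when its internal form is unknown, that idea can be run with generic machine learning: treat the target’s answers as training labels, fit your own model to them, and then check how often the clone agrees with the original. This is a minimal sketch of that retraining approach, assuming scikit-learn and a synthetic stand-in for the remote service; it illustrates the idea rather than reproducing the researchers’ implementation.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)

    # Synthetic stand-in for a remote prediction API; the attacker sees only its answers.
    X_hidden = rng.normal(size=(2000, 10))
    y_hidden = (X_hidden[:, 0] + X_hidden[:, 1] > 0).astype(int)
    target = DecisionTreeClassifier(max_depth=5).fit(X_hidden, y_hidden)

    def query_api(X):
        return target.predict(X)  # labels only, as a stingier API might return

    # "Machine learning against machine learning": label random queries with the
    # target's own answers and train a substitute model on those answers.
    X_queries = rng.normal(size=(1500, 10))
    substitute = LogisticRegression(max_iter=1000).fit(X_queries, query_api(X_queries))

    # Fidelity: how often the clone agrees with the original on fresh inputs.
    X_test = rng.normal(size=(5000, 10))
    agreement = (substitute.predict(X_test) == query_api(X_test)).mean()
    print(f"agreement with the target after 1,500 queries: {agreement:.1%}")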

Stealing A Steak-Preference Predictor 

“The researchers tested their attack against two services: Amazon’s machine-learning platform and the online machine-learning service BigML,” Mr. Greenberg wrote. The researchers “tried to reverse engineer the AI models built on those platforms from a series of common data sets. On Amazon’s platform, for instance,” he notes, “they tried ‘stealing’ an algorithm that predicts a person’s salary based on demographic factors like their employment, marital status, and credit score, and another that tries to recognize one-through-ten numbers based on images of handwritten digits. In the demographics case, they found that they could reproduce the model without any discernible difference after 1,485 queries, and after just 650 queries in the digit recognition case.”

“On the BigML service,” the researchers “tried their extraction technique on one algorithm that predicts German citizens’ credit scores based on their demographics, and on another that predicts how people like their steak cooked — rare, medium, or well done — based on their answers to other lifestyle questions. Replicating the credit score engine took just 1,150 queries, and copying the steak-preference predictor took just over 4,000 queries,” Mr. Greenberg wrote.

Even with the vulnerabilities of an AI ‘black box’ demonstrated as noted above, all is not lost. “Not every machine-learning algorithm is so easily reconstructed,” said Nicolas Papernot, a researcher at Penn State University who worked on another machine-learning reverse engineering project earlier this year, Mr. Greenberg writes.

Amazon declined to discuss the vulnerabilities of its AI ‘black box’ in an on-the-record interview with WIRED.com; but, Mr. Greenberg added, when he and other WIRED representatives spoke to the researchers about these security gaps, Amazon had told the researchers that the chance of someone breaching its AI ‘black box’ “was reduced by the fact that Amazon doesn’t make its machine learning engines public, instead only allowing users to share access among collaborators.” “In other words,” Mr. Greenberg wrote, “the company warned — take care who you share your AI with.”

From Facial Recognition To Facial Reconstruction 

“Aside from merely stealing AI, the researchers warn that their attack also makes it easier to reconstruct the often-sensitive data it’s trained on,” Mr. Greenberg notes. “The researchers pointed to a different paper, published late last year, which showed it’s possible to reverse engineer a facial recognition AI that responds to images with guesses of each person’s name. That method would send the target AI repeated test pictures, tweaking the images until they homed in on the pictures that [the] machine learning engine was trained on, reproducing the actual face images without the researchers’ computer having ever actually seen them. By first performing their AI-stealing attack before running the facial-reconstruction technique, they showed they could actually reassemble the face images far faster on their own stolen copy of the AI, running on a computer they controlled, reconstructing 40 distinct faces in just 10 hours, compared to 16 hours when they performed the facial reconstruction on the original AI engine,” Mr. Greenberg wrote.
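The facial-reconstruction attack Mr. Greenberg describes amounts to hill-climbing on the input: nudge the pixels of a candidate image in whatever direction raises the model’s confidence for the person being reconstructed, which is far faster on a local stolen copy because the attacker can evaluate it as often as they like. The sketch below illustrates that inversion loop on a small, invented softmax classifier, using the gradient of the target class’s log-probability; it is an assumption-laden toy, not the cited researchers’ method verbatim.

    import numpy as np

    rng = np.random.default_rng(2)
    n_pixels, n_people = 64, 5

    # Stand-in for a locally held (stolen) face-recognition model: a softmax
    # classifier whose weights the attacker can now inspect and query freely.
    W = rng.normal(size=(n_people, n_pixels))
    b = rng.normal(size=n_people)

    def probabilities(x):
        logits = W @ x + b
        e = np.exp(logits - logits.max())
        return e / e.sum()

    # Model inversion: start from a blank "image" and repeatedly follow the
    # gradient of the target person's log-probability with respect to the pixels.
    target_person = 3
    x = np.zeros(n_pixels)
    for _ in range(200):
        p = probabilities(x)
        grad = W[target_person] - p @ W        # d log p[target] / d x for a softmax model
        x = np.clip(x + 0.1 * grad, 0.0, 1.0)  # keep pixel values in a valid range

    print("model's confidence that the reconstructed image is the target:",
          round(float(probabilities(x)[target_person]), 3))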

But the researchers also warned of the potential for denial and deception with respect to this technique: they “learned how to trick the original AI. When they applied that technique to AI engines designed to recognize numbers or street signs, for instance,” Mr. Greenberg writes, the researchers “found they could cause the engine to make incorrect judgments in between 84 percent and 96 percent of cases.”
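Those 84-to-96-percent figures describe adversarial examples: small perturbations computed against the attacker’s local copy that also fool the original engine, because a faithful clone shares much of the original’s decision boundary. Below is a minimal sketch of that transfer idea, using a fast-gradient-sign-style perturbation on a pair of invented linear classifiers standing in for the original and a near-identical extracted copy.

    import numpy as np

    rng = np.random.default_rng(3)
    d = 100

    # The original black box and the attacker's extracted copy: two binary
    # classifiers with nearly identical (invented) weights, as a good extraction yields.
    w_original = rng.normal(size=d)
    w_copy = w_original + 0.01 * rng.normal(size=d)

    def predict(w, x):
        return int(x @ w > 0)

    x = rng.normal(size=d)
    clean_label = predict(w_original, x)

    # Fast-gradient-sign-style evasion computed only on the local copy: push every
    # feature a small step in the direction that flips the copy's score.
    eps = 0.5
    direction = -1.0 if clean_label == 1 else 1.0
    x_adv = x + direction * eps * np.sign(w_copy)

    print("original model on the clean input:    ", clean_label)
    print("original model on the perturbed input:", predict(w_original, x_adv))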

Mr. Greenberg ends with this warning: “The latest research into reconstructing machine learning engines could make that deception even easier. And if that machine learning is applied to security- or safety-critical tasks, like self-driving cars or filtering malware, the ability to steal and analyze them could have troubling implications. Black box or not, it may be wise to consider keeping your AI out of sight.”

The Vulnerabilities Demonstrated By The Effort Referred To In This WIRED.com Article Really Shouldn’t Surprise Anyone

Time after time, whether it is the ability to hack a stand-alone machine or, now, to steal an AI black box, playing defense in the cyber world is, to a large degree, a fool’s errand. Yes, best cyber hygiene practices — two-step authentication, frequently changing passwords, never opening an attachment or link unless you are sure it is genuine, and so on — solve about 80 percent of the problem of cyber theft, cyber espionage, and the like. But determined hackers, cyber mafias, nation-states, lone wolves, etc., have the ability to breach almost any networked system. If a person built it, a person can almost always figure out a way to compromise the network. Remember, every medieval castle in Europe was eventually breached — and, more often than not, with the help of a trusted insider.

What I did not see in Mr. Greenberg’s article was whether this technique could be used not to steal the AI, but to piggy-back on the initial breach and search the network unobserved, copying or corrupting the data without leaving digital fingerprints or a digital trail. If it can, then we have an even bigger threat and vulnerability than Mr. Greenberg wrote about.

Denial and deception have been practiced since the dawn of mankind. But it is a skill set that requires a lot of time, practice, and experience to pull off successfully; it cannot simply be pulled off the shelf when needed. It requires dedication, and skills honed over a long period of time. What I don’t know is whether we are practicing this technique ourselves — but we had better be. If one does not practice and continually upgrade one’s ability to deny and deceive, one becomes more vulnerable to falling victim to this kind of technique, because it becomes much, much harder to perceive and recognize if you do not also practice it yourself. That, in turn, increases the probability of strategic surprise, and perhaps a catastrophic cyber attack. V/R, RCP.
