19 September 2016

THE ‘MAN WITH A THOUSAND FACES’ CAN HIDE NO MORE: RESEARCHERS TRAIN ARTIFICIAL INTELLIGENCE (AI) TO DEFEAT FACE BLURRING TECHNOLOGIES — AND APPARENTLY, ALMOST ANYONE CAN DO IT — SIGNIFICANT PRIVACY/INTELLIGENCE/LAW ENFORCEMENT IMPLICATIONS

Peter Hess writes in the September 13, 2016 edition of PopularScience.com, that “machine learning researchers at Cornell Tech and the University of Texas at Austin, have developed software that makes it possible for users to recognize a person’s concealed face in photographs, or videos.” Mr. Hess adds that “the researchers used this software to defeat three different privacy tools,” citing a WIRED.com article, “including both blurring and pixelating technologies. By teaching the computer program to identify faces, they could match distorted images to intact ones.” The researchers “were careful to emphasize that this technique does not enable them to reconstruct distorted images.”

Artificial Intelligence Can Overcome Multiple Types Of Obfuscation

“Even though researchers have employed sophisticated machine-learning techniques to train the software to identify faces, the technology they used is available to the average person,” Mr. Hess notes. “According to the researchers,” he adds, “their techniques call into question [just] how robust existing privacy software technologies are.”

“The most surprising thing was how the simplest thing we tried worked so well,” Vitaly Shmatikov, one of the Cornell Tech researchers, told Popular Science. Richard McPherson, Shmatikov’s student who collaborated on the research, emphasized how rudimentary some of the neural nets they used were: “One was almost a tutorial, the first one you download and play with when you’re learning neural nets,” he told Popular Science.
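The core of the attack described above is simple: because blurring and pixelation are deterministic transforms, an attacker can apply the same obfuscation to a gallery of known faces and then match an obfuscated query image against that gallery. The sketch below illustrates this idea with a nearest-neighbor matcher over random stand-in images; the actual research used trained neural networks, and every name here (`pixelate`, `identify`, the toy `gallery`) is a hypothetical illustration, not the researchers’ code.

```python
import numpy as np

def pixelate(img, block=8):
    """Mosaic obfuscation: replace each block x block tile with its mean."""
    h, w = img.shape
    out = img.astype(float).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]
            tile[:] = tile.mean()
    return out

rng = np.random.default_rng(0)
# Hypothetical "gallery" of 5 known 32x32 face images (random stand-ins).
gallery = rng.random((5, 32, 32))

# "Training" for this toy version: store the obfuscated form of each identity.
obfuscated_gallery = np.stack([pixelate(f) for f in gallery])

def identify(obfuscated_query):
    """Match an obfuscated image to the nearest obfuscated gallery entry."""
    dists = [np.linalg.norm(obfuscated_query - g) for g in obfuscated_gallery]
    return int(np.argmin(dists))

# A pixelated photo of identity 3 still matches identity 3, because
# obfuscation hides detail from humans but is not encryption.
query = pixelate(gallery[3])
print(identify(query))  # → 3
```

Note that, as the article stresses, nothing here reconstructs the original face; the attacker only learns which already-known identity the obfuscated image most resembles.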

As Lily Hay Newman wrote in the September 12, 2016 edition of WIRED.com, “AI Can Recognize Your Face Even If You’re Pixelated,” “as computer vision becomes increasingly robust, it’s starting to see things we can’t.” Ms. Newman writes that the researchers “used obfuscated test images that the neural networks hadn’t yet been exposed to in any form, to see whether the image recognition could identify faces, objects, and handwritten numbers. For some data sets, and masking techniques, the neural network success rates exceeded 80 percent, and even 90 percent. In the case of mosaicing, the more intensely pixelated images were, the lower the success rate got. But, their de-obfuscating machine learning software was often still in the 50 percent, to 75 percent range. The lowest success rate,” Ms. Newman writes, “was 17 percent, on a data set of celebrity faces obfuscated with the P3 redaction system. If the computers had been randomly guessing to identify the faces, shapes, and numbers, however, the researchers calculated the success rates for each test set would have been at most 10 percent, and as low as a fifth of a percent, meaning even relatively low identification success rates were still far better than guessing.”
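The baseline figures quoted above follow directly from the size of each test set: random guessing over k possible identities succeeds 1/k of the time. The class counts below are assumptions chosen only to reproduce the two endpoints the article cites (at most 10 percent; as low as a fifth of a percent), not the researchers’ actual data sets.

```python
# Random-guess baseline for a k-way identification task: success = 1/k.
for k in (10, 500):  # assumed class counts, chosen to match the quoted figures
    print(f"{k} classes -> random baseline {100 / k:.1f}%")
# 10 classes  -> 10.0%
# 500 classes -> 0.2%
```

This is why even a 17 percent success rate is meaningful: against a 0.2 percent baseline, it is nearly two orders of magnitude better than chance.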

It is important to note that this ability to overcome blurry photos and/or obfuscation did “not do reconstruction from scratch; and, can’t reverse the obfuscation to actually recreate pictures of the faces, or objects it’s identifying,” Ms. Newman wrote — though one would suspect that as this technology/technique matures, those obstacles are likely to be overcome. “The technique,” she adds, “can only find what it’s looking for — not necessarily an exact image, but things it’s seen before, like a certain object, or a previously identified person’s face. For example,” she says, “in hours of CCTV footage from a train station with every passerby’s face blurred, it wouldn’t be able to identify every individual. But, if you suspected that a particular person had walked by at a particular time, it could spot that person’s face among the crowd — even in an obfuscated video.”

Facial Recognition, Biometric Signatures, Ushering In A Wave Of New Technologies That Will Make It Much Harder To Stay Anonymous, Or Undercover/Covert

According to a June 3, 2016 article on CBSMarketWatch’s website, there are nearly 250 million video cameras installed throughout the world — and, that’s only the ones that are overt. Facial recognition algorithms have become much more elegant; and, much better at identifying a face in the crowd.

Identity management and biometrics — DNA shedding, iris scanning, body scanning at airports and elsewhere, the texture of one’s veins, ears, hands, etc., digital fingerprints, digital exhaust, and so on — are making it increasingly difficult to protect one’s privacy, or maintain an agent’s undercover status. We could soon be in a situation where we can use a HUMINT operative only once, because a second undercover attempt could prove too dangerous. As The Economist noted in an August 1, 2015 article, “anyone without years of credit-card and mobile phone history, utility bills, could be automatically pinpointed as a potential spy.” For example, the publication noted, “if you claim you are visiting Russia for the first time, but your facial signatures, bone structure, retina scan, or DNA shows that you have been there before” — checkmate. Of course, we can play that game as well.

These advancements in facial recognition technology might even make the late, great actor Lon Chaney jealous. In the 1957 classic Man Of A Thousand Faces, actor James Cagney played the title role in a film that detailed the life of silent movie actor Lon Chaney. Mr. Chaney’s remarkable ability to transform his face and appearance to suit the character he was playing at the time made him a legend on the movie set in the early days of Hollywood — with The Phantom of the Opera and The Hunchback of Notre Dame recognized as perhaps his best work.

Chris Smith, writing in the March 30, 2014 edition of TechRadar, “The Future Of Facial Recognition: Big Brother Or Our New Best Friend?,” writes that, “for better or worse, facial recognition has become the technological elephant in the room. Despite the continued advancements of software and algorithms — that would elicit gasps of awe in some of the other tech sectors, the ever-improving ability for machines to put a name to human faces is considered, in most cases, unwelcome.” “Facebook’s recent unveiling of their DeepFace research paper,” he writes, discusses the “algorithm — which is still seemingly a long way from being integrated into consumer-facing elements of the social network — that uses 3-D analysis of human faces to identify [a person of interest] — with a 97.25 percent success rate (and 99 percent under ideal circumstances); versus our own human-brain/innate accuracy of 97.5 percent.”

Mr. Smith contended at the time he wrote his article 30 months ago that “we’re on the precipice of a coming out party for the technology. Facial recognition has been simmering beneath the surface, reticent, and also unwilling to show its real face to the world, but it is now almost ready to emerge.” It would seem some 30 months later, that time has mostly arrived. V/R, RCP
