16 August 2018

DeepLocker: Artificially Enhanced Malware Is Coming; And, It Is The Equivalent Of A Digital…Weapon Of Mass Disruption


I have written several articles in the past year on artificially-enhanced (AI) malware and the profound threat it will pose as we move deeper into late 2018 and into 2019. We are already seeing artificially-enhanced malware for sale in the digital underbelly of the Dark Web, though as of now it isn't cheap to purchase; the really special 'stuff' runs around $25K per copy. Charlie Osborne posted an August 8, 2018 article on the security and technology blog Zero Day, warning that some day soon, a visual image of our face could become the trigger that launches artificially-enhanced malware.

The use of artificial intelligence in the digital realm is a double-edged sword. While AI can be used to "detect and combat malware," Ms. Osborne noted, the darker digital angels of our nature will also exploit this new domain for nefarious purposes. And of course, we here in the U.S. are the most vulnerable to this new, emerging cyber threat. We are a society that is not merely network-enabled, but network-dependent. The more we embrace the Internet-of-Things (IoT), 'smart' cars, 'smart' homes, and so on, the larger our digital footprint becomes; and, the more vulnerable we are to suffering a nasty digital surprise. And the standard anti-virus software most of us employ is little more than a speed-bump when AI-enhanced malware is used.

According to IBM, the 'AI era' could [and likely will] result in weaponized AI. "In order to study how AI could one day become a new tool in the arsenal of [cyber] threat actors, IBM has developed an AI-empowered attack tool," Ms. Osborne wrote. Dubbed DeepLocker, the AI-powered malware is "highly [elegantly/strategically] targeted," according to IBM researchers. "The malware, carried along by systems such as video-teleconferencing software, is dormant until it reaches a specific [targeted/intended] victim, who is identified through factors such as facial recognition, geolocation, voice recognition; and potentially, the analysis of data gleaned from sources such as online trackers and social media," Ms. Osborne explained. Once the target has been acquired, DeepLocker launches its attack. "You can think of this capability as similar to a sniper, in contrast to the 'spray and pray' approach of traditional malware," the IBM researchers said. "It is designed to be stealthy, and fly under the radar, avoiding detection until the very last moment when a specific target has been recognized."
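To make the 'sniper' idea concrete, here is a minimal, deliberately harmless Python sketch of what a target-gated trigger amounts to in principle: the code does nothing until the observed environment matches the intended victim. The names, threshold, and city string are my own hypothetical placeholders, not anything drawn from DeepLocker itself; there is no recognition model and no payload here.

# Toy illustration of the "sniper" gate described above, reduced to plain
# "if this, then that" logic. All values are hypothetical placeholders.

TARGET_THRESHOLD = 0.95  # assumed confidence cut-off for a face match

def check_environment(face_match_score: float, city: str) -> bool:
    """True only when the observed host looks like the intended victim."""
    return face_match_score >= TARGET_THRESHOLD and city == "Intended-City"

def run(face_match_score: float, city: str) -> None:
    if check_environment(face_match_score, city):
        print("conditions met: a targeted tool would act only at this moment")
    else:
        print("conditions not met: stay dormant and look like a benign app")

run(0.10, "Somewhere-Else")   # stays dormant
run(0.99, "Intended-City")    # trigger fires

Written this way, of course, the trigger is trivially readable by any analyst who opens the code, which is exactly the weakness the DNN approach described next is meant to remove.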

"DeepLocker's Deep Neural Network (DNN) model stipulates 'trigger conditions' to execute a payload," Ms. Osborne wrote. "If these conditions are not met, and the target is not found, then the malware remains locked up," which, IBM said, "makes the malware almost impossible to reverse engineer." Finding a target, triggering a key, and executing [deploying] a payload may bring to mind an 'if this, then that' programming model. However, the DNN AI-model is far more convoluted, and difficult to decipher.
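Why does routing the trigger through a DNN make reverse engineering so hard? Because the model's output can itself serve as the key that unlocks the payload, so neither the target nor the unlocked content ever appears in the code in readable form. The toy Python sketch below illustrates that concealment-by-key-derivation idea under my own simplifying assumptions: the 'embedding' is a made-up tuple, the XOR step stands in for a real cipher, and the locked content is a harmless string; none of this is DeepLocker's actual implementation.

# Conceptual sketch: the (quantized) model output is hashed into key material,
# and the payload only unlocks when a live observation reproduces that output.
import hashlib

def derive_key(embedding: tuple) -> bytes:
    """Hash a quantized model output into key material."""
    return hashlib.sha256(repr(embedding).encode()).digest()

def xor(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real cipher."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Lock a harmless message to one specific (hypothetical) embedding.
target_embedding = (3, 141, 59, 26)
locked_blob = xor(b"OK: harmless placeholder payload", derive_key(target_embedding))

# The blob only unlocks when the live output reproduces the key; nothing in the
# code reveals which embedding (i.e., which face) that is.
for observed in [(0, 0, 0, 0), (3, 141, 59, 26)]:
    attempt = xor(locked_blob, derive_key(observed))
    result = attempt.decode() if attempt.startswith(b"OK:") else "still locked"
    print(observed, "->", result)

An analyst staring at code like this sees only an encrypted blob and a hash function; short of presenting the model with the right face, there is no branch condition to read off.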

"What makes this AI-powered malware particularly dangerous," Ms. Osborne warned, is that it empowers a rudimentary, run-of-the-mill hacker to achieve elite status 'instantly,' on par with a nation-state, with the capability to infect millions of systems/devices before it is identified/detected. IBM acknowledged that this type of malware hasn't yet been seen in the wild; but, it is likely only a matter of time, and not very far down the road. I would guess that we'll likely see this kind of malware in the wild sometime in 2019, if not sooner.

What makes AI-enhanced malware particularly worrisome is that it is essentially a digital version of a chameleon. The overwhelming majority of our anti-virus and cyber security/protection software is based on pattern and signature recognition. Think of a cyber virus, and cyber defense, in medical terms: doctors look for viral signatures, or patterns/symptoms, to diagnose what ails us, and that is essentially what most anti-virus and cyber security software does. AI-enhanced malware, however, cleverly changes its pattern and signature in an attempt to avoid detection. And, it hides when it thinks it is under surveillance. AI-enhanced malware will also allow the hacker, or other nefarious entity, not only to clandestinely infect our systems/devices, but also to digitally impersonate/masquerade as us, and put out damaging emails that appear to come from us.
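A small, harmless Python sketch of why signature-based defense is only a speed-bump here: a scanner that recognizes known byte patterns or file hashes misses any variant whose bytes differ, which is exactly what shape-shifting malware exploits. The 'signature database' and 'samples' below are placeholder strings of my own, not real indicators.

# Placeholder demonstration of signature/hash matching and its blind spot.
import hashlib

KNOWN_BAD_HASHES = {hashlib.sha256(b"sample-variant-A").hexdigest()}  # made-up signature database

def signature_scan(file_bytes: bytes) -> str:
    """Flag a file only if its hash matches a previously catalogued sample."""
    return "flagged" if hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES else "looks clean"

print(signature_scan(b"sample-variant-A"))    # flagged: exact match to the known sample
print(signature_scan(b"sample-variant-A2"))   # looks clean: the bytes changed, so the hash changed

Real anti-virus products are more sophisticated than a hash lookup, but the underlying dependence on previously seen patterns is what a constantly mutating, AI-steered threat is built to sidestep.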

Cyber hackers, even those who will be empowered by artificially-enhanced malware, aren't 10 feet tall. But, as with most technological advancements, the darker angels of our nature seem to grab the upper hand in the beginning. Build a cyber 'mouse-trap,' and digital mice will find a way around it. Cyber security and national security professionals have been warning for more than a decade about the potential for a Cyber Pearl Harbor. Artificially-enhanced malware significantly raises the potential that such a threat becomes a reality. The cyber black hats won't always have the advantage. But, this kind of technology is a game-changer; and, it can significantly alter the playing field. A sick and twisted cyber 'Dr. No' will find AI-enhanced malware a true digital Weapon-Of-Mass-Disruption (WMD).
