22 April 2016


April 19, 2016 
MIT’s Minority-Report Style Algorithm Can Pick Up Suspicious Behavior

Cheyenne Macdonald writes in an April 18, 2016 article on London’s Daily Mail Online that “a new artificial intelligence (AI) system developed by researchers at the Massachusetts Institute of Technology (MIT) merges human and machine capabilities to hunt potential cyber attacks, and weed out false positives. Called AI2, the platform acts as a virtual analyst and has so far proven its ability to detect 85 percent of (cyber) attacks. As the system presents its findings to human analysts, feedback is incorporated to continually improve its detection rates,” Ms. Macdonald wrote.
Currently, “security systems are typically grouped into one of two categories: human, or machine,” Ms. Macdonald noted. “But the new platform, developed by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and machine-learning start-up PatternEx, combines those two worlds. AI2 uses three unsupervised machine-learning techniques to narrow down the amount of information it presents to analysts. Then, it creates a supervised model which continually improves its capabilities. AI2 first combs through massive amounts of data and clusters it into meaningful patterns. This allows it to detect suspicious activity, which is then presented to the human analysts for confirmation. The feedback is then worked into the model to be applied to the next data set.”
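The loop described above — unsupervised outlier scoring to shortlist events, analyst labeling, then supervised retraining — can be sketched roughly as below. This is a hypothetical illustration, not the actual MIT/PatternEx code: the scoring function, the threshold-based “analyst,” and the midpoint decision boundary are all stand-ins chosen for brevity.

```python
# Hypothetical sketch of AI2's human-in-the-loop cycle (NOT the real
# MIT/PatternEx implementation): an unsupervised step narrows a large
# event stream to a short review list, a human labels that list, and a
# supervised model trained on those labels flags future events.
import random
import statistics

random.seed(0)

def outlier_scores(events):
    """Unsupervised step: score each event by its distance from the mean."""
    mean = statistics.fmean(events)
    stdev = statistics.pstdev(events) or 1.0
    return [abs(e - mean) / stdev for e in events]

def top_k_suspicious(events, k):
    """Present only the k most abnormal events to the analyst."""
    scored = sorted(enumerate(outlier_scores(events)), key=lambda t: -t[1])
    return [i for i, _ in scored[:k]]

def analyst_labels(events, indices, threshold=8.0):
    """Stand-in for human feedback: mark events above a threshold as attacks."""
    return {i: events[i] > threshold for i in indices}

def train_supervised(labeled, events):
    """Supervised step: learn a decision boundary from the analyst's labels."""
    attacks = [events[i] for i, is_atk in labeled.items() if is_atk]
    normal = [events[i] for i, is_atk in labeled.items() if not is_atk]
    if not attacks or not normal:
        return None
    return (min(attacks) + max(normal)) / 2  # simple midpoint boundary

# One day's traffic: mostly benign values plus a few injected "attacks".
events = [random.gauss(1.0, 0.5) for _ in range(10_000)] + [9.5, 10.2, 11.0]
review_list = top_k_suspicious(events, k=200)  # day-one review budget
labels = analyst_labels(events, review_list)   # simulated analyst feedback
boundary = train_supervised(labels, events)
detected = [e for e in events if boundary is not None and e > boundary]
```

In a real deployment each retraining pass would use a proper classifier over many features rather than a one-dimensional threshold, but the shape of the feedback loop is the same: every labeled batch shrinks the list the analyst must review the next day.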

“During the tests,” The Daily Mail Online reports, “the system scoured 3.6 billion ‘log lines’ of data; and was able to detect attacks 85 percent of the time.” “You can think about the system as a ‘virtual analyst,’” says CSAIL research scientist Kalyan Veeramachaneni, who developed AI2 with Ignacio Arnaldo, a chief data scientist at PatternEx and former CSAIL postdoc. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates — significantly, and rapidly.”
“On its first day of training,” Ms. Macdonald wrote, “the researchers say AI2 presents the experts with 200 of what it deems to be the most abnormal events. In just a few days, the system will be presenting analysts with only 30 or 40 events — thanks to its self-improving capabilities.”

“This paper brings together the strengths of analyst intuition and machine learning, and ultimately drives down both false positives and false negatives,” said Nitesh Chawla, the Frank M. Freimann Professor of Computer Science at the University of Notre Dame. “This research has the potential to become a line of defense against such attacks as fraud, service abuse, and account takeover, which are major challenges faced by consumer-facing systems.” The system can tackle billions of data points each day, sorting the features into “normal” and “abnormal” groups.

“The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions,” Veeramachaneni says. “The human-machine interaction creates a beautiful cascading effect.”

Pretty impressive, and lots of potential good here. Sure, the system isn’t ‘bullet-proof’ — few things are. But an 85 percent solution is probably a lot better than most companies are currently doing, and is probably good enough for many. How user-friendly this system is, how soon it can actually be in use, etc., were not discussed.
