17 December 2019

Stanford University finds that AI is outpacing Moore’s Law

By Cliff Saran

Stanford University’s AI Index 2019 annual report has found that the speed of artificial intelligence (AI) is outpacing Moore’s Law.



Moore’s Law describes how the number of transistors on a chip doubles roughly every 18 months to two years, which has historically meant that application developers could expect a comparable doubling in application performance for the same hardware cost.

But the Stanford report, produced in partnership with McKinsey & Company, Google, PwC, OpenAI, Genpact and AI21Labs, found that AI computational power is accelerating faster than traditional processor development. “Prior to 2012, AI results closely tracked Moore’s Law, with compute doubling every two years,” the report said. “Post-2012, compute has been doubling every 3.4 months.”
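To make the gap concrete, here is a small illustrative calculation (not from the report itself): comparing compute growth at the Moore's Law pace of doubling every 24 months with the post-2012 AI pace of doubling every 3.4 months, over the roughly seven-year window the report covers.

```python
def growth_factor(months: float, doubling_period_months: float) -> float:
    """Total growth is 2 raised to the number of doublings in the period."""
    return 2 ** (months / doubling_period_months)

years = 7                # roughly 2012 to 2019
months = years * 12

moore = growth_factor(months, 24)    # doubling every two years
ai = growth_factor(months, 3.4)      # doubling every 3.4 months

print(f"Moore's Law pace over {years} years: ~{moore:.0f}x")
print(f"AI compute pace over {years} years:  ~{ai:,.0f}x")
```

At the Moore's Law pace, compute grows by roughly an order of magnitude over seven years; at the 3.4-month doubling pace it grows by tens of millions of times, which is why the report treats the two trends as qualitatively different regimes.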

The study looked at how AI algorithms have improved over time by tracking progress on the ImageNet image classification benchmark. Given that image classification methods are largely based on supervised machine learning techniques, the report’s authors looked at how long it takes to train an AI model and the associated costs, which they said represents a measurement of the maturity of AI development infrastructure, reflecting advances in software and hardware.

Their research found that over 18 months, the time required to train a network on cloud infrastructure for supervised image recognition fell from about three hours in October 2017 to about 88 seconds in July 2019. The report noted that data on ImageNet training time on private cloud instances was in line with the public cloud AI training time improvements.

The report’s authors used the ResNet image classification model to assess how long it takes algorithms to achieve a high level of accuracy. In October 2017, 13 days of training time were required to reach just above 93% accuracy. The report found that training an image classification model over 13 days to achieve 93% accuracy would have cost about $2,323 in 2017.

The study reported that the latest benchmark available on Stanford DAWNBench, using a cloud TPU on Google Cloud Platform (GCP) to run the ResNet model to attain image classification accuracy slightly above 93%, cost just over $12 in September 2018.
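A quick back-of-the-envelope check using the figures above shows the scale of the improvement, in both wall-clock time and dollar cost:

```python
# Figures taken from the report: ImageNet training time fell from ~3 hours
# (Oct 2017) to ~88 seconds (Jul 2019); cost fell from ~$2,323 (2017, 13-day
# run) to just over $12 (Sep 2018, cloud TPU on GCP), both to ~93% accuracy.

time_2017_s = 3 * 60 * 60   # ~3 hours, in seconds
time_2019_s = 88            # ~88 seconds

cost_2017 = 2323.0          # USD
cost_2018 = 12.0            # USD

speedup = time_2017_s / time_2019_s
cost_reduction = cost_2017 / cost_2018

print(f"Training time improvement: ~{speedup:.0f}x")        # ~123x
print(f"Training cost reduction:  ~{cost_reduction:.0f}x")  # ~194x
```

In other words, over roughly 18 months, training the same class of model became about two orders of magnitude faster and cheaper.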


The report also explored how far computer vision had progressed, looking at innovative algorithms that push the limits of automatic activity understanding, which can recognise human actions and activities from videos using the ActivityNet Challenge.

One of the tasks in this challenge, called Temporal Activity Localisation, uses long video sequences that depict more than one activity, and the algorithm is asked to find a given activity. Today, algorithms can accurately recognise hundreds of complex human activities in real time, but the report found that much more work is needed.

“After organising the International Activity Recognition Challenge (ActivityNet) for the last four years, we observe that more research is needed to develop methods that can reliably discriminate activities, which involve fine-grained motions and/or subtle patterns in motion cues, objects and human-object interactions,” said Bernard Ghanem, associate professor of electrical engineering at King Abdullah University of Science and Technology, in the report.

“Looking forward, we foresee the next generation of algorithms to be one that accentuates learning without the need for excessively large manually curated data. In this scenario, benchmarks and competitions will remain a cornerstone to track progress in this self-learning domain.”
